Federal Register Volume 81, Issue 214 (November 4, 2016)

Page Range: 76843-78019

Page and Subject
81 FR 76974 - Sunshine Act Meeting Notice
81 FR 76950 - Center for Scientific Review; Notice of Closed Meetings
81 FR 76914 - Grant of Authority; Establishment of a Foreign-Trade Zone, Under the Alternative Site Framework, Vancouver, Washington
81 FR 76916 - 1-Hydroxyethylidene-1, 1-Diphosphonic Acid From the People's Republic of China: Affirmative Preliminary Determination of Sales at Less Than Fair Value, and Postponement of Final Determination
81 FR 76919 - Quarterly Update to Annual Listing of Foreign Government Subsidies on Articles of Cheese Subject to an In-Quota Rate of Duty
81 FR 76920 - Antidumping or Countervailing Duty Order, Finding, or Suspended Investigation; Opportunity To Request Administrative Review
81 FR 76915 - Approval of Subzone Status; Westlake Chemical Corporation; Sulphur, Louisiana
81 FR 76915 - Foreign-Trade Zone 44-Morris County, New Jersey; Application for Subzone; AGFA Corporation; Branchburg, New Jersey
81 FR 76945 - Agency Information Collection Activities: Proposed Collection; Comment Request
81 FR 76946 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
81 FR 76957 - 60-Day Notice of Proposed Information Collection: Improving the Speed of Housing Recovery Program Launch After Severe Disasters
81 FR 76914 - Foreign-Trade Zone (FTZ) 80-San Antonio, Texas; Notification of Proposed Production Activity; CGT U.S., Ltd.; (Polyvinyl Chloride (PVC) Coated Upholstery Fabric Cover Stock); New Braunfels, Texas
81 FR 76962 - Notice of Additional Scoping Meeting for the Columbia River System Operations Environmental Impact Statement
81 FR 76915 - Foreign-Trade Zone (FTZ) 44-Morris County, New Jersey; Authorization of Production Activity; Givaudan Flavors Corporation (Flavor Products); East Hanover, New Jersey
81 FR 76942 - Clean Air Act Advisory Committee (CAAAC): Notice of Meeting
81 FR 76962 - Notice on Outer Continental Shelf Oil and Gas Lease Sales
81 FR 76953 - Florida; Amendment No. 1 to Notice of an Emergency Declaration
81 FR 76927 - 36(b)(1) Arms Sales Notification
81 FR 76942 - Environmental Impact Statements; Notice of Availability
81 FR 76968 - Proposed Extension of Information Collection; Health Standards for Diesel Particulate Matter Exposure (Underground Coal Mines)
81 FR 76923 - Procurement List; Addition
81 FR 76923 - Procurement List; Proposed Addition and Deletions
81 FR 76955 - Georgia; Amendment No. 1 to Notice of an Emergency Declaration
81 FR 76955 - Florida; Amendment No. 2 to Notice of a Major Disaster Declaration
81 FR 76954 - Wisconsin; Major Disaster and Related Determinations
81 FR 76956 - South Carolina; Amendment No. 4 to Notice of a Major Disaster Declaration
81 FR 76954 - South Carolina; Amendment No. 5 to Notice of a Major Disaster Declaration
81 FR 76994 - National Express LLC-Acquisition of Control-Trinity, Inc., Trinity Cars, Inc., and Trinity Student Delivery, LLC
81 FR 76875 - Fisheries of the Exclusive Economic Zone Off Alaska; Exchange of Flatfish in the Bering Sea and Aleutian Islands Management Area
81 FR 76955 - South Carolina; Amendment No. 6 to Notice of a Major Disaster Declaration
81 FR 76956 - South Carolina; Amendment No. 3 to Notice of a Major Disaster Declaration
81 FR 76954 - South Carolina; Amendment No. 1 to Notice of a Major Disaster Declaration
81 FR 76956 - South Carolina; Amendment No. 2 to Notice of a Major Disaster Declaration
81 FR 76874 - Atlantic Highly Migratory Species; Atlantic Bluefin Tuna Fisheries; 2016 General Category Fishery
81 FR 76861 - Amendments to OFAC Regulations To Remove the Former Liberian Regime of Charles Taylor Sanctions Regulations and References to Fax-on-Demand Service
81 FR 76994 - Notice of Public Meeting
81 FR 76992 - 30-Day Notice of Proposed Information Collection: Statement of Material Change, Merger, Acquisition, or Divestment of a Registered Party
81 FR 76932 - 36(b)(1) Arms Sales Notification
81 FR 76918 - Meeting of the United States Travel and Tourism Advisory Board
81 FR 76934 - 36(b)(1) Arms Sales Notification
81 FR 76939 - Innovative Solar 47, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization
81 FR 76939 - Midcontinent Independent System Operator, Inc.; Notice of Institution of Section 206 Proceeding and Refund Effective Date
81 FR 76938 - Quantum Power Corp; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization
81 FR 76941 - Combined Notice of Filings
81 FR 76939 - Combined Notice of Filings #1
81 FR 76965 - Agency Information Collection Activities; Proposed eCollection eComments Requested; Extension of a Currently Approved Collection: Capital Punishment Report of Inmates Under Sentence of Death
81 FR 76963 - Agency Information Collection Activities; Proposed eCollection eComments Requested; Application and Permit for Permanent Exportation of Firearms (National Firearms Act) ATF F 9 (5320.9)
81 FR 76964 - Agency Information Collection Activities; Proposed eCollection eComments Requested; Extension of a Currently Approved Collection: National Drug Threat Survey
81 FR 76944 - Change in Bank Control Notices; Acquisitions of Shares of a Bank or Bank Holding Company
81 FR 76943 - Formations of, Acquisitions by, and Mergers of Bank Holding Companies
81 FR 76936 - Intent To Prepare a Draft Environmental Impact Statement, and Intent To Conduct Public Scoping Meetings for the Upper Susquehanna River Basin Comprehensive Flood Damage Reduction Study, New York
81 FR 76938 - President's Council of Advisors on Science and Technology
81 FR 76927 - Proposed Collection; Comment Request
81 FR 76972 - Privacy Act of 1974, as Amended; System of Records Notice
81 FR 76865 - Special Local Regulations; Key West World Championship, Key West, FL
81 FR 76966 - Notice of Lodging of Proposed Consent Decree Under the Comprehensive Environmental Response, Compensation, and Liability Act
81 FR 76967 - Agency Information Collection Activities; Submission for OMB Review; Comment Request; American Time Use Survey
81 FR 76966 - Federal-State Unemployment Compensation Program: Certifications for 2016 Under the Federal Unemployment Tax Act
81 FR 76960 - Notice of Availability of the Draft Archeological Resources Management Plan, Environmental Impact Statement, Knife River Indian Villages National Historic Site, North Dakota
81 FR 76929 - 36(b)(1) Arms Sales Notification
81 FR 77002 - Sanctions Action Pursuant to Executive Order 13224
81 FR 76913 - Agenda and Notice of Public Meeting of the Maine Advisory Committee; Correction
81 FR 76949 - Withdrawal of 60-Day Notice of Proposed Information Collection: Unaccompanied Children Case Summary Form
81 FR 76937 - Notice of Availability of Record of Decision for the Northwest Training and Testing Final Environmental Impact Statement/Overseas Environmental Impact Statement
81 FR 77001 - Agency Information Collection Activities: Information Collection Renewal; Comment Request; Appraisals for Higher-Priced Mortgage Loans
81 FR 76867 - International Trademark Classification Changes
81 FR 76877 - Energy Conservation Program: Test Procedures for Integrated Light-Emitting Diode Lamps
81 FR 76870 - Suspension of Community Eligibility
81 FR 76924 - Agency Information Collection Activities: Comment Request
81 FR 76922 - Marine Mammals; File No. 15324
81 FR 76922 - U.S. Integrated Ocean Observing System (IOOS®) Advisory Committee
81 FR 76969 - Notice of Intent To Award-Grant Awards for the Provision of Civil Legal Services to Eligible Low-Income Clients Beginning January 1, 2017
81 FR 77000 - Advisory Committee on Transportation Equity
81 FR 76948 - Announcement of the Award of Nine Single-Source Program Expansion Supplement Grants Under the Unaccompanied Children's (UC) Program
81 FR 76925 - Change to the Freight Carrier Registration Program (FCRP) Open Season
81 FR 76912 - Black Hills National Forest Advisory Board
81 FR 76960 - Indian Gaming; Tribal-State Class III Gaming Compact Taking Effect in the State of California
81 FR 76925 - Government-Industry Advisory Panel; Notice of Federal Advisory Committee Meeting
81 FR 76944 - Agency Forms Undergoing Paperwork Reduction Act Review
81 FR 76973 - Business and Operations Advisory Committee; Notice of Meeting
81 FR 76931 - Charter Renewal of Department of Defense Federal Advisory Committees
81 FR 76974 - Advisory Committee for Education and Human Resources; Notice of Meeting
81 FR 77003 - Privacy Act of 1974; Department of the Treasury, Bureau of Engraving and Printing (BEP) -.051-BEP Chief Counsel Files System of Records
81 FR 76911 - Information Collection Request; Inventory Property Management
81 FR 76999 - Notice of Proposed Buy America Public Interest Waiver for Hurricane Sandy Emergency Relief Work Performed for the World Trade Center
81 FR 76866 - Drawbridge Operation Regulations; Tchefuncta River, Madisonville, LA
81 FR 76889 - Drawbridge Operation Regulations; Tchefuncta River, Madisonville, LA
81 FR 76997 - Notice of Proposed Buy America Waiver for Replacement Parts on Diesel Multiple Unit Rail Vehicles
81 FR 76953 - National Offshore Safety Advisory Committee; Vacancies
81 FR 76913 - Notice of Petitions by Firms for Determination of Eligibility To Apply for Trade Adjustment Assistance
81 FR 76962 - Notice of Receipt of Complaint; Solicitation of Comments Relating to the Public Interest
81 FR 76991 - Florida Disaster Number FL-00121
81 FR 76977 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating to the Listing and Trading of Shares of the ForceShares Daily 4X US Market Futures Long Fund and ForceShares Daily 4X US Market Futures Short Fund Under Commentary .02 to NYSE Arca Equities Rule 8.200
81 FR 76975 - Order Granting a Limited Exemption From Rule 102 of Regulation M Concerning NASDAQ Stock Market LLC's New Product Support Incentives Pursuant to Regulation M Rule 102(e)
81 FR 76988 - Self-Regulatory Organizations; Miami International Securities Exchange LLC; Notice of Filing and Immediate Effectiveness of a Proposed Rule Change To Amend Its Fee Schedule
81 FR 76987 - Self-Regulatory Organizations; ICE Clear Credit LLC; Order Approving Proposed Rule Change To Provide for the Clearance of Additional Credit Default Swap Contracts
81 FR 76990 - Florida Disaster Number FL-00120
81 FR 76991 - Military Reservist Economic Injury Disaster Loans Interest Rate for First Quarter FY 2017
81 FR 76990 - South Carolina Disaster Number SC-00041
81 FR 76942 - Information Collections Being Reviewed by the Federal Communications Commission Under Delegated Authority
81 FR 76991 - Florida Disaster Number FL-00120
81 FR 76990 - North Carolina Disaster Number NC-00081
81 FR 76911 - Fremont and Winema Resource Advisory Committee
81 FR 76912 - Fremont and Winema Resource Advisory Committee
81 FR 76924 - Information Collection; Submission for OMB Review, Comment Request
81 FR 76997 - Notice of Cancellation of Environmental Impact Statement for the Norfolk International Airport, Norfolk, Virginia
81 FR 76997 - Passenger Facility Charge (PFC) Program; Draft FAA Order 5500.1B
81 FR 76848 - Airworthiness Directives; The Boeing Company Airplanes
81 FR 76949 - National Institute of Aging (NIA), National Institute of Mental Health (NIMH), and National Center for Advancing Translational Sciences (NCATS): Cooperative Research and Development Agreement (CRADA) and Licensing Opportunity for Ketamine for the Treatment of Depression and Other Anxiety-Related Disorders
81 FR 76951 - Proposed Collection; 60-Day Comment Request; The Atherosclerosis Risk in Communities Study (National Heart Lung and Blood Institute)
81 FR 76958 - Endangered Species; Receipt of Applications for Permit
81 FR 76952 - National Institute of Mental Health Amended Notice of Meeting
81 FR 76952 - Eunice Kennedy Shriver National Institute of Child Health and Human Development; Notice of Meeting
81 FR 76974 - Notice of Permit Modification Received Under the Antarctic Conservation Act of 1978
81 FR 76908 - Fisheries of the Caribbean, Gulf of Mexico, and South Atlantic; Reef Fish Fishery of the Gulf of Mexico; Amendment 43
81 FR 76948 - Submission for OMB Review; Comment Request
81 FR 76891 - Approval and Promulgation of Implementation Plans and Designation of Areas for Air Quality Planning Purposes; Louisiana; Redesignation of Baton Rouge Nonattainment Area, 2008 8-Hour Ozone Nonattainment Area to Attainment
81 FR 76899 - Medicare and Medicaid Programs; Fire Safety Requirements for Certain Dialysis Facilities
81 FR 76863 - Technical Amendments to Various Bank Secrecy Act Regulations
81 FR 76859 - Amendments to the Export Administration Regulations: Update of Arms Embargoes on Cote d'Ivoire, Liberia, Sri Lanka and Vietnam, and Recognition of India as Member of the Missile Technology Control Regime
81 FR 76961 - Record of Decision for Non-Federal Oil and Gas Regulation Revision Environmental Impact Statement (EIS)
81 FR 77972 - General Provisions and Non-Federal Oil and Gas Rights
81 FR 76958 - Federal Property Suitable as Facilities To Assist the Homeless
81 FR 76905 - Proposed Supplementary Rules for Fort Ord National Monument, California
81 FR 76857 - Establishment of Class E Airspace; Camden, AL
81 FR 76858 - Establishment of Class E Airspace; Murray, KY
81 FR 76886 - Proposed Establishment of Class E Airspace, Thermopolis, WY
81 FR 76855 - Amendment of Class D and Class E Airspace; Eugene, OR, and Corvallis, OR
81 FR 76854 - Amendment of Class E Airspace; Albany, OR
81 FR 76888 - Proposed Amendment of Class E Airspace, Monongahela, PA
81 FR 76845 - Airworthiness Directives; Pilatus Aircraft Ltd. Airplanes
81 FR 76883 - Airworthiness Directives; Pilatus Aircraft Ltd. Airplanes
81 FR 78015 - Defense Federal Acquisition Regulation Supplement: Offset Costs (DFARS Case 2015-D028)
81 FR 78014 - Defense Federal Acquisition Regulation Supplement: Independent Research and Development Expenses (DFARS Case 2016-D017)
81 FR 78012 - Defense Federal Acquisition Regulation Supplement: Pilot Program on Acquisition of Military Purpose Nondevelopmental Items (DFARS Case 2016-D014)
81 FR 78011 - Defense Federal Acquisition Regulation Supplement: Contiguous United States (DFARS Case 2016-D005)
81 FR 78008 - Defense Federal Acquisition Regulation Supplement: Enhancing the Effectiveness of Independent Research and Development (DFARS Case 2016-D002)
81 FR 76885 - Airworthiness Directives; Turbomeca S.A. Turboshaft Engines
81 FR 76843 - Airworthiness Directives; Saab AB, Saab Aeronautics (Formerly Known as Saab AB, Saab Aerosystems) Airplanes
81 FR 77834 - Medicare Program; End-Stage Renal Disease Prospective Payment System, Coverage and Payment for Renal Dialysis Services Furnished to Individuals With Acute Kidney Injury, End-Stage Renal Disease Quality Incentive Program, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program Bid Surety Bonds, State Licensure and Appeals Process for Breach of Contract Actions, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program and Fee Schedule Adjustments, Access to Care Issues for Durable Medical Equipment; and the Comprehensive End-Stage Renal Disease Care Model
81 FR 76851 - Airworthiness Directives; The Boeing Company Airplanes
81 FR 77008 - Medicare Program; Merit-Based Incentive Payment System (MIPS) and Alternative Payment Model (APM) Incentive Under the Physician Fee Schedule, and Criteria for Physician-Focused Payment Models


81 214 Friday, November 4, 2016 Contents Agriculture Agriculture Department See

Farm Service Agency

See

Forest Service

Alcohol Tobacco Firearms Alcohol, Tobacco, Firearms, and Explosives Bureau NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals: Application and Permit for Permanent Exportation of Firearms, 76963-76964 2016--26704 Army Army Department NOTICES Change to the Freight Carrier Registration Program Open Season, 76925 2016--26672 Consumer Financial Protection Bureau of Consumer Financial Protection NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals, 76924 2016--26678 Centers Disease Centers for Disease Control and Prevention NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals, 76944-76945 2016--26667 Centers Medicare Centers for Medicare & Medicaid Services RULES Medicare Program: End-Stage Renal Disease Prospective Payment System, etc., 77834-77969 2016--26152 Merit-Based Incentive Payment System and Alternative Payment Model Incentive under the Physician Fee Schedule, and Criteria for Physician-Focused Payment Models, 77435-77831 2016--25240 PROPOSED RULES Medicare and Medicaid Programs: Fire Safety Requirements for Certain Dialysis Facilities, 76899-76904 2016--26583 NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals, 76945-76947 2016--26743 2016--26745 Children Children and Families Administration NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals: Tribal Child Support Enforcement Direct Funding Request, 76948-76949 2016--26615 Unaccompanied Children Case Summary Form; Withdrawal, 76949 2016--26686 Single-Source Grants: Unaccompanied Children's Program, 76948 2016--26673 Civil Rights Civil Rights Commission NOTICES Meetings: Maine Advisory Committee; Correction, 76913 2016--26687 Coast Guard Coast Guard RULES Drawbridge Operations: Tchefuncta River, Madisonville, LA, 76866-76867 2016--26655 Special Local Regulations: Key West World Championship, Key West, FL, 76865-76866 2016--26695 PROPOSED RULES Drawbridge Operations: Tchefuncta River, Madisonville, LA, 76889-76891 2016--26654 NOTICES Requests for Nominations: National Offshore Safety Advisory Committee, 76953 2016--26651 Commerce Commerce Department See

Economic Development Administration

See

Foreign-Trade Zones Board

See

Industry and Security Bureau

See

International Trade Administration

See

National Oceanic and Atmospheric Administration

See

Patent and Trademark Office

Committee for Purchase Committee for Purchase From People Who Are Blind or Severely Disabled NOTICES Procurement List; Additions and Deletions, 76923-76924 2016--26731 2016--26732 Comptroller Comptroller of the Currency NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals: Appraisals for Higher-Priced Mortgage Loans, 77001-77002 2016--26683 Corporation Corporation for National and Community Service NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals, 76924-76925 2016--26633 Defense Acquisition Defense Acquisition Regulations System RULES Defense Federal Acquisition Regulation Supplements: Contiguous United States, 78011-78012 2016--26367 Enhancing the Effectiveness of Independent Research and Development, 78008-78011 2016--26366 Pilot Program on Acquisition of Military Purpose Nondevelopmental Items, 78012-78013 2016--26368 PROPOSED RULES Defense Federal Acquisition Regulation Supplements: Independent Research and Development Expenses, 78014-78015 2016--26369 Offset Costs, 78015-78019 2016--26377 Defense Department Defense Department See

Army Department

See

Defense Acquisition Regulations System

See

Engineers Corps

See

Navy Department

NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals, 76927 2016--26697 Arms Sales, 76927-76936 2016--26689 2016--26711 2016--26714 2016--26735 Charter Renewals: Department of Defense Wage Committee, 76931-76932 2016--26665 Meetings: Government-Industry Advisory Panel, 76925-76926 2016--26669
Economic Development Economic Development Administration NOTICES Trade Adjustment Assistance Eligibility; Petitions, 76913-76914 2016--26650 Employment and Training Employment and Training Administration NOTICES Federal-State Unemployment Compensation Program: Certifications for 2016 under the Federal Unemployment Tax Act, 76966-76967 2016--26691 Energy Department Energy Department See

Federal Energy Regulatory Commission

PROPOSED RULES Energy Conservation Program: Test Procedures for Integrated Light-Emitting Diode Lamps, 76877-76882 2016--26681 NOTICES Meetings: President's Council of Advisors on Science and Technology, 76938 2016--26698
Engineers Engineers Corps NOTICES Environmental Impact Statements; Availability, etc.: Upper Susquehanna River Basin Comprehensive Flood Damage Reduction Study; Public Scoping Meetings, 76936-76937 2016--26699 Environmental Protection Environmental Protection Agency PROPOSED RULES Air Quality State Implementation Plans; Approvals and Promulgations: Louisiana; Redesignation of Baton Rouge Nonattainment Area, 2008 8-Hour Ozone Nonattainment Area to Attainment, 76891-76899 2016--26584 NOTICES Environmental Impact Statements; Availability, 76942 2016--26734 Meetings; Clean Air Act Advisory Committee, 76942 2016--26738 Farm Service Farm Service Agency NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals: Inventory Property Management, 76911 2016--26660 Federal Aviation Federal Aviation Administration RULES Airworthiness Directives: Pilatus Aircraft Ltd. Airplanes, 76845-76848 2016--26431 Saab AB, Saab Aeronautics (Formerly Known as Saab AB, Saab Aerosystems) Airplanes, 76843-76845 2016--26327 The Boeing Company Airplanes, 76848-76854 2016--25958 2016--26629 Class D and E Airspace; Amendments: Eugene, OR, and Corvallis, OR, 76855-76857 2016--26439 Class E Airspace; Amendments: Albany, OR, 76854-76855 2016--26437 Class E Airspace; Establishments: Camden, AL, 76857-76858 2016--26455 Murray, KY, 76858-76859 2016--26449 PROPOSED RULES Airworthiness Directives: Pilatus Aircraft Ltd. Airplanes, 76883-76884 2016--26429 Turbomeca S.A. Turboshaft Engines, 76885-76886 2016--26335 Class E Airspace; Amendments: Monongahela, PA, 76888-76889 2016--26436 Class E Airspace; Establishments: Thermopolis, WY, 76886-76888 2016--26440 NOTICES Environmental Impact Statements; Availability, etc.: Norfolk International Airport, Norfolk, VA; Cancellation, 76997 2016--26631 Passenger Facility Charge Programs: Draft FAA Order 5500.1B, 76997 2016--26630 Federal Communications Federal Communications Commission NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals, 76942-76943 2016--26639 Federal Emergency Federal Emergency Management Agency RULES Suspensions of Community Eligibility, 76870-76874 2016--26679 NOTICES Emergency and Related Determinations: Florida; Amendment No. 1, 76953-76954 2016--26736 Georgia; Amendment No. 1, 76955 2016--26730 Major Disaster Declarations: Florida; Amendment No. 2, 76955 2016--26728 South Carolina; Amendment No. 1, 76954 2016--26720 South Carolina; Amendment No. 2, 76956 2016--26719 South Carolina; Amendment No. 3, 76956-76957 2016--26721 South Carolina; Amendment No. 4, 76956 2016--26726 South Carolina; Amendment No. 5, 76954 2016--26725 South Carolina; Amendment No. 
6, 76955-76956 2016--26722 Major Disasters and Related Determinations: Wisconsin, 76954-76955 2016--26727 Federal Energy Federal Energy Regulatory Commission NOTICES Combined Filings, 76939-76942 2016--26706 2016--26707 Initial Market-Based Rate Filings Including Requests for Blanket Section 204 Authorizations: Innovative Solar 47, LLC, 76939 2016--26710 Quantum Power Corp, 76938-76939 2016--26708 Section 206 of the Federal Power Act: Midcontinent Independent System Operator, Inc., 76939 2016--26709 Federal Reserve Federal Reserve System NOTICES Changes in Bank Control: Acquisitions of Shares of a Bank or Bank Holding Company, 76944 2016--26702 Formations of, Acquisitions by, and Mergers of Bank Holding Companies, 76943-76944 2016--26701 Federal Transit Federal Transit Administration NOTICES Buy America Waivers: Hurricane Sandy Emergency Relief Work Performed for the World Trade Center, 76999-77000 2016--26656 Replacement Parts on Diesel Multiple Unit Rail Vehicles, 76997-76999 2016--26653 Financial Crimes Financial Crimes Enforcement Network RULES Bank Secrecy Act Regulations; Technical Amendments, 76863-76865 2016--26557 Fish Fish and Wildlife Service NOTICES Endangered Species Permits; Applications, 76958-76959 2016--26626 Foreign Assets Foreign Assets Control Office RULES Former Liberian Regime of Charles Taylor Sanctions Regulations, 76861-76863 2016--26717 NOTICES Blocking or Unblocking of Persons and Properties, 77002-77003 2016--26688 Foreign Trade Foreign-Trade Zones Board NOTICES Production Activities: CGT U.S., Ltd., Foreign-Trade Zone 80, San Antonio, TX, 76914-76915 2016--26741 Givaudan Flavors Corp., Foreign-Trade Zone 44, Morris County, NJ, 76915-76916 2016--26739 Reorganizations under Alternative Site Framework: Establishment of a Foreign-Trade Zone, Vancouver, WA, 76914 2016--26757 Subzone Applications: AGFA Corp., Foreign-Trade Zone 44, Morris County, NJ, 76915 2016--26746 Subzone Approvals: Westlake Chemical Corp., Sulphur, LA, 76915 2016--26748 Forest Forest Service NOTICES Meetings: Black Hills National Forest Advisory Board, 76912-76913 2016--26671 Fremont and Winema Resource Advisory Committee, 76911-76912 2016--26634 2016--26635 Health and Human Health and Human Services Department See

Centers for Disease Control and Prevention

See

Centers for Medicare & Medicaid Services

See

Children and Families Administration

See

National Institutes of Health

Homeland Homeland Security Department See

Coast Guard

See

Federal Emergency Management Agency

Housing Housing and Urban Development Department NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals: Improving the Speed of Housing Recovery Program Launch after Severe Disasters, 76957-76958 2016--26742 Federal Properties Suitable as Facilities to Assist the Homeless, 76958 2016--26468 Indian Affairs Indian Affairs Bureau NOTICES Indian Gaming: Tribal-State Class III Gaming Compact Taking Effect in the State of California, 76960 2016--26670 Industry Industry and Security Bureau RULES Export Administration Regulations: Update of Arms Embargoes on Cote d'Ivoire, Liberia, Sri Lanka and Vietnam, and Recognition of India as Member of Missile Technology Control Regime, 76859-76861 2016--26535 Interior Interior Department See

Fish and Wildlife Service

See

Indian Affairs Bureau

See

Land Management Bureau

See

National Park Service

See

Ocean Energy Management Bureau

See

Reclamation Bureau

International Trade Adm International Trade Administration NOTICES Antidumping or Countervailing Duty Investigations, Orders, or Reviews: Opportunity to Request Administrative Review, 76920-76922 2016--26749 Determinations of Sales at Less than Fair Value: 1-Hydroxyethylidene-1, 1-Diphosphonic Acid from the People's Republic of China, 76916-76918 2016--26755 Meetings: United States Travel and Tourism Advisory Board, 76918-76919 2016--26713 Quarterly Update to Annual Listing of Foreign Government Subsidies: Articles of Cheese Subject to an In-Quota Rate of Duty, 76919 2016--26751 International Trade Com International Trade Commission NOTICES Complaints: Certain UV Curable Coatings for Optical Fibers, Coated Optical Fibers, and Products Containing Same, 76962-76963 2016--26649 Justice Department Justice Department See

Alcohol, Tobacco, Firearms, and Explosives Bureau

NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals: Capital Punishment Report of Inmates under Sentence of Death, 76965-76966 2016--26705 National Drug Threat Survey, 76964-76965 2016--26703 Consent Decrees: Consent Decrees under CERCLA, 76966 2016--26694
Labor Department Labor Department See

Employment and Training Administration

See

Mine Safety and Health Administration

NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals: American Time Use Survey, 76967-76968 2016--26692
Land Land Management Bureau PROPOSED RULES Public Lands: Fort Ord National Monument, CA, 76905-76908 2016--26457 Legal Legal Services Corporation NOTICES Funding Availability: Grant Awards for the Provision of Civil Legal Services to Eligible Low-Income Clients, 76969-76972 2016--26675 Mine Mine Safety and Health Administration NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals: Health Standards for Diesel Particulate Matter Exposure (Underground Coal Mines), 76968-76969 2016--26733 National Archives National Archives and Records Administration NOTICES Privacy Act; Systems of Records, 76972-76973 2016--26696 National Institute National Institutes of Health NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals: National Heart, Lung, and Blood Institute—The Atherosclerosis Risk in Communities Study, 76951-76952 2016--26627 Cooperative Research and Development Agreements: Ketamine for the Treatment of Depression and other Anxiety-Related Disorders, 76949-76950 2016--26628 Meetings: Center for Scientific Review, 76950-76951 2016--26770 Eunice Kennedy Shriver National Institute of Child Health and Human Development, 76952-76953 2016--26624 National Institute of Mental Health, 76952 2016--26625 National Oceanic National Oceanic and Atmospheric Administration RULES Atlantic Highly Migratory Species: Atlantic Bluefin Tuna Fisheries; 2016 General Category Fishery, 76874-76875 2016--26718 Fisheries of the Exclusive Economic Zone Off Alaska: Exchange of Flatfish in the Bering Sea and Aleutian Islands Management Area, 76875-76876 2016--26723 PROPOSED RULES Fisheries of the Caribbean, Gulf of Mexico, and South Atlantic: Reef Fish Fishery of the Gulf of Mexico; Amendment 43, 76908-76910 2016--26616 NOTICES Meetings: Integrated Ocean Observing System Advisory Committee; Teleconference, 76922-76923 2016--26676 Permits: Marine Mammals; File No. 
15324, 76922 2016--26677 National Park National Park Service RULES General Provisions and Non-Federal Oil and Gas Rights, 77972-78005 2016--26489 NOTICES Environmental Impact Statements; Availability, etc.: Knife River Indian Villages National Historic Site Draft Archeological Resources Management Plan, North Dakota, 76960-76961 2016--26690 Non-Federal Oil and Gas Regulation Revision; Record of Decision, 76961-76962 2016--26492 National Science National Science Foundation NOTICES Antarctic Conservation Act Permits, 76974 2016--26622 Meetings: Advisory Committee for Education and Human Resources, 76974 2016--26664 Business and Operations Advisory Committee, 76973-76974 2016--26666 Navy Navy Department NOTICES Environmental Impact Statements; Availability, etc.: Northwest Training and Testing, 76937-76938 2016--26685 Nuclear Regulatory Nuclear Regulatory Commission NOTICES Meetings; Sunshine Act, 76974-76975 2016--26827 Ocean Energy Management Ocean Energy Management Bureau NOTICES Outer Continental Shelf Oil and Gas Lease Sales: List of Restricted Joint Bidders, 76962 2016--26737 Patent Patent and Trademark Office RULES International Trademark Classification; Changes, 76867-76870 2016--26682 Reclamation Reclamation Bureau NOTICES Environmental Impact Statements; Availability, etc.: Columbia River System Operations, 76962 2016--26740 Securities Securities and Exchange Commission NOTICES Orders: NASDAQ Stock Market LLC, 76975-76977 2016--26646 Self-Regulatory Organizations; Proposed Rule Changes: ICE Clear Credit LLC, 76987-76988 2016--26644 Miami International Securities Exchange LLC, 76988-76990 2016--26645 NYSE Arca, Inc., 76977-76986 2016--26647 Small Business Small Business Administration NOTICES Disaster Declarations: Florida, 76991-76992 2016--26636 2016--26638 Florida; Amendment 1, 76990-76991 2016--26643 Florida; Amendment 3, 76991 2016--26648 North Carolina; Amendment 9, 76990 2016--26637 South Carolina; Amendment 1, 76990 2016--26641 Military Reservist Economic Injury Disaster Loans: Interest Rate for First Quarter FY 2017, 76991 2016--26642 State Department State Department NOTICES Agency Information Collection Activities; Proposals, Submissions, and Approvals: Statement of Material Change, Merger, Acquisition, or Divestment of a Registered Party, 76992-76994 2016--26715 Meetings: Preparations for the International Maritime Organization Council, 76994 2016--26716 Surface Transportation Surface Transportation Board NOTICES Acquisition of Control Exemptions: National Express LLC, over Trinity, Inc., Trinity Cars, Inc., and Trinity Student Delivery, LLC, 76994-76997 2016--26724 Transportation Department Transportation Department See

Federal Aviation Administration

See

Federal Transit Administration

NOTICES Meetings: Advisory Committee on Transportation Equity, 77000-77001 2016--26674
Treasury Treasury Department See

Comptroller of the Currency

See

Financial Crimes Enforcement Network

See

Foreign Assets Control Office

NOTICES Privacy Act; Systems of Records, 77003-77006 2016--26661
Separate Parts In This Issue Part II Health and Human Services Department, Centers for Medicare & Medicaid Services, 77435-77831 2016--25240 Part III Health and Human Services Department, Centers for Medicare & Medicaid Services, 77834-77969 2016--26152 Part IV Interior Department, National Park Service, 77972-78005 2016--26489 Part V Defense Department, Defense Acquisition Regulations System, 78008-78019 2016--26367 2016--26366 2016--26368 2016--26369 2016--26377 Reader Aids

Consult the Reader Aids section at the end of this issue for phone numbers, online resources, finding aids, and notice of recently enacted public laws.

To subscribe to the Federal Register Table of Contents electronic mailing list, go to https://public.govdelivery.com/accounts/USGPOOFR/subscriber/new, enter your e-mail address, then follow the instructions to join, leave, or manage your subscription.

Vol. 81, No. 214, Friday, November 4, 2016

Rules and Regulations

DEPARTMENT OF TRANSPORTATION

Federal Aviation Administration

14 CFR Part 39

[Docket No. FAA-2015-6544; Directorate Identifier 2014-NM-198-AD; Amendment 39-18704; AD 2016-22-15]

RIN 2120-AA64

Airworthiness Directives; Saab AB, Saab Aeronautics (Formerly Known as Saab AB, Saab Aerosystems) Airplanes

AGENCY:

Federal Aviation Administration (FAA), Department of Transportation (DOT).

ACTION:

Final rule.

SUMMARY:

We are superseding Airworthiness Directive (AD) 2012-24-06 for certain Saab AB, Saab Aeronautics Model 340A (SAAB/SF340A) and SAAB 340B airplanes. AD 2012-24-06 required replacing the stall warning computer (SWC) with a new SWC that provides an artificial stall warning in icing conditions, and modifying the airplane for the replacement of the SWC. This new AD adds airplanes to the applicability, and adds requirements to replace the existing SWCs with new, improved SWCs, and to modify the airplane for the new replacement of the SWC. This new AD also reduces the compliance time for replacing the SWCs. This AD was prompted by a determination that airplanes with certain modifications were excluded from the applicability in AD 2012-24-06, and are affected by the identified unsafe condition; and that the SWC required by AD 2012-24-06 contained erroneous logic. We are issuing this AD to prevent natural stall events during operation in icing conditions, which could result in loss of control of the airplane.

DATES:

This AD is effective December 9, 2016.

The Director of the Federal Register approved the incorporation by reference of certain publications listed in this AD as of December 9, 2016.

ADDRESSES:

For service information identified in this final rule, contact Saab AB, Saab Aeronautics, SE-581 88, Linköping, Sweden; telephone +46 13 18 5591; fax +46 13 18 4874; email [email protected]; Internet http://www.saabgroup.com. You may view this referenced service information at the FAA, Transport Airplane Directorate, 1601 Lind Avenue SW., Renton, WA. For information on the availability of this material at the FAA, call 425-227-1221. It is also available on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2015-6544.

Examining the AD Docket

You may examine the AD docket on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2015-6544; or in person at the Docket Management Facility between 9 a.m. and 5 p.m., Monday through Friday, except Federal holidays. The AD docket contains this AD, the regulatory evaluation, any comments received, and other information. The address for the Docket Office (telephone 800-647-5527) is Docket Management Facility, U.S. Department of Transportation, Docket Operations, M-30, West Building Ground Floor, Room W12-140, 1200 New Jersey Avenue SE., Washington, DC 20590.

FOR FURTHER INFORMATION CONTACT:

Shahram Daneshmandi, Aerospace Engineer, International Branch, ANM-116, Transport Airplane Directorate, FAA, 1601 Lind Avenue SW., Renton, WA 98057-3356; telephone 425-227-1112; fax 425-227-1149.

SUPPLEMENTARY INFORMATION:

Discussion

We issued a supplemental notice of proposed rulemaking (SNPRM) to amend 14 CFR part 39 to supersede AD 2012-24-06, Amendment 39-17276 (77 FR 73279, December 10, 2012) (“AD 2012-24-06”). AD 2012-24-06 applied to certain Saab AB, Saab Aeronautics Model 340A (SAAB/SF340A) and SAAB 340B airplanes. The SNPRM published in the Federal Register on July 12, 2016 (81 FR 45072) (“the SNPRM”). We preceded the SNPRM with a notice of proposed rulemaking (NPRM) that published in the Federal Register on December 17, 2015 (80 FR 78699) (“the NPRM”). The NPRM was prompted by a determination that airplanes with certain modifications were excluded from the applicability in AD 2012-24-06, and are affected by the identified unsafe condition; and the SWC required by AD 2012-24-06 contained erroneous logic. The NPRM proposed to add airplanes to the applicability and to add requirements to replace the existing SWCs with new, improved SWCs and to modify the airplane for the new replacement of the SWC. The SNPRM proposed to reduce the compliance time for replacing the SWCs. We are issuing this AD to prevent natural stall events during operation in icing conditions, which could result in loss of control of the airplane.

The European Aviation Safety Agency (EASA), which is the Technical Agent for the Member States of the European Union, has issued EASA Airworthiness Directive 2014-0218, dated September 29, 2014 (referred to after this as the Mandatory Continuing Airworthiness Information, or “the MCAI”), to correct an unsafe condition on certain Saab AB, Saab Aeronautics Model 340A (SAAB/SF340A) and SAAB 340B airplanes. The MCAI states:

A few natural stall events, specifically when operating in icing conditions, have been experienced on SAAB 340 series aeroplanes, without receiving a prior stall warning.

This condition, if not corrected, could result in loss of control of the aeroplane.

To address this potential unsafe condition, SAAB developed a modified stall warning system, incorporating improved stall warning logic, and issued Service Bulletin (SB) 340-27-098 and SB 340-27-099, providing instructions to replace the Stall Warning Computer (SWC) with a new SWC, and instructions to activate the new SWC. The new system included stall warning curves optimized for operation in icing conditions, which are activated by selection of Engine Anti-Ice.

Consequently, EASA issued AD 2011-0219 to require installation of the improved SWC.

After that [EASA] AD was issued, in-service experience with the improved stall warning system revealed cases of premature stall warning activation during the take-off phase. In numerous recorded cases, the onset of stall warning occurred without the 6 minute delay after weight off wheels.

This condition, if not corrected, could lead to premature stick shaker activation and consequent increase in pilot workload during the take-off phase, possibly resulting in reduced control of the aeroplane.

To correct this unsafe condition, EASA issued AD 2013-0254 retaining the requirements of EASA AD 2011-0219, which was superseded, to require deactivation of the ice speed curves in the improved SWC on SAAB 340 aeroplanes, in accordance with SAAB SB 340-27-116.

Since EASA AD 2013-0254 was issued, SAAB developed a technical solution to eliminate the premature activation of the stall warning ice curves and issued SB 340-27-120 (modification of the existing Stall Warning System installation), SB 340-27-121 (activation of improved SWC for aeroplanes with a basic wing tip) and SB 340-27-122 (activation of improved SWC for aeroplanes with an extended wing tip). SAAB SB 340-27-120 provides modification and installation instructions valid for pre- and post-SB 340-27-097, 340-27-098, SB 340-27-099 and SB 340-27-116 aeroplanes. For aeroplanes modified in accordance with SAAB AB mod. No. 2650 and/or mod. No. 2859 which are no longer registered in Canada, SAAB AB issued SAAB AB SB 340-27-109 to provide modification and installation instructions to remove the ice speed curve function.

For the reasons described above, this [EASA] AD retains the requirements of EASA AD 2013-0254, which is superseded, and requires modification of the Stall Warning and Identification System and replacement of the SWC with an improved unit.

You may examine the MCAI in the AD docket on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2015-6544.

Comments

We gave the public the opportunity to participate in developing this AD. We received no comments on the SNPRM or on the determination of the cost to the public.

Conclusion

We reviewed the available data and determined that air safety and the public interest require adopting this AD as proposed, except for minor editorial changes. We have determined that these minor changes:

• Are consistent with the intent that was proposed in the SNPRM for correcting the unsafe condition; and

• Do not add any additional burden upon the public than was already proposed in the SNPRM.

Related Service Information Under 1 CFR Part 51

Saab AB, Saab Aeronautics has issued the following service information:

• Saab Service Bulletin 340-27-109, dated April 14, 2014.

• Saab Service Bulletin 340-27-116, dated October 18, 2013.

• Saab Service Bulletin 340-27-120, dated July 11, 2014.

• Saab Service Bulletin 340-27-121, dated July 11, 2014.

• Saab Service Bulletin 340-27-122, dated July 11, 2014.

The service information describes procedures for deactivating the stall warning speed curves in the SWCs for certain airplanes; replacing the existing SWCs with new, improved SWCs; and modifying the airplane for the new replacement of the SWC. These documents are distinct since they apply to different airplane models in different configurations. This service information is reasonably available because the interested parties have access to it through their normal course of business or by the means identified in the ADDRESSES section.

Costs of Compliance

We estimate that this AD affects 105 airplanes of U.S. registry.

The actions required by AD 2012-24-06, and retained in this AD, take about 78 work-hours per product, at an average labor rate of $85 per work-hour. Required parts cost about $33,000 per product. Based on these figures, the estimated cost of the actions that were required by AD 2012-24-06 is $39,630 per product.
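
Based on those figures, the per-product estimate works out to 78 work-hours × $85 per work-hour = $6,630 in labor, plus about $33,000 in parts, for a total of approximately $39,630.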

The new requirement of this AD adds no additional economic burden.

Authority for This Rulemaking

Title 49 of the United States Code specifies the FAA's authority to issue rules on aviation safety. Subtitle I, section 106, describes the authority of the FAA Administrator. “Subtitle VII: Aviation Programs,” describes in more detail the scope of the Agency's authority.

We are issuing this rulemaking under the authority described in “Subtitle VII, Part A, Subpart III, Section 44701: General requirements.” Under that section, Congress charges the FAA with promoting safe flight of civil aircraft in air commerce by prescribing regulations for practices, methods, and procedures the Administrator finds necessary for safety in air commerce. This regulation is within the scope of that authority because it addresses an unsafe condition that is likely to exist or develop on products identified in this rulemaking action.

Regulatory Findings

We determined that this AD will not have federalism implications under Executive Order 13132. This AD will not have a substantial direct effect on the States, on the relationship between the national government and the States, or on the distribution of power and responsibilities among the various levels of government.

For the reasons discussed above, I certify that this AD:

1. Is not a “significant regulatory action” under Executive Order 12866;

2. Is not a “significant rule” under the DOT Regulatory Policies and Procedures (44 FR 11034, February 26, 1979);

3. Will not affect intrastate aviation in Alaska; and

4. Will not have a significant economic impact, positive or negative, on a substantial number of small entities under the criteria of the Regulatory Flexibility Act.

List of Subjects in 14 CFR Part 39

Air transportation, Aircraft, Aviation safety, Incorporation by reference, Safety.

Adoption of the Amendment

Accordingly, under the authority delegated to me by the Administrator, the FAA amends 14 CFR part 39 as follows:

PART 39—AIRWORTHINESS DIRECTIVES

1. The authority citation for part 39 continues to read as follows:

Authority:

49 U.S.C. 106(g), 40113, 44701.

§ 39.13 [Amended]
2. The FAA amends § 39.13 by removing Airworthiness Directive (AD) 2012-24-06, Amendment 39-17276 (77 FR 73279, December 10, 2012), and adding the following new AD:

2016-22-15 Saab AB, Saab Aeronautics: Amendment 39-18704; Docket No. FAA-2015-6544; Directorate Identifier 2014-NM-198-AD.

(a) Effective Date

This AD is effective December 9, 2016.

(b) Affected ADs

This AD replaces AD 2012-24-06, Amendment 39-17276 (77 FR 73279, December 10, 2012) (“AD 2012-24-06”).

(c) Applicability

This AD applies to Saab AB, Saab Aeronautics (formerly known as Saab AB, Saab Aerosystems) Model 340A (SAAB/SF340A) and SAAB 340B airplanes, certificated in any category, as identified in paragraphs (c)(1) and (c)(2) of this AD.

(1) Model 340A (SAAB/SF340A) airplanes, serial numbers 004 through 159 inclusive.

(2) Model SAAB 340B airplanes, serial numbers 160 through 459 inclusive, except serial numbers 170, 342, 362, 363, 367, 372, 379, 385, 395, 405, 409, 431, 441, and 455.
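
For readers who track fleet applicability in maintenance software, the serial-number ranges in paragraphs (c)(1) and (c)(2) above can be expressed as a simple membership check. The following Python sketch is an illustrative aid only, not part of the AD text; the function and variable names are our own labels.

# Illustrative sketch only; restates paragraphs (c)(1) and (c)(2) of this AD.
EXCLUDED_340B_SERIALS = {170, 342, 362, 363, 367, 372, 379, 385, 395,
                         405, 409, 431, 441, 455}

def is_within_applicability(model: str, serial_number: int) -> bool:
    """Return True if the airplane falls within paragraph (c) of this AD."""
    if model == "340A":  # Model 340A (SAAB/SF340A), paragraph (c)(1)
        return 4 <= serial_number <= 159
    if model == "340B":  # Model SAAB 340B, paragraph (c)(2)
        return (160 <= serial_number <= 459
                and serial_number not in EXCLUDED_340B_SERIALS)
    return False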

(d) Subject

Air Transport Association (ATA) of America Code 27, Flight Controls.

(e) Reason

This AD was prompted by a determination that airplanes with certain modifications were excluded from the applicability in AD 2012-24-06, and are affected by the identified unsafe condition; and the stall warning computer (SWC) required by AD 2012-24-06 contained erroneous logic. We are issuing this AD to prevent natural stall events during operation in icing conditions, which could result in loss of control of the airplane.

(f) Compliance

Comply with this AD within the compliance times specified, unless already done.

(g) Deactivation of Stall Speed Curves

For airplanes identified in paragraphs (g)(1) and (g)(2) of this AD: Within 30 days after the effective date of this AD, do the deactivation specified in paragraph (g)(1) or (g)(2) of this AD, as applicable to airplane configuration, in accordance with the Accomplishment Instructions of Saab Service Bulletin 340-27-116, dated October 18, 2013.

(1) For airplanes with a basic wing tip that has been modified using Saab Service Bulletin 340-27-098: Deactivate the stall speed curves in the SWC having part number (P/N) 0020AK6.

(2) For airplanes with an extended wing tip that has been modified using Saab Service Bulletin 340-27-099: Deactivate the stall speed curves in the SWC having P/N 0020AK7.

(h) Replacement of SWCs

Within 3 months after the effective date of this AD: Do the replacement specified in paragraph (h)(1) or (h)(2) of this AD, as applicable.

(1) For airplanes with basic wing tips: Replace all SWCs with new, improved SWCs having P/N 0020AK6-1, in accordance with the Accomplishment Instructions of Saab Service Bulletin 340-27-121, dated July 11, 2014.

(2) For airplanes with extended wing tips: Replace all SWCs with new, improved SWCs having P/N 0020AK7-1, in accordance with the Accomplishment Instructions of Saab Service Bulletin 340-27-122, dated July 11, 2014.

(i) Concurrent Modification

Before or concurrently with the accomplishment of the applicable requirements of paragraph (h) of this AD, do the actions specified in paragraph (i)(1) or (i)(2) of this AD, as applicable to airplane configuration.

(1) For airplanes on which neither Saab AB Modification 2650 nor Modification 2859 is installed: Modify the stall warning and identification system, in accordance with the Accomplishment Instructions of Saab Service Bulletin 340-27-120, dated July 11, 2014.

(2) For airplanes on which either Saab AB Modification 2650 or Modification 2859 is installed, or on which both modifications are installed: Modify the stall warning and identification system, in accordance with the Accomplishment Instructions of Saab Service Bulletin 340-27-109, dated April 14, 2014.
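
The configuration-dependent requirements of paragraphs (g) through (i) can be summarized as a lookup. The Python sketch below is an illustrative aid only, not part of the AD text; the parameter names are our own labels for the configurations described above.

# Illustrative sketch only; summarizes paragraphs (g) through (i) of this AD.
def required_service_bulletins(wing_tip: str,
                               modified_by_sb_098_or_099: bool,
                               has_mod_2650_or_2859: bool) -> dict:
    """wing_tip is 'basic' or 'extended'; flags describe prior modifications."""
    basic = (wing_tip == "basic")
    actions = {
        # Paragraph (h): within 3 months, replace all SWCs with the new,
        # improved units per the applicable service bulletin.
        "swc_replacement_sb": "340-27-121" if basic else "340-27-122",
        "new_swc_part_number": "0020AK6-1" if basic else "0020AK7-1",
        # Paragraph (i): before or concurrently with paragraph (h), modify
        # the stall warning and identification system.
        "concurrent_modification_sb":
            "340-27-109" if has_mod_2650_or_2859 else "340-27-120",
    }
    if modified_by_sb_098_or_099:
        # Paragraph (g): within 30 days, deactivate the stall speed curves
        # in the existing SWC (P/N 0020AK6 or 0020AK7) per SB 340-27-116.
        actions["interim_deactivation_sb"] = "340-27-116"
    return actions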

(j) Parts Installation Prohibitions

After the replacement required by paragraph (h) of this AD, no person may install any SWC having P/N 0020AK, 0020AK1, 0020AK2, 0020AK4, 0020AK6, 0020AK7, or 0020AK3 MOD 1, on any airplane.

(k) Other FAA AD Provisions

The following provisions also apply to this AD:

(1) Alternative Methods of Compliance (AMOCs): The Manager, International Branch, ANM-116, Transport Airplane Directorate, FAA, has the authority to approve AMOCs for this AD, if requested using the procedures found in 14 CFR 39.19. In accordance with 14 CFR 39.19, send your request to your principal inspector or local Flight Standards District Office, as appropriate. If sending information directly to the International Branch, send it to ATTN: Shahram Daneshmandi, Aerospace Engineer, International Branch, ANM-116, Transport Airplane Directorate, FAA, 1601 Lind Avenue SW., Renton, WA 98057-3356; telephone 425-227-1112; fax 425-227-1149. Information may be emailed to: [email protected]. Before using any approved AMOC, notify your appropriate principal inspector, or lacking a principal inspector, the manager of the local flight standards district office/certificate holding district office. The AMOC approval letter must specifically reference this AD.

(2) Contacting the Manufacturer: For any requirement in this AD to obtain corrective actions from a manufacturer, the action must be accomplished using a method approved by the Manager, International Branch, ANM-116, Transport Airplane Directorate, FAA; or the European Aviation Safety Agency (EASA); or Saab AB, Saab Aeronautics' EASA Design Organization Approval (DOA). If approved by the DOA, the approval must include the DOA-authorized signature.

(l) Related Information

Refer to Mandatory Continuing Airworthiness Information (MCAI) EASA Airworthiness Directive 2014-0218, dated September 29, 2014, for related information. This MCAI may be found in the AD docket on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2015-6544.

(m) Material Incorporated by Reference

(1) The Director of the Federal Register approved the incorporation by reference (IBR) of the service information listed in this paragraph under 5 U.S.C. 552(a) and 1 CFR part 51.

(2) You must use this service information as applicable to do the actions required by this AD, unless this AD specifies otherwise.

(i) Saab Service Bulletin 340-27-109, dated April 14, 2014.

(ii) Saab Service Bulletin 340-27-116, dated October 18, 2013.

(iii) Saab Service Bulletin 340-27-120, dated July 11, 2014.

(iv) Saab Service Bulletin 340-27-121, dated July 11, 2014.

(v) Saab Service Bulletin 340-27-122, dated July 11, 2014.

(3) For service information identified in this AD, contact Saab AB, Saab Aeronautics, SE-581 88, Linköping, Sweden; telephone +46 13 18 5591; fax +46 13 18 4874; email [email protected]; Internet http://www.saabgroup.com.

(4) You may view this service information at the FAA, Transport Airplane Directorate, 1601 Lind Avenue SW., Renton, WA. For information on the availability of this material at the FAA, call 425-227-1221.

(5) You may view this service information that is incorporated by reference at the National Archives and Records Administration (NARA). For information on the availability of this material at NARA, call 202-741-6030, or go to: http://www.archives.gov/federal-register/cfr/ibr-locations.html.

Issued in Renton, Washington, on October 25, 2016. Dionne Palermo, Acting Manager, Transport Airplane Directorate, Aircraft Certification Service.
[FR Doc. 2016-26327 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
DEPARTMENT OF TRANSPORTATION

Federal Aviation Administration

14 CFR Part 39

[Docket No. FAA-2016-9356; Directorate Identifier 2016-CE-033-AD; Amendment 39-18701; AD 2016-22-12]

RIN 2120-AA64

Airworthiness Directives; Pilatus Aircraft Ltd. Airplanes

AGENCY:

Federal Aviation Administration (FAA), DOT.

ACTION:

Final rule; request for comments.

SUMMARY:

We are adopting a new airworthiness directive (AD) for all Pilatus Aircraft Ltd. Models PC-6, PC-6-H1, PC-6-H2, PC-6/350, PC-6/350-H1, PC-6/350-H2, PC-6/A, PC-6/A-H1, PC-6/A-H2, PC-6/B-H2, PC-6/B1-H2, PC-6/B2-H2, PC-6/B2-H4, PC-6/C-H2, and PC-6/C1-H2 airplanes. This AD results from mandatory continuing airworthiness information (MCAI) issued by the aviation authority of another country to identify and correct an unsafe condition on an aviation product. The MCAI describes the unsafe condition as wear and cracks on the stabilizer-trim attachment and structural components. We are issuing this AD to require actions to address the unsafe condition on these products.

DATES:

This AD is effective November 4, 2016.

The Director of the Federal Register approved the incorporation by reference of a certain publication listed in this AD as of November 4, 2016.

We must receive comments on this AD by December 19, 2016.

ADDRESSES:

You may send comments by any of the following methods:

Federal eRulemaking Portal: Go to http://www.regulations.gov. Follow the instructions for submitting comments.

Fax: (202) 493-2251.

Mail: U.S. Department of Transportation, Docket Operations, M-30, West Building Ground Floor, Room W12-140, 1200 New Jersey Avenue SE., Washington, DC 20590.

Hand Delivery: U.S. Department of Transportation, Docket Operations, M-30, West Building Ground Floor, Room W12-140, 1200 New Jersey Avenue SE., Washington, DC 20590, between 9 a.m. and 5 p.m., Monday through Friday, except Federal holidays.

For service information identified in this AD, contact Pilatus Aircraft Ltd., Customer Liaison Manager, CH-6371 STANS, Switzerland; telephone: +41 41 619 3333; fax: +41 41 619 7311; Internet: http://www.pilatus-aircraft.com. You may view this referenced service information at the FAA, Small Airplane Directorate, 901 Locust, Kansas City, Missouri 64106. For information on the availability of this material at the FAA, call (816) 329-4148. It is also available on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-9356.

Examining the AD Docket

You may examine the AD docket on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-9356; or in person at the Docket Management Facility between 9 a.m. and 5 p.m., Monday through Friday, except Federal holidays. The AD docket contains this AD, the regulatory evaluation, any comments received, and other information. The street address for the Docket Office (telephone (800) 647-5527) is in the ADDRESSES section. Comments will be available in the AD docket shortly after receipt.

FOR FURTHER INFORMATION CONTACT:

Doug Rudolph, Aerospace Engineer, FAA, Small Airplane Directorate, 901 Locust, Room 301, Kansas City, Missouri 64106; telephone: (816) 329-4059; fax: (816) 329-4090; email: [email protected]

SUPPLEMENTARY INFORMATION:

Discussion

The European Aviation Safety Agency (EASA), which is the Technical Agent for the Member States of the European Community, has issued Emergency AD No. 2016-0202-E, dated October 7, 2016 (referred to after this as “the MCAI”), to correct an unsafe condition for the specified products. The MCAI states:

Wear and cracks on the stabilizer-trim attachment and relevant structural components have been reported on aeroplanes having accomplished Pilatus Service Bulletin (SB) 53-001 Revision 1, as previously required by FOCA AD HB-2005-263.

Subsequent investigation identified that slightly asymmetric installation and/or operational conditions may result in strong stabilizer vibration, causing crack initiation in the stabilizer-trim attachment fitting or connecting piece.

This condition, if not detected and corrected, may lead to a failure of the fitting or connecting piece, possibly resulting in disconnection of the horizontal stabilizer rear attachment, with consequent loss of control of the aeroplane.

To address this potential unsafe condition, Pilatus issued SB No. 53-003 (hereafter referred to as “the SB” in this AD) to provide inspection instructions.

For the reason described above, this AD requires visual and non destructive inspections of the affected stabilizer-trim attachment components and the related parts and structure to detect cracks, and, depending on findings, the replacement of the affected parts. This AD also provides additional requirements for installation of these parts.

You may examine the MCAI on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-9356.

Related Service Information Under 1 CFR Part 51

Pilatus Aircraft Ltd. has issued PC-6 Service Bulletin No. 53-003, Revision 1, dated October 13, 2016. The service information describes procedures for inspecting the stabilizer-trim attachment components and the related parts and structure to detect cracks and replacing all cracked parts. This service information is reasonably available because the interested parties have access to it through their normal course of business or by the means identified in the ADDRESSES section of this AD.

FAA's Determination and Requirements of This AD

This product has been approved by the aviation authority of another country, and is approved for operation in the United States. Pursuant to our bilateral agreement with the State of Design Authority, it has notified us of the unsafe condition described in the MCAI and service information referenced above. We are issuing this AD because we evaluated all information provided by the State of Design Authority and determined the unsafe condition exists and is likely to exist or develop on other products of the same type design.

FAA's Determination of the Effective Date

An unsafe condition exists that requires the immediate adoption of this AD. The FAA has found that the risk to the flying public justifies waiving notice and comment prior to adoption of this rule because failure of the stabilizer control system fitting or connecting piece could result in disconnection of the horizontal stabilizer rear attachment, with consequent loss of control. Therefore, we determined that notice and opportunity for public comment before issuing this AD are impracticable and that good cause exists for making this amendment effective in fewer than 30 days.

Comments Invited

This AD is a final rule that involves requirements affecting flight safety, and we did not precede it by notice and opportunity for public comment. We invite you to send any relevant written data, views, or arguments about this AD. Send your comments to an address listed under the ADDRESSES section. Include “Docket No. FAA-2016-9356; Directorate Identifier 2016-CE-033-AD” at the beginning of your comments. We specifically invite comments on the overall regulatory, economic, environmental, and energy aspects of this AD. We will consider all comments received by the closing date and may amend this AD because of those comments.

We will post all comments we receive, without change, to http://www.regulations.gov, including any personal information you provide. We will also post a report summarizing each substantive verbal contact we receive about this AD.

Costs of Compliance

We estimate that this AD will affect 30 products of U.S. registry. We also estimate that it will take about 5 work-hours per product to comply with the basic inspection requirements of this AD. The average labor rate is $85 per work-hour.

Based on these figures, we estimate the cost of this AD on U.S. operators to be $12,750, or $425 per product.

In addition, we estimate that any necessary follow-on replacement actions will take about 6 work-hours and require parts costing $2,000, for a cost of $2,510 per product. We have no way of determining the number of products that may need these actions.
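The arithmetic behind these estimates is straightforward and can be restated as follows. This is a purely illustrative aid: the figures are those quoted above, while the script and its variable names are ours and are not part of this AD.

# Illustrative re-computation of the cost estimates quoted above (not part of the AD).
affected_airplanes = 30        # U.S.-registered airplanes
inspection_hours = 5           # work-hours per airplane, basic inspection
labor_rate = 85                # dollars per work-hour

cost_per_product = inspection_hours * labor_rate       # $425
fleet_cost = cost_per_product * affected_airplanes     # $12,750

# Follow-on replacement, only where cracks are found (fleet count unknown):
replacement_per_product = 6 * labor_rate + 2_000       # $2,510

print(cost_per_product, fleet_cost, replacement_per_product)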

Paperwork Reduction Act

A federal agency may not conduct or sponsor, and a person is not required to respond to, nor shall a person be subject to penalty for failure to comply with a collection of information subject to the requirements of the Paperwork Reduction Act unless that collection of information displays a current valid OMB control number. The control number for the collection of information required by this AD is 2120-0056. The paperwork cost associated with this AD has been detailed in the Costs of Compliance section of this document and includes time for reviewing instructions, as well as completing and reviewing the collection of information. Therefore, all reporting associated with this AD is mandatory. Comments concerning the accuracy of this burden and suggestions for reducing the burden should be directed to the FAA at 800 Independence Ave., SW., Washington, DC 20591. ATTN: Information Collection Clearance Officer, AES-200.

Authority for This Rulemaking

Title 49 of the United States Code specifies the FAA's authority to issue rules on aviation safety. Subtitle I, section 106, describes the authority of the FAA Administrator. “Subtitle VII: Aviation Programs,” describes in more detail the scope of the Agency's authority.

We are issuing this rulemaking under the authority described in “Subtitle VII, Part A, Subpart III, section 44701: General requirements.” Under that section, Congress charges the FAA with promoting safe flight of civil aircraft in air commerce by prescribing regulations for practices, methods, and procedures the Administrator finds necessary for safety in air commerce. This regulation is within the scope of that authority because it addresses an unsafe condition that is likely to exist or develop on products identified in this rulemaking action.

Regulatory Findings

We determined that this AD will not have federalism implications under Executive Order 13132. This AD will not have a substantial direct effect on the States, on the relationship between the national government and the States, or on the distribution of power and responsibilities among the various levels of government.

For the reasons discussed above, I certify that this AD:

(1) Is not a “significant regulatory action” under Executive Order 12866,

(2) Is not a “significant rule” under the DOT Regulatory Policies and Procedures (44 FR 11034, February 26, 1979),

(3) Will not affect intrastate aviation in Alaska, and

(4) Will not have a significant economic impact, positive or negative, on a substantial number of small entities under the criteria of the Regulatory Flexibility Act.

List of Subjects in 14 CFR Part 39

Air transportation, Aircraft, Aviation safety, Incorporation by reference, Safety.

Adoption of the Amendment

Accordingly, under the authority delegated to me by the Administrator, the FAA amends 14 CFR part 39 as follows:

PART 39—AIRWORTHINESS DIRECTIVES 1. The authority citation for part 39 continues to read as follows: Authority:

49 U.S.C. 106(g), 40113, 44701.

§ 39.13 [Amended]
2. The FAA amends § 39.13 by adding the following new AD: 2016-22-12 Pilatus Aircraft Ltd.: Amendment 39-18701; Docket No. FAA-2016-9356; Directorate Identifier 2016-CE-033-AD. (a) Effective Date

This airworthiness directive (AD) becomes effective November 4, 2016.

(b) Affected ADs

None.

(c) Applicability

(1) This AD applies to PILATUS Models PC-6, PC-6-H1, PC-6-H2, PC-6/350, PC-6/350-H1, PC-6/350-H2, PC-6/A, PC-6/A-H1, PC-6/A-H2, PC-6/B-H2, PC-6/B1-H2, PC-6/B2-H2, PC-6/B2-H4, PC-6/C-H2, and PC-6/C1-H2 airplanes, all manufacturer serial numbers, including MSN 2001 through 2092 (see Note 1 of paragraph (c) of this AD), certificated in any category.

Note 1 of paragraph (c):

For MSN 2001-2092, these airplanes are also identified as Fairchild Republic Company PC-6 airplanes, Fairchild Industries PC-6 airplanes, Fairchild Heli Porter PC-6 airplanes, or Fairchild-Hiller Corporation PC-6 airplanes.

(2) For the purpose of this AD, an “affected part” is any stabilizer-trim attachment component and the related parts and structure, as identified in Pilatus Aircraft Ltd. (Pilatus) PC-6 Service Bulletin (SB) No. 53-003, Revision 1, dated October 13, 2016.

(d) Subject

Air Transport Association of America (ATA) Code 53: Fuselage.

(e) Reason

This AD results from mandatory continuing airworthiness information (MCAI) issued by the aviation authority of another country to identify and correct an unsafe condition on an aviation product. The MCAI describes the unsafe condition as wear and cracks on the stabilizer-trim attachment and structural components. This condition, if not corrected, could cause failure of the stabilizer control system fitting or connecting piece, which could result in disconnection of the horizontal stabilizer rear attachment with consequent loss of control.

(f) Actions and Compliance

Unless already done, do the following actions.

(1) For MSN 337 through 1005 and 2001 through 2092: Before further flight after November 4, 2016 (the effective date of this AD), do a visual inspection of the affected stabilizer-trim attachment and structural components following the Accomplishment Instructions in Pilatus PC-6 SB No. 53-003, Revision 1, dated October 13, 2016.

(2) For MSN 337 through 1005 and 2001 through 2092: Within the next 100 hours time-in-service after November 4, 2016 (the effective date of this AD), do a visual inspection and dye-penetrant or eddy current inspection of the affected stabilizer-trim attachment and structural components following the Accomplishment Instructions in Pilatus PC-6 SB No. 53-003, Revision 1, dated October 13, 2016.

(3) For MSN 337 through 1005 and 2001 through 2092: If any crack is found during any inspection required by paragraphs (f)(1) and (2) of this AD, before further flight, replace the affected part with a serviceable part following the Accomplishment Instructions in Pilatus PC-6 SB No. 53-003, Revision 1, dated October 13, 2016. For the purpose of this AD, a “serviceable part” is an affected part that is new, or has passed an inspection before installation following the Accomplishment Instructions in Pilatus PC-6 SB No. 53-003, Revision 1, dated October 13, 2016.

(4) For MSN 337 through 1005 and 2001 through 2092: Within 10 days after the inspections required by paragraphs (f)(1) and (2) of this AD or within the next 10 days after the effective date of this AD, whichever occurs later, report the results to Pilatus at the address in paragraph (j)(3) of this AD using the Report Form in Pilatus PC-6 SB No. 53-003, Revision 1, dated October 13, 2016.

(5) For all affected MSNs: As of November 4, 2016 (the effective date of this AD), an affected part listed in Pilatus PC-6 SB No. 53-003, Revision 1, dated October 13, 2016, may be installed provided it is a serviceable part. For the purpose of this AD, a “serviceable part” is an affected part that is new, or has passed an inspection before installation following the Accomplishment Instructions in Pilatus PC-6 SB No. 53-003, Revision 1, dated October 13, 2016.

(g) Credit for Actions Done Following Previous Service Information

This AD allows credit for the visual inspection required in paragraph (f)(1) of this AD if done before November 4, 2016 (the effective date of this AD), following Pilatus PC-6 SB No. 53-003, dated October 4, 2016. The dye-penetrant or eddy current inspection must still be done following Pilatus PC-6 SB No. 53-003, Revision 1, dated October 13, 2016.

(h) Other FAA AD Provisions

The following provisions also apply to this AD:

(1) Alternative Methods of Compliance (AMOCs): The Manager, Standards Office, FAA, has the authority to approve AMOCs for this AD, if requested using the procedures found in 14 CFR 39.19. Send information to ATTN: Doug Rudolph, Aerospace Engineer, FAA, Small Airplane Directorate, 901 Locust, Room 301, Kansas City, Missouri 64106; telephone: (816) 329-4059; fax: (816) 329-4090; email: [email protected]. Before using any approved AMOC on any airplane to which the AMOC applies, notify your appropriate principal inspector (PI) in the FAA Flight Standards District Office (FSDO), or lacking a PI, your local FSDO.

(2) Airworthy Product: For any requirement in this AD to obtain corrective actions from a manufacturer or other source, use these actions if they are FAA-approved. Corrective actions are considered FAA-approved if they are approved by the State of Design Authority (or their delegated agent). You are required to assure the product is airworthy before it is returned to service.

(3) Reporting Requirements: For any reporting requirement in this AD, a federal agency may not conduct or sponsor, and a person is not required to respond to, nor shall a person be subject to a penalty for failure to comply with a collection of information subject to the requirements of the Paperwork Reduction Act unless that collection of information displays a current valid OMB Control Number. The OMB Control Number for this information collection is 2120-0056. Public reporting for this collection of information is estimated to be approximately 5 minutes per response, including the time for reviewing instructions, completing and reviewing the collection of information. All responses to this collection of information are mandatory. Comments concerning the accuracy of this burden and suggestions for reducing the burden should be directed to the FAA at: 800 Independence Ave. SW., Washington, DC 20591, Attn: Information Collection Clearance Officer, AES-200.

(i) Related Information

Refer to MCAI European Aviation Safety Agency (EASA) AD No. 2016-0202-E, dated October 7, 2016, and Pilatus Aircraft Ltd. PC-6 Service Bulletin No. 53-003, dated October 4, 2016. You may examine the MCAI on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-9356.

(j) Material Incorporated by Reference

(1) The Director of the Federal Register approved the incorporation by reference (IBR) of the service information listed in this paragraph under 5 U.S.C. 552(a) and 1 CFR part 51.

(2) You must use this service information as applicable to do the actions required by this AD, unless this AD specifies otherwise.

(i) Pilatus Aircraft Ltd. PC-6 Service Bulletin No. 53-003, Revision 1, dated October 13, 2016.

(ii) Reserved.

(3) For Pilatus Aircraft Ltd. service information identified in this AD, contact Pilatus Aircraft Ltd., Customer Liaison Manager, CH-6371 STANS, Switzerland; telephone: +41 41 619 3333; fax: +41 41 619 7311; Internet: http://www.pilatus-aircraft.com.

(4) You may view this service information at the FAA, Small Airplane Directorate, 901 Locust, Kansas City, Missouri 64106. For information on the availability of this material at the FAA, call (816) 329-4148. It is also available on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-9356.

(5) You may view this service information that is incorporated by reference at the National Archives and Records Administration (NARA). For information on the availability of this material at NARA, call 202-741-6030, or go to: http://www.archives.gov/federal-register/cfr/ibr-locations.html.

Issued in Kansas City, Missouri on October 27, 2016. Pat Mullen, Acting Manager, Small Airplane Directorate, Aircraft Certification Service.
[FR Doc. 2016-26431 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
DEPARTMENT OF TRANSPORTATION Federal Aviation Administration 14 CFR Part 39 [Docket No. FAA-2016-9306; Directorate Identifier 2016-NM-169-AD; Amendment 39-18707; AD 2016-22-18] RIN 2120-AA64 Airworthiness Directives; The Boeing Company Airplanes AGENCY:

Federal Aviation Administration (FAA), DOT.

ACTION:

Final rule; request for comments.

SUMMARY:

We are adopting a new airworthiness directive (AD) for all The Boeing Company Model MD-90-30 airplanes. This AD requires a detailed inspection of the forward and aft surfaces on the left and right sides at the cant station 1520 bulkhead for any crack in the upper cap and (cap) doubler, webs and doublers, stiffeners, and the lower tee cap between longerons 3 through 11, and repairs if necessary. This AD was prompted by a report of cracking in various structures in the fuselage cant station 1520 bulkhead. We are issuing this AD to detect and correct cracking in the bulkhead, which could result in reduced structural integrity of the airplane.

DATES:

This AD is effective November 21, 2016.

The Director of the Federal Register approved the incorporation by reference of a certain publication listed in this AD as of November 21, 2016.

We must receive comments on this AD by December 19, 2016.

ADDRESSES:

You may send comments, using the procedures found in 14 CFR 11.43 and 11.45, by any of the following methods:

Federal eRulemaking Portal: Go to http://www.regulations.gov. Follow the instructions for submitting comments.

Fax: 202-493-2251.

Mail: U.S. Department of Transportation, Docket Operations, M-30, West Building Ground Floor, Room W12-140, 1200 New Jersey Avenue SE., Washington, DC 20590.

Hand Delivery: Deliver to Mail address above between 9 a.m. and 5 p.m., Monday through Friday, except Federal holidays.

For service information identified in this final rule, contact Boeing Commercial Airplanes, Attention: Contractual & Data Services (C&DS), 2600 Westminster Blvd., MC 110-SK57, Seal Beach, CA 90740-5600; telephone 562-797-1717; Internet https://www.myboeingfleet.com. You may view this referenced service information at the FAA, Transport Airplane Directorate, 1601 Lind Avenue SW., Renton, WA. For information on the availability of this material at the FAA, call 425-227-1221. It is also available on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-9306.

Examining the AD Docket

You may examine the AD docket on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-9306; or in person at the Docket Management Facility between 9 a.m. and 5 p.m., Monday through Friday, except Federal holidays. The AD docket contains this AD, the regulatory evaluation, any comments received, and other information. The street address for the Docket Office (phone: 800-647-5527) is in the ADDRESSES section. Comments will be available in the AD docket shortly after receipt.

FOR FURTHER INFORMATION CONTACT:

George Garrido, Aerospace Engineer, Airframe Branch, ANM-120L, FAA, Los Angeles Aircraft Certification Office (ACO), 3960 Paramount Boulevard, Lakewood, CA 90712-4137; phone: 562-627-5232; fax: 562-627-5210; email: [email protected].

SUPPLEMENTARY INFORMATION:

Discussion

We have received a report indicating an operator found cracking of various structures in the fuselage cant station 1520 bulkhead on a Model MD-90-30 airplane. The cracks were in the upper left area of the bulkhead, between longerons 5 and 10, in the web, lower tee cap, and the upper cap and (cap) doubler. The affected airplane had accumulated 52,993 total flight hours and 28,718 total flight cycles. Boeing analysis determined that the operational and limit loads cannot duplicate this condition, and the root cause is suspected to be the result of a high load event based on service experience. Cracking of the bulkhead, if not detected and corrected, could result in the inability of the structure to sustain limit loads, and consequent reduced structural integrity of the airplane. We are issuing this AD to correct the unsafe condition on these products.

Related Service Information Under 1 CFR Part 51

We reviewed Boeing Alert Service Bulletin MD90-53A037, dated September 19, 2016. The service information describes procedures for a detailed inspection of the forward and aft surfaces on the left and right sides at cant station 1520 bulkhead for any crack in the upper cap and (cap) doubler, webs and doublers, stiffeners, and the lower tee cap between longerons 3 through 11, and repairs. This service information is reasonably available because the interested parties have access to it through their normal course of business or by the means identified in the ADDRESSES section.

FAA's Determination

We are issuing this AD because we evaluated all the relevant information and determined the unsafe condition described previously is likely to exist or develop in other products of the same type design.

AD Requirements

This AD requires accomplishing the actions specified in the service information described previously, except as discussed under “Differences Between This AD and the Service Information.” For information on the procedures and compliance times, see this service information at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-9306.

Differences Between This AD and the Service Information

Boeing Alert Service Bulletin MD90-53A037, dated September 19, 2016, specifies to contact the manufacturer for certain instructions, but this AD requires accomplishment of repair methods, modification deviations, and alteration deviations in one of the following ways:

• In accordance with a method that we approve; or

• Using data that meet the certification basis of the airplane, and that have been approved by the Boeing Commercial Airplanes Organization Designation Authorization (ODA) that we have authorized to make those findings.

FAA's Justification and Determination of the Effective Date

An unsafe condition exists that requires the immediate adoption of this AD. The FAA has found that the risk to the flying public justifies waiving notice and comment prior to adoption of this rule because undetected cracking of the bulkhead may result in the inability of the structure to sustain limit loads, and consequent reduced structural integrity of the airplane. Therefore, we find that notice and opportunity for prior public comment are impracticable and that good cause exists for making this amendment effective in less than 30 days.

Comments Invited

This AD is a final rule that involves requirements affecting flight safety and was not preceded by notice and an opportunity for public comment. However, we invite you to send any written data, views, or arguments about this AD. Send your comments to an address listed under the ADDRESSES section. Include the docket number FAA-2016-9306 and Directorate Identifier 2016-NM-169-AD at the beginning of your comments. We specifically invite comments on the overall regulatory, economic, environmental, and energy aspects of this AD. We will consider all comments received by the closing date and may amend this AD because of those comments.

We will post all comments we receive, without change, to http://www.regulations.gov, including any personal information you provide. We will also post a report summarizing each substantive verbal contact we receive about this AD.

Costs of Compliance

We estimate that this AD affects 71 airplanes of U.S. registry. We estimate the following costs to comply with this AD:

Estimated Costs

Action: Detailed inspection
Labor cost: 2 work-hours × $85 per hour = $170
Parts cost: $0
Cost per product: $170
Cost on U.S. operators: $12,070
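For readers checking the figures, a minimal sketch of the same arithmetic follows; it is illustrative only, using the numbers from the table above and the fleet estimate in the preceding paragraph, and is not part of the AD.

# Illustrative check of the estimated costs table above (not part of the AD).
affected_airplanes = 71                      # U.S.-registered Model MD-90-30 airplanes
inspection_cost = 2 * 85                     # 2 work-hours x $85/hour = $170 per airplane
fleet_cost = inspection_cost * affected_airplanes
print(inspection_cost, fleet_cost)           # 170 12070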

    We have received no definitive data that would enable us to provide cost estimates for the on-condition actions specified in this AD.

    Authority for This Rulemaking

    Title 49 of the United States Code specifies the FAA's authority to issue rules on aviation safety. Subtitle I, section 106, describes the authority of the FAA Administrator. “Subtitle VII: Aviation Programs” describes in more detail the scope of the Agency's authority.

    We are issuing this rulemaking under the authority described in Subtitle VII, Part A, Subpart III, Section 44701: “General requirements.” Under that section, Congress charges the FAA with promoting safe flight of civil aircraft in air commerce by prescribing regulations for practices, methods, and procedures the Administrator finds necessary for safety in air commerce. This regulation is within the scope of that authority because it addresses an unsafe condition that is likely to exist or develop on products identified in this rulemaking action.

    Regulatory Findings

    This AD will not have federalism implications under Executive Order 13132. This AD will not have a substantial direct effect on the States, on the relationship between the national government and the States, or on the distribution of power and responsibilities among the various levels of government.

    For the reasons discussed above, I certify that this AD:

    (1) Is not a “significant regulatory action” under Executive Order 12866,

    (2) Is not a “significant rule” under DOT Regulatory Policies and Procedures (44 FR 11034, February 26, 1979),

    (3) Will not affect intrastate aviation in Alaska, and

    (4) Will not have a significant economic impact, positive or negative, on a substantial number of small entities under the criteria of the Regulatory Flexibility Act.

    List of Subjects in 14 CFR Part 39

    Air transportation, Aircraft, Aviation safety, Incorporation by reference, Safety.

    Adoption of the Amendment

    Accordingly, under the authority delegated to me by the Administrator, the FAA amends 14 CFR part 39 as follows:

    PART 39—AIRWORTHINESS DIRECTIVES 1. The authority citation for part 39 continues to read as follows: Authority:

    49 U.S.C. 106(g), 40113, 44701.

    § 39.13 [Amended]
    2. The FAA amends § 39.13 by adding the following new airworthiness directive (AD): 2016-22-18 The Boeing Company: Amendment 39-18707; Docket No. FAA-2016-9306; Directorate Identifier 2016-NM-169-AD. (a) Effective Date

    This AD is effective November 21, 2016.

    (b) Affected ADs

    None.

    (c) Applicability

    This AD applies to all The Boeing Company Model MD-90-30 airplanes, certificated in any category.

    (d) Subject

    Air Transport Association (ATA) of America Code 53, Fuselage.

    (e) Unsafe Condition

    This AD was prompted by a report of cracking in various structures in the fuselage cant station 1520 bulkhead. We are issuing this AD to detect and correct cracking in the bulkhead, which could result in reduced structural integrity of the airplane.

    (f) Compliance

    Comply with this AD within the compliance times specified, unless already done.

    (g) Detailed Inspection of the Cant Station 1520 Bulkhead

    At the applicable time specified in paragraph 1.E., “Compliance,” of Boeing Alert Service Bulletin MD90-53A037, dated September 19, 2016, except as required by paragraph (j) of this AD: On the left and right sides at the cant station 1520 bulkhead, do a detailed inspection of the forward and aft surfaces, for any crack in the upper cap and (cap) doubler, webs and doublers, stiffeners, and the lower tee cap between longerons 3 through 11, in accordance with the Accomplishment Instructions of Boeing Alert Service Bulletin MD90-53A037, dated September 19, 2016.

    (h) Repair of Cracks in the Bulkhead Web or Doubler

    If any crack is found in the bulkhead web or doubler, do the repair in accordance with Boeing Alert Service Bulletin MD90-53A037, dated September 19, 2016. Do all repairs before further flight.

    (i) Repair of Non-Web or Non-Doubler Cracks in the Bulkhead

    If any non-web or non-doubler crack is found in the bulkhead, repair before further flight using a method approved in accordance with the procedures specified in paragraph (l) of this AD.

    (j) Service Information Exception

    Where paragraph 1.E., “Compliance,” of Boeing Alert Service Bulletin MD90-53A037, dated September 19, 2016, specifies a compliance time “after the original issue date of this service bulletin,” this AD requires compliance within the specified compliance time after the effective date of this AD.

    (k) Special Flight Permit

    Special flight permits may be issued in accordance with sections 21.197 and 21.199 of the Federal Aviation Regulations (14 CFR 21.197 and 21.199) to operate the airplane to a location where the airplane can be repaired, but if any crack is found as identified in Boeing Alert Service Bulletin MD90-53A037, dated September 19, 2016, concurrence by the Manager, Los Angeles Aircraft Certification Office (ACO), FAA, is required before issuance of the special flight permit.

    (l) Alternative Methods of Compliance (AMOCs)

    (1) The Manager, Los Angeles ACO, FAA, has the authority to approve AMOCs for this AD, if requested using the procedures found in 14 CFR 39.19. In accordance with 14 CFR 39.19, send your request to your principal inspector or local Flight Standards District Office, as appropriate. If sending information directly to the manager of the ACO, send it to the attention of the person identified in paragraph (m) of this AD. Information may be emailed to: [email protected].

    (2) Before using any approved AMOC, notify your appropriate principal inspector, or lacking a principal inspector, the manager of the local flight standards district office/certificate holding district office.

    (3) An AMOC that provides an acceptable level of safety may be used for any repair, modification, or alteration required by this AD if it is approved by the Boeing Commercial Airplanes Organization Designation Authorization (ODA) that has been authorized by the Manager, Los Angeles ACO, to make those findings. To be approved, the repair method, modification deviation, or alteration deviation must meet the certification basis of the airplane, and the approval must specifically refer to this AD.

    (4) Except as required by paragraph (j) of this AD: For service information that contains steps that are labeled as Required for Compliance (RC), the provisions of paragraphs (l)(4)(i) and (l)(4)(ii) of this AD apply.

    (i) The steps labeled as RC, including substeps under an RC step and any figures identified in an RC step, must be done to comply with the AD. If a step or substep is labeled “RC Exempt,” then the RC requirement is removed from that step or substep. An AMOC is required for any deviations to RC steps, including substeps and identified figures.

    (ii) Steps not labeled as RC may be deviated from using accepted methods in accordance with the operator's maintenance or inspection program without obtaining approval of an AMOC, provided the RC steps, including substeps and identified figures, can still be done as specified, and the airplane can be put back in an airworthy condition.

    (m) Related Information

    For more information about this AD, contact George Garrido, Aerospace Engineer, Airframe Branch, ANM-120L, FAA, Los Angeles Aircraft Certification Office (ACO), 3960 Paramount Boulevard, Lakewood, CA 90712-4137; phone: 562-627-5232; fax: 562-627-5210; email: [email protected].

    (n) Material Incorporated by Reference

    (1) The Director of the Federal Register approved the incorporation by reference (IBR) of the service information listed in this paragraph under 5 U.S.C. 552(a) and 1 CFR part 51.

    (2) You must use this service information as applicable to do the actions required by this AD, unless the AD specifies otherwise.

    (i) Boeing Alert Service Bulletin MD90-53A037, dated September 19, 2016.

    (ii) Reserved.

    (3) For service information identified in this AD, contact Boeing Commercial Airplanes, Attention: Contractual & Data Services (C&DS), 2600 Westminster Blvd., MC 110-SK57, Seal Beach, CA 90740-5600; telephone 562-797-1717; Internet https://www.myboeingfleet.com.

    (4) You may view this service information at the FAA, Transport Airplane Directorate, 1601 Lind Avenue SW., Renton, WA. For information on the availability of this material at the FAA, call 425-227-1221.

    (5) You may view this service information that is incorporated by reference at the National Archives and Records Administration (NARA). For information on the availability of this material at NARA, call 202-741-6030, or go to: http://www.archives.gov/federal-register/cfr/ibr-locations.html.

    Issued in Renton, Washington, on October 26, 2016. Dionne Palermo, Acting Manager, Transport Airplane Directorate, Aircraft Certification Service.
    [FR Doc. 2016-26629 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
    DEPARTMENT OF TRANSPORTATION Federal Aviation Administration 14 CFR Part 39 [Docket No. FAA-2016-6669; Directorate Identifier 2015-NM-191-AD; Amendment 39-18698; AD 2016-22-09] RIN 2120-AA64 Airworthiness Directives; The Boeing Company Airplanes AGENCY:

    Federal Aviation Administration (FAA), DOT.

    ACTION:

    Final rule.

    SUMMARY:

    We are superseding Airworthiness Directive (AD) 2006-20-11 for certain The Boeing Company Model 757-200, -200CB, and -200PF series airplanes. AD 2006-20-11 required initial and repetitive detailed or high frequency eddy current (HFEC) inspections for cracks around the rivets at the upper fastener row of the skin lap splice of the fuselage, and repair of any crack found. This new AD no longer allows the detailed inspections and instead requires repetitive external HFEC inspections for cracking of the skin lap splices of the fuselage, and repair if necessary. This AD was prompted by an evaluation done by the design approval holder (DAH) indicating that the fuselage skin lap splice is subject to widespread fatigue damage (WFD). We are issuing this AD to detect and correct fatigue cracking at certain skin lap splice locations of the fuselage, which could result in reduced structural integrity and rapid decompression of the airplane.

    DATES:

    This AD is effective December 9, 2016.

    The Director of the Federal Register approved the incorporation by reference of a certain publication listed in this AD as of December 9, 2016.

    The Director of the Federal Register approved the incorporation by reference of a certain other publication listed in this AD as of November 8, 2006 (71 FR 58485, October 4, 2006).

    ADDRESSES:

    For service information identified in this final rule, contact Boeing Commercial Airplanes, Attention: Contractual & Data Services (C&DS), 2600 Westminster Blvd., MC 110-SK57, Seal Beach, CA 90740; telephone 562-797-1717; Internet https://www.myboeingfleet.com. You may view this referenced service information at the FAA, Transport Airplane Directorate, 1601 Lind Avenue SW., Renton, WA. For information on the availability of this material at the FAA, call 425-227-1221. It is also available on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-6669.

    Examining the AD Docket

    You may examine the AD docket on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-6669; or in person at the Docket Management Facility between 9 a.m. and 5 p.m., Monday through Friday, except Federal holidays. The AD docket contains this AD, the regulatory evaluation, any comments received, and other information. The address for the Docket Office (phone: 800-647-5527) is Docket Management Facility, U.S. Department of Transportation, Docket Operations, M-30, West Building Ground Floor, Room W12-140, 1200 New Jersey Avenue SE., Washington, DC 20590.

    FOR FURTHER INFORMATION CONTACT:

    Eric Schrieber, Aerospace Engineer, Airframe Branch, ANM-120L, FAA, Los Angeles Aircraft Certification Office (ACO), 3960 Paramount Boulevard, Lakewood, CA 90712-4137; phone: 562-627-5348; fax: 562-627-5210; email: [email protected].

    SUPPLEMENTARY INFORMATION:

    Discussion

    We issued a notice of proposed rulemaking (NPRM) to amend 14 CFR part 39 to supersede AD 2006-20-11, Amendment 39-14781 (71 FR 58485, October 4, 2006) (“AD 2006-20-11”). AD 2006-20-11 applied to certain The Boeing Company Model 757-200, -200CB, and -200PF series airplanes. The NPRM published in the Federal Register on May 12, 2016 (81 FR 29508) (“the NPRM”). The NPRM was prompted by an evaluation done by the DAH indicating that the fuselage skin lap splice is subject to WFD. The NPRM proposed to require repetitive external HFEC inspections for cracking of the skin lap splices of the fuselage, and repair if necessary. We are issuing this AD to detect and correct fatigue cracking at certain skin lap splice locations of the fuselage, which could result in reduced structural integrity and rapid decompression of the airplane.

    Comments

    We gave the public the opportunity to participate in developing this AD. The following presents the comments received on the NPRM and the FAA's response to each comment.

    Support of the NPRM

    FedEx provided comments that supported the intent of the NPRM.

    Request To Change Compliance Time

    Boeing and United Airlines (UA) asked that we change the compliance time for the repetitive HFEC inspections specified in paragraph (j) of the proposed AD. Boeing learned that some operators began doing inspections long before the 37,500-flight-cycle threshold was attained. Boeing stated that the compliance table in Boeing Special Attention Service Bulletin 757-53-0090, Revision 1, dated November 19, 2015, provided grace periods for doing the HFEC inspections after doing previous inspections, but did not provide for previous inspections being done within the grace period or before the required threshold of 37,500 flight cycles, whichever occurs later. Boeing added that, as written, the service information specifies repetitive inspections within 3,000 flight cycles after any previous detailed inspection and within 12,000 flight cycles after any previous HFEC inspection—even if the interval occurred before the 37,500-flight-cycle threshold.

    UA stated that if an operator decided to proactively accomplish either a detailed or HFEC inspection before the specified compliance time in, and in accordance with, either Boeing Special Attention Service Bulletin 757-53-0090, dated June 2, 2005, or Boeing Special Attention Service Bulletin 757-53-0090, Revision 1, dated November 19, 2015, then the inspection would have to be repeated within 3,000 or 12,000 flight cycles, depending on which inspection was previously done. UA stated that this compliance time could be much sooner than the intended 37,500 flight cycles. UA noted that it discussed this problem with Boeing and hoped it could be clarified in the NPRM.

    We agree with the commenters' requests to change the compliance time for the repetitive HFEC inspections specified in paragraph (j) of this AD. According to the proposed AD, operators that accomplished the inspections early would be required to do the inspections before reaching the inspection threshold specified in paragraph (j) of the proposed AD. It was not the intent of Boeing or the FAA to require that the airplane be inspected prior to reaching the required threshold. Therefore, we have added new paragraphs (j)(1) and (j)(2) to this AD to include the additional compliance times.
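To make the effect of the added paragraphs concrete, the sketch below computes the latest compliance point under the “whichever occurs later” wording of paragraphs (j)(1) and (j)(2) of this AD. It is a reading aid only, with example cycle counts of our own choosing; it is not FAA guidance and not a substitute for the AD text.

# Reading aid only: the "whichever occurs later" compliance time added in
# paragraphs (j)(1) and (j)(2) of this AD (example cycle counts are ours).
def hfec_due_point(cycles_at_prior_inspection, prior_inspection):
    """Total flight cycles by which the repeat HFEC inspection is due: the later
    of the 37,500-cycle threshold or the grace period after the prior inspection
    (3,000 cycles after a detailed inspection, 12,000 after an HFEC inspection)."""
    grace = 3_000 if prior_inspection == "detailed" else 12_000
    return max(37_500, cycles_at_prior_inspection + grace)

# An HFEC inspection done early, at 20,000 total flight cycles, no longer pulls
# the repeat inspection forward of the 37,500-cycle threshold:
print(hfec_due_point(20_000, "hfec"))       # 37500
print(hfec_due_point(36_000, "detailed"))   # 39000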

    Effect of Winglets on Accomplishment of the Proposed Actions

    Aviation Partners Boeing stated that accomplishing the supplemental type certificate (STC) ST01518SE does not affect compliance with the actions specified in the NPRM.

    We agree with the commenter. We have redesignated paragraph (c) of the proposed AD as (c)(1) and added a new paragraph (c)(2) to this AD to state that installation of STC ST01518SE does not affect the ability to accomplish the actions required by this final rule. Therefore, for airplanes on which STC ST01518SE is installed, a “change in product” alternative method of compliance (AMOC) approval request is not necessary to comply with the requirements of 14 CFR 39.17.

    Request To Include Approved Repairs in Revised Service Information

    UA asked that instructions for approved repairs be incorporated into the next revision of Boeing Special Attention Service Bulletin 757-53-0090, Revision 1, dated November 19, 2015, as an AMOC to the NPRM. UA stated that the lack of approved repairs in the service information adds a significant burden to operators, Boeing Designated Airworthiness Representatives, and the Los Angeles Aircraft Certification Office.

    We acknowledge the commenter's concern. If the service information is revised to include instructions for approved repairs, affected operators may request approval to use the later revision of the referenced service information as an AMOC, under the provisions of paragraph (m) of this AD. We have made no change to this AD in this regard.

    Conclusion

    We reviewed the relevant data, considered the comments received, and determined that air safety and the public interest require adopting this AD with the changes described previously, and minor editorial changes. We have determined that these minor changes:

    • Are consistent with the intent that was proposed in the NPRM for correcting the unsafe condition; and

    • Do not add any burden upon the public beyond what was already proposed in the NPRM.

    We also determined that these changes will not increase the economic burden on any operator or increase the scope of this AD.

    Related Service Information Under 1 CFR Part 51

    We reviewed Boeing Special Attention Service Bulletin 757-53-0090, Revision 1, dated November 19, 2015. The service information describes procedures for repetitive external HFEC inspections for cracking of the skin lap splices of the fuselage. This service information is reasonably available because the interested parties have access to it through their normal course of business or by the means identified in the ADDRESSES section.

    Costs of Compliance

    We estimate that this AD affects 572 airplanes of U.S. registry.

    We estimate the following costs to comply with this AD:

Estimated Costs

Action: Inspections [retained actions from AD 2006-20-11]
Labor cost: Up to 20 work-hours × $85 per hour = up to $1,700 per inspection cycle
Parts cost: $0
Cost per product: Up to $1,700 per inspection cycle
Cost on U.S. operators: Up to $972,400 per inspection cycle

Action: New inspections
Labor cost: Up to 20 work-hours × $85 per hour = up to $1,700 per inspection cycle
Parts cost: $0
Cost per product: Up to $1,700 per inspection cycle
Cost on U.S. operators: Up to $972,400 per inspection cycle
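A minimal sketch of the per-inspection-cycle fleet cost above follows; it is illustrative only, using the figures from the table, and is not part of the AD.

# Illustrative check of the estimated costs table above (not part of the AD).
affected_airplanes = 572                        # U.S.-registered airplanes
max_cost_per_airplane = 20 * 85                 # up to 20 work-hours x $85/hour = $1,700
max_fleet_cost = max_cost_per_airplane * affected_airplanes
print(max_cost_per_airplane, max_fleet_cost)    # 1700 972400 (per inspection cycle)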

    We have received no definitive data that would enable us to provide a cost estimate for the on-condition repairs specified in this AD.

    Authority for This Rulemaking

    Title 49 of the United States Code specifies the FAA's authority to issue rules on aviation safety. Subtitle I, Section 106, describes the authority of the FAA Administrator. Subtitle VII, Aviation Programs, describes in more detail the scope of the Agency's authority.

    We are issuing this rulemaking under the authority described in Subtitle VII, Part A, Subpart III, Section 44701, “General requirements.” Under that section, Congress charges the FAA with promoting safe flight of civil aircraft in air commerce by prescribing regulations for practices, methods, and procedures the Administrator finds necessary for safety in air commerce. This regulation is within the scope of that authority because it addresses an unsafe condition that is likely to exist or develop on products identified in this rulemaking action.

    Regulatory Findings

    We have determined that this AD will not have federalism implications under Executive Order 13132. This AD will not have a substantial direct effect on the States, on the relationship between the national government and the States, or on the distribution of power and responsibilities among the various levels of government.

    For the reasons discussed above, I certify that this AD:

    (1) Is not a “significant regulatory action” under Executive Order 12866,

    (2) Is not a “significant rule” under DOT Regulatory Policies and Procedures (44 FR 11034, February 26, 1979),

    (3) Will not affect intrastate aviation in Alaska, and

    (4) Will not have a significant economic impact, positive or negative, on a substantial number of small entities under the criteria of the Regulatory Flexibility Act.

    List of Subjects in 14 CFR Part 39

    Air transportation, Aircraft, Aviation safety, Incorporation by reference, Safety.

    Adoption of the Amendment

    Accordingly, under the authority delegated to me by the Administrator, the FAA amends 14 CFR part 39 as follows:

    PART 39—AIRWORTHINESS DIRECTIVES 1. The authority citation for part 39 continues to read as follows: Authority:

    49 U.S.C. 106(g), 40113, 44701.

    § 39.13 [Amended]
    2. The FAA amends § 39.13 by removing Airworthiness Directive (AD) 2006-20-11, Amendment 39-14781 (71 FR 58485, October 4, 2006), and adding the following new AD: 2016-22-09 The Boeing Company: Amendment 39-18698; Docket No. FAA-2016-6669; Directorate Identifier 2015-NM-191-AD. (a) Effective Date

    This AD is effective December 9, 2016.

    (b) Affected ADs

    This AD replaces AD 2006-20-11, Amendment 39-14781 (71 FR 58485, October 4, 2006) (“AD 2006-20-11”). This AD affects AD 2006-11-11, Amendment 39-14615 (71 FR 30278, May 26, 2006) (“AD 2006-11-11”).

    (c) Applicability

    (1) This AD applies to The Boeing Company Model 757-200, -200CB, and -200PF series airplanes, certificated in any category, as identified in Boeing Special Attention Service Bulletin 757-53-0090, Revision 1, dated November 19, 2015.

    (2) Installation of Supplemental Type Certificate (STC) ST01518SE (http://rgl.faa.gov/Regulatory_and_Guidance_Library/rgSTC.nsf/0/38B606833BBD98B386257FAA00602538?OpenDocument&Highlight=st01518se) does not affect the ability to accomplish the actions required by this AD. Therefore, for airplanes on which STC ST01518SE is installed, a “change in product” alternative method of compliance (AMOC) approval request is not necessary to comply with the requirements of 14 CFR 39.17.

    (d) Subject

    Air Transport Association (ATA) of America Code 53, Fuselage.

    (e) Unsafe Condition

    This AD was prompted by an evaluation done by the design approval holder indicating that the fuselage skin lap splice is subject to widespread fatigue damage. We are issuing this AD to detect and correct fatigue cracking at certain skin lap splice locations of the fuselage, which could result in reduced structural integrity and rapid decompression of the airplane.

    (f) Compliance

    Comply with this AD within the compliance times specified, unless already done.

    (g) Retained Initial and Repetitive Inspections, With Terminating Action

    This paragraph restates the requirements of paragraph (f) of AD 2006-20-11, with terminating action. Do initial and repetitive detailed or high frequency eddy current (HFEC) inspections for cracking around the rivets at the upper fastener row of the skin lap splice of the fuselage by doing all the actions in accordance with the Accomplishment Instructions of Boeing Special Attention Service Bulletin 757-53-0090, dated June 2, 2005, except as provided by paragraphs (h) and (i) of this AD. Do the inspections at the applicable times specified in Paragraph 1.E., “Compliance,” of Boeing Special Attention Service Bulletin 757-53-0090, dated June 2, 2005; except where Boeing Special Attention Service Bulletin 757-53-0090, dated June 2, 2005, specifies a compliance time “after the original release date of this service bulletin,” this AD requires compliance after November 8, 2006 (the effective date of AD 2006-20-11). Accomplishing an inspection required by paragraph (j) of this AD terminates the inspections required by this paragraph.

    (h) Retained Repair, With No Changes

    This paragraph restates the requirements of paragraph (g) of AD 2006-20-11, with no changes. If any crack is found during any inspection required by paragraph (g) of this AD: Before further flight, repair the crack using a method approved in accordance with the procedures specified in paragraph (m) of this AD.

    (i) Retained Provision Regarding Reporting, With No Changes

    This paragraph restates the provision specified in paragraph (h) of AD 2006-20-11, with no changes. Although Boeing Special Attention Service Bulletin 757-53-0090, dated June 2, 2005, recommends that inspection results be reported to the manufacturer, this AD does not include that requirement.

    (j) New Repetitive Inspections

    At the applicable time specified in table 1 of paragraph 1.E., “Compliance,” of Boeing Special Attention Service Bulletin 757-53-0090, Revision 1, dated November 19, 2015, except as provided by paragraphs (j)(1), (j)(2), and (l)(1) of this AD: Do an external HFEC inspection for cracking of the skin lap splices of the fuselage, in accordance with the Accomplishment Instructions of Boeing Special Attention Service Bulletin 757-53-0090, Revision 1, dated November 19, 2015. Repeat the inspection thereafter at the applicable times specified in table 1 of paragraph 1.E., “Compliance,” of Boeing Special Attention Service Bulletin 757-53-0090, Revision 1, dated November 19, 2015. Doing an inspection required by this paragraph terminates the inspections required by paragraph (g) of this AD.

    (1) For airplanes on which Option 1 (detailed inspection) of Boeing Special Attention Service Bulletin 757-53-0090, dated June 2, 2005, has been done: Repeat the HFEC inspection before the accumulation of 37,500 total flight cycles, or within 3,000 flight cycles after accomplishing the most recent detailed inspection, whichever occurs later.

    (2) For airplanes on which Option 2 (HFEC inspection) of Boeing Special Attention Service Bulletin 757-53-0090, dated June 2, 2005, has been done: Repeat the HFEC inspection before the accumulation of 37,500 total flight cycles, or within 12,000 flight cycles after accomplishing the most recent HFEC inspection, whichever occurs later.

    (k) Repair for Cracking Found During Inspections Required by Paragraph (j) of This AD

    If any cracking is found during any inspection required by paragraph (j) of this AD, repair before further flight using a method approved in accordance with the procedures specified in paragraph (m) of this AD.

    (l) Exceptions to Service Information

    (1) Where Boeing Special Attention Service Bulletin 757-53-0090, Revision 1, dated November 19, 2015, specifies a compliance time “after the Revision 1 date of this service bulletin,” this AD requires compliance within the specified compliance time after the effective date of this AD.

    (2) Although Boeing Special Attention Service Bulletin 757-53-0090, Revision 1, dated November 19, 2015, specifies to contact Boeing for repair instructions, and specifies that action as “RC” (Required for Compliance), paragraph (k) of this AD requires repair before further flight using a method approved in accordance with the procedures specified in paragraph (m) of this AD.

    (m) Alternative Methods of Compliance (AMOCs)

    (1) The Manager, Los Angeles Aircraft Certification Office (ACO), FAA, has the authority to approve AMOCs for this AD, if requested using the procedures found in 14 CFR 39.19. In accordance with 14 CFR 39.19, send your request to your principal inspector or local Flight Standards District Office, as appropriate. If sending information directly to the manager of the ACO, send it to the attention of the person identified in paragraph (n)(1) of this AD. Information may be emailed to: [email protected].

    (2) Before using any approved AMOC, notify your appropriate principal inspector, or lacking a principal inspector, the manager of the local flight standards district office/certificate holding district office.

    (3) An AMOC that provides an acceptable level of safety may be used for any repair, modification, or alteration required by this AD if it is approved by the Boeing Commercial Airplanes Organization Designation Authorization (ODA) that has been authorized by the Manager, Los Angeles ACO, to make those findings. To be approved, the repair method, modification deviation, or alteration deviation must meet the certification basis of the airplane, and the approval must specifically refer to this AD.

    (4) AMOCs approved for AD 2006-20-11 are approved as AMOCs for the corresponding provisions of paragraphs (g) and (j) of this AD.

    (5) Except as required by paragraph (l)(2) of this AD: For service information that contains steps that are labeled as Required for Compliance (RC), the provisions of paragraphs (m)(5)(i) and (m)(5)(ii) apply.

    (i) The steps labeled as RC, including substeps under an RC step and any figures identified in an RC step, must be done to comply with the AD. If a step or substep is labeled “RC Exempt,” then the RC requirement is removed from that step or substep. An AMOC is required for any deviations to RC steps, including substeps and identified figures.

    (ii) Steps not labeled as RC may be deviated from using accepted methods in accordance with the operator's maintenance or inspection program without obtaining approval of an AMOC, provided the RC steps, including substeps and identified figures, can still be done as specified, and the airplane can be put back in an airworthy condition.

    (6) The inspections specified in paragraph (g) of this AD are approved as an AMOC to paragraph (h) of AD 2006-11-11 for the inspections of Significant Structural Items (SSI) 53-30-07 and 53-60-07 (fuselage lap splices, left and right upper fastener row) listed in the May 2003 or June 2005 revision of the Boeing 757 Maintenance Planning Data (MPD) Document D622N001-9. This AMOC applies only to the common areas identified in paragraphs (m)(6)(i) and (m)(6)(ii) of this AD. All provisions of AD 2006-11-11 that are not specifically referenced in the above statements remain fully applicable and must be complied with as specified in AD 2006-11-11. Operators may revise their maintenance or inspection program with these alternative inspections for common areas.

    (i) Common areas inspected before the effective date of this AD, in accordance with the Accomplishment Instructions of Boeing Special Attention Service Bulletin 757-53-0090, dated June 2, 2005.

    (ii) Common areas inspected in accordance with the Accomplishment Instructions of Boeing Special Attention Service Bulletin 757-53-0090, Revision 1, dated November 19, 2015.

    (n) Related Information

    For more information about this AD, contact Eric Schrieber, Aerospace Engineer, Airframe Branch, ANM-120L, FAA, Los Angeles ACO, 3960 Paramount Boulevard, Lakewood, CA 90712-4137; phone: 562-627-5348; fax: 562-627-5210; email: [email protected].

    (o) Material Incorporated by Reference

    (1) The Director of the Federal Register approved the incorporation by reference (IBR) of the service information listed in this paragraph under 5 U.S.C. 552(a) and 1 CFR part 51.

    (2) You must use this service information as applicable to do the actions required by this AD, unless the AD specifies otherwise.

    (3) The following service information was approved for IBR on December 9, 2016.

    (i) Boeing Special Attention Service Bulletin 757-53-0090, Revision 1, dated November 19, 2015.

    (ii) Reserved.

    (4) The following service information was approved for IBR on November 8, 2006 (71 FR 58485, October 4, 2006).

    (i) Boeing Special Attention Service Bulletin 757-53-0090, dated June 2, 2005.

    (ii) Reserved.

    (5) For service information identified in this AD, contact Boeing Commercial Airplanes, Attention: Contractual & Data Services (C&DS), 2600 Westminster Blvd., MC 110-SK57, Seal Beach, CA 90740; telephone 562-797-1717; Internet https://www.myboeingfleet.com.

    (6) You may view this service information at the FAA, Transport Airplane Directorate, 1601 Lind Avenue SW., Renton, WA. For information on the availability of this material at the FAA, call 425-227-1221.

    (7) You may view this service information that is incorporated by reference at the National Archives and Records Administration (NARA). For information on the availability of this material at NARA, call 202-741-6030, or go to: http://www.archives.gov/federal-register/cfr/ibr-locations.html.

    Issued in Renton, Washington, on October 20, 2016. Dionne Palermo, Acting Manager, Transport Airplane Directorate, Aircraft Certification Service.
    [FR Doc. 2016-25958 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
    DEPARTMENT OF TRANSPORTATION

    Federal Aviation Administration

    14 CFR Part 71

    [Docket No. FAA-2015-3992; Airspace Docket No. 15-ANM-14]

    Amendment of Class E Airspace; Albany, OR

    AGENCY:

    Federal Aviation Administration (FAA), DOT.

    ACTION:

    Final rule.

    SUMMARY:

    This action amends Class E airspace at Albany Municipal Airport, Albany, OR. Advances in Global Positioning System (GPS) mapping accuracy and a reliance on precise geographic coordinates to define airport and airspace reference points have made this airspace redesign necessary for the safety and management of Instrument Flight Rules (IFR) operations.

    DATES:

    Effective 0901 UTC, January 5, 2017. The Director of the Federal Register approves this incorporation by reference action under Title 1, Code of Federal Regulations, part 51, subject to the annual revision of FAA Order 7400.11 and publication of conforming amendments.

    ADDRESSES:

    FAA Order 7400.11A, Airspace Designations and Reporting Points, and subsequent amendments can be viewed online at http://www.faa.gov/air_traffic/publications/. For further information, you can contact the U.S. Department of Transportation, Docket Operations, 1200 New Jersey Avenue SE., West Bldg. Ground Floor Rm. W12-140, Washington, DC 20590; Telephone: 1-800-647-5527, or 202-366-9826. The Order is also available for inspection at the National Archives and Records Administration (NARA). For information on the availability of FAA Order 7400.11A at NARA, call 202-741-6030, or go to http://www.archives.gov/federal_register/code_of_federal-regulations/ibr_locations.html. FAA Order 7400.11, Airspace Designations and Reporting Points, is published yearly and effective on September 15.

    FOR FURTHER INFORMATION CONTACT:

    Tom Clark, Federal Aviation Administration, Operations Support Group, Western Service Center, 1601 Lind Avenue SW., Renton, WA 98057; telephone (425) 203-4511.

    SUPPLEMENTARY INFORMATION:

    Authority for This Rulemaking

    The FAA's authority to issue rules regarding aviation safety is found in Title 49 of the United States Code. Subtitle I, Section 106 describes the authority of the FAA Administrator. Subtitle VII, Aviation Programs, describes in more detail the scope of the agency's authority. This rulemaking is promulgated under the authority described in Subtitle VII, Part A, Subpart I, Section 40103. Under that section, the FAA is charged with prescribing regulations to assign the use of airspace necessary to ensure the safety of aircraft and the efficient use of airspace. This regulation is within the scope of that authority as it modifies controlled airspace at Albany Municipal Airport, Albany, OR.

    History

    On August 15, 2016, the FAA published in the Federal Register a notice of proposed rulemaking (NPRM) to modify Class E airspace extending upward from 700 feet above the surface at Albany Municipal Airport, Albany, OR (81 FR 53964; Docket No. FAA-2015-3992). Interested parties were invited to participate in this rulemaking effort by submitting written comments on the proposal to the FAA. On August 29, 2016, the FAA received a request from Mr. Charles West for a pictorial overlay of the airspace proposal. On September 6, 2016, the FAA provided a diagram of the proposed changes via email to Mr. West and also to Senator Jeff Merkley, Mitch T. Swecker of the Oregon Department of Aviation, and to Mary Rosenblum of the Oregon Pilots Association. No other comments were received.

    Class D and Class E airspace designations are published in paragraphs 5000, 6002, 6004, and 6005 of FAA Order 7400.11A, dated August 3, 2016, and effective September 15, 2016, which is incorporated by reference in 14 CFR 71.1. The Class E airspace designation listed in this document will be published subsequently in the Order.

    Availability and Summary of Documents for Incorporation by Reference

    This document amends FAA Order 7400.11A, Airspace Designations and Reporting Points, dated August 3, 2016, and effective September 15, 2016. FAA Order 7400.11A is publicly available as listed in the ADDRESSES section of this document. FAA Order 7400.11A lists Class A, B, C, D, and E airspace areas, air traffic service routes, and reporting points.

    The Rule

    This action amends Title 14, Code of Federal Regulations (14 CFR) part 71 by modifying Class E airspace extending upward from 700 feet above the surface at Albany Municipal Airport, Albany, OR. Controlled airspace extends within a 6.7-mile radius of the airport to accommodate IFR departures up to 1,200 feet above the surface and includes a small extension to the southwest to accommodate IFR arrivals below 1,500 feet above the surface; the segment east of longitude 123° is removed, as there are no IFR operations within that area. These modifications are necessary for the safety and management of IFR operations at the airport, while preserving the navigable airspace for aviation.

    Class E airspace designations are published in paragraph 6005 of FAA Order 7400.11A, dated August 3, 2016, and effective September 15, 2016, which is incorporated by reference in 14 CFR 71.1. The Class E airspace designations listed in this document will be published subsequently in the Order.

    Regulatory Notices and Analyses

    The FAA has determined that this regulation only involves an established body of technical regulations for which frequent and routine amendments are necessary to keep them operationally current, is non-controversial, and is unlikely to result in adverse or negative comments. It, therefore: (1) Is not a “significant regulatory action” under Executive Order 12866; (2) is not a “significant rule” under DOT Regulatory Policies and Procedures (44 FR 11034; February 26, 1979); and (3) does not warrant preparation of a Regulatory Evaluation as the anticipated impact is so minimal. Since this is a routine matter that only affects air traffic procedures and air navigation, it is certified that this rule, when promulgated, does not have a significant economic impact on a substantial number of small entities under the criteria of the Regulatory Flexibility Act.

    Environmental Review

    The FAA has determined that this action qualifies for categorical exclusion under the National Environmental Policy Act in accordance with FAA Order 1050.1F, “Environmental Impacts: Policies and Procedures”, paragraph 5-6.5a. This airspace action is not expected to cause any potentially significant environmental impacts, and no extraordinary circumstances exist that warrant preparation of an environmental assessment.

    Lists of Subjects in 14 CFR Part 71

    Airspace, Incorporation by reference, Navigation (Air).

    Adoption of the Amendment

    In consideration of the foregoing, the Federal Aviation Administration amends 14 CFR part 71 as follows:

    PART 71—DESIGNATION OF CLASS A, B, C, D, AND E AIRSPACE AREAS; AIR TRAFFIC SERVICE ROUTES; AND REPORTING POINTS

    1. The authority citation for Part 71 continues to read as follows:

    Authority:

    49 U.S.C. 106(f), 106(g); 40103, 40113, 40120; E.O. 10854, 24 FR 9565, 3 CFR, 1959-1963 Comp., p. 389.

    § 71.1 [Amended]
    2. The incorporation by reference in 14 CFR 71.1 of FAA Order 7400.11A, Airspace Designations and Reporting Points, dated August 3, 2016, and effective September 15, 2016, is amended as follows:

    Paragraph 6005: Class E Airspace Areas Extending Upward From 700 Feet or More Above the Surface of the Earth.

    ANM OR E5 Albany, OR [Modified]

    Albany Municipal Airport, OR
    (Lat. 44°38′16″ N., long. 123°03′34″ W.)

    That airspace extending upward from 700 feet above the surface, within a 6.7-mile radius of Albany Municipal Airport, beginning at the 158° bearing from the airport clockwise to the 022° bearing, thence to the point of beginning, and that airspace 1.4 miles each side of the 230° bearing from the airport extending from the 6.7-mile radius to 8.5 miles southwest of the airport.
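
    The boundary above is defined entirely by distances and bearings from the airport reference point. The following minimal Python sketch, which is illustrative only and not part of the rule, shows one way to test whether a latitude/longitude falls inside the modified area; it assumes the published miles are nautical miles, treats the bearings as true bearings from the airport, approximates the arc's closing segment with radial lines (a pie-slice approximation), and uses a flat-earth approximation valid only near the airport. All function names are hypothetical.

    import math

    # Albany Municipal Airport reference point from the legal description above.
    AIRPORT_LAT = 44 + 38/60 + 16/3600        # 44°38′16″ N.
    AIRPORT_LON = -(123 + 3/60 + 34/3600)     # 123°03′34″ W.
    NM_PER_DEG_LAT = 60.0                     # flat-earth approximation

    def range_and_bearing(lat, lon):
        """Approximate distance (NM) and bearing (degrees true) from the airport to a point."""
        d_north = (lat - AIRPORT_LAT) * NM_PER_DEG_LAT
        d_east = (lon - AIRPORT_LON) * NM_PER_DEG_LAT * math.cos(math.radians(AIRPORT_LAT))
        return math.hypot(d_north, d_east), math.degrees(math.atan2(d_east, d_north)) % 360.0

    def in_sector(bearing, start=158.0, end=22.0):
        """True if a bearing lies on the arc from `start` clockwise to `end` (wraps through north)."""
        return bearing >= start or bearing <= end

    def in_corridor(dist, bearing, axis=230.0, half_width=1.4, inner=6.7, outer=8.5):
        """True if a point lies within `half_width` NM of the `axis` bearing line,
        between `inner` and `outer` NM from the airport."""
        off_axis = math.radians((bearing - axis + 180.0) % 360.0 - 180.0)
        along = dist * math.cos(off_axis)       # distance measured along the 230° bearing
        cross = abs(dist * math.sin(off_axis))  # lateral offset from the bearing line
        return inner <= along <= outer and cross <= half_width

    def in_albany_e5(lat, lon):
        """Rough (pie-slice approximation) containment test for the modified Albany Class E area."""
        dist, bearing = range_and_bearing(lat, lon)
        return (dist <= 6.7 and in_sector(bearing)) or in_corridor(dist, bearing)

    # A point roughly 7.5 NM from the airport on the 230° bearing lies in the southwest extension.
    print(in_albany_e5(44.5583, -123.1930))   # True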

    Issued in Seattle, Washington, on October 24, 2016. Tracey Johnson, Manager, Operations Support Group, Western Service Center.
    [FR Doc. 2016-26437 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
    DEPARTMENT OF TRANSPORTATION

    Federal Aviation Administration

    14 CFR Part 71

    [Docket No. FAA-2015-3991; Airspace Docket No. 15-ANM-13]

    Amendment of Class D and Class E Airspace; Eugene, OR, and Corvallis, OR

    AGENCY:

    Federal Aviation Administration (FAA), DOT.

    ACTION:

    Final rule.

    SUMMARY:

    This action amends Class E airspace designated as an extension, and Class E airspace extending upward from 700 feet above the surface at Mahlon Sweet Field Airport, Eugene, OR, to accommodate airspace redesign for the safety and management of Instrument Flight Rules (IFR) operations at the airport. Corvallis Municipal Airport is removed from the Mahlon Sweet Field Airport regulatory text by creating a stand-alone airspace designation for the airport. Additionally, this action updates the airport reference points for these airports in Class D and E airspace, as well as removes the Notice to Airmen (NOTAM) requirement noted in Class E surface area airspace.

    DATES:

    Effective 0901 UTC, January 5, 2017. The Director of the Federal Register approves this incorporation by reference action under Title 1, Code of Federal Regulations, part 51, subject to the annual revision of FAA Order 7400.11 and publication of conforming amendments.

    ADDRESSES:

    FAA Order 7400.11A, Airspace Designations and Reporting Points, and subsequent amendments can be viewed online at http://www.faa.gov/air_traffic/publications/. For further information, you can contact the U.S. Department of Transportation, Docket Operations, 1200 New Jersey Avenue SE., West Bldg. Ground Floor Rm W12-140, Washington, DC 20590; Telephone: 1-800-647-5527, or 202-366-9826. The Order is also available for inspection at the National Archives and Records Administration (NARA). For information on the availability of FAA Order 7400.11A at NARA, call 202-741-6030, or go to http://www.archives.gov/federal_register/code_of_federal-regulations/ibr_locations.html.

    FAA Order 7400.11, Airspace Designations and Reporting Points, is published yearly and effective on September 15.

    FOR FURTHER INFORMATION CONTACT:

    Tom Clark, Federal Aviation Administration, Operations Support Group, Western Service Center, 1601 Lind Avenue SW., Renton, WA, 98057; telephone (425) 203-4511.

    SUPPLEMENTARY INFORMATION:

    Authority for This Rulemaking

    The FAA's authority to issue rules regarding aviation safety is found in Title 49 of the United States Code. Subtitle I, Section 106 describes the authority of the FAA Administrator. Subtitle VII, Aviation Programs, describes in more detail the scope of the agency's authority. This rulemaking is promulgated under the authority described in Subtitle VII, Part A, Subpart I, Section 40103. Under that section, the FAA is charged with prescribing regulations to assign the use of airspace necessary to ensure the safety of aircraft and the efficient use of airspace. This regulation is within the scope of that authority as it modifies controlled airspace at Mahlon Sweet Field Airport, Eugene, OR, and Corvallis Municipal Airport, Corvallis, OR.

    History

    On August 15, 2016, the FAA published in the Federal Register (81 FR 53962; Docket No. FAA-2015-3991) a notice of proposed rulemaking (NPRM) to modify Class E airspace designated as an extension to a Class D or E surface airspace area, and Class E airspace extending upward from 700 feet above the surface, at Mahlon Sweet Field Airport, Eugene, OR, and to establish Class E airspace extending upward from 700 feet above the surface at Corvallis Municipal Airport, Corvallis, OR. Interested parties were invited to participate in this rulemaking effort by submitting written comments on the proposal to the FAA. On August 29, 2016, the FAA received a request from Mr. Charles West for a pictorial overlay of the airspace proposal. On September 6, 2016, the FAA provided a diagram of the proposed changes via email to Mr. West and also to Senator Jeff Merkley, Mitch T. Swecker of the Oregon Department of Aviation, and to Mary Rosenblum of the Oregon Pilots Association. No other comments were received.

    Class D and Class E airspace designations are published in paragraphs 5000, 6002, 6004, and 6005 of FAA Order 7400.11A, dated August 3, 2016, and effective September 15, 2016, which is incorporated by reference in 14 CFR 71.1. The Class E airspace designation listed in this document will be published subsequently in the Order.

    Availability and Summary of Documents for Incorporation by Reference

    This document amends FAA Order 7400.11A, Airspace Designations and Reporting Points, dated August 3, 2016, and effective September 15, 2016. FAA Order 7400.11A is publicly available as listed in the ADDRESSES section of this document. FAA Order 7400.11A lists Class A, B, C, D, and E airspace areas, air traffic service routes, and reporting points.

    The Rule

    The FAA is amending Title 14 Code of Federal Regulations (14 CFR) Part 71 by modifying Class E airspace designated as an extension to a Class D or Class E surface area, and Class E airspace extending upward from 700 feet above the surface at Mahlon Sweet Field Airport, Eugene, OR. The Class E surface extension to the north is slightly enlarged to contain aircraft using the VOR-A approach, and the extension to the south is enlarged to contain aircraft using the RNP (RNAV) Z instrument approaches as they descend below 1,000 feet above the surface. Class E airspace extending upward from 700 feet above the surface is reduced to the northeast and west of the airport to only that area necessary to contain IFR arrival aircraft descending below 1,500 feet above the surface, and IFR departure aircraft, until reaching 1,200 feet above the surface. The Class E airspace area extending upward from 1,200 feet above the surface is removed, as this airspace area is provided by the Bend, OR, Class E En Route airspace area.

    Also, this action creates stand-alone Class E airspace extending upward from 700 feet above the surface for Corvallis Municipal Airport, Corvallis, OR, thereby removing reference to Corvallis Municipal Airport from the Mahlon Sweet Field Airport airspace designation. The overall Class E airspace area near Corvallis Municipal Airport is slightly reduced north, and slightly enlarged west of the airport. The geographic coordinates of these airports are updated for all Class D and Class E airspace areas.

    Regulatory Notices and Analyses

    The FAA has determined that this regulation only involves an established body of technical regulations for which frequent and routine amendments are necessary to keep them operationally current, is non-controversial, and is unlikely to result in adverse or negative comments. It, therefore: (1) Is not a “significant regulatory action” under Executive Order 12866; (2) is not a “significant rule” under DOT Regulatory Policies and Procedures (44 FR 11034; February 26, 1979); and (3) does not warrant preparation of a Regulatory Evaluation as the anticipated impact is so minimal. Since this is a routine matter that only affects air traffic procedures and air navigation, it is certified that this rule, when promulgated, does not have a significant economic impact on a substantial number of small entities under the criteria of the Regulatory Flexibility Act.

    Environmental Review

    The FAA has determined that this action qualifies for categorical exclusion under the National Environmental Policy Act in accordance with FAA Order 1050.1F, “Environmental Impacts: Policies and Procedures,” paragraph 5-6.5a. This airspace action is not expected to cause any potentially significant environmental impacts, and no extraordinary circumstances exist that warrant preparation of an environmental assessment.

    Lists of Subjects in 14 CFR Part 71

    Airspace, Incorporation by reference, Navigation (air).

    Adoption of the Amendment

    In consideration of the foregoing, the Federal Aviation Administration amends 14 CFR part 71 as follows:

    PART 71—DESIGNATION OF CLASS A, B, C, D, AND E AIRSPACE AREAS; AIR TRAFFIC SERVICE ROUTES; AND REPORTING POINTS

    1. The authority citation for Part 71 continues to read as follows:

    Authority:

    49 U.S.C. 106(f), 106(g); 40103, 40113, 40120; E.O. 10854, 24 FR 9565, 3 CFR, 1959-1963 Comp., p. 389.

    § 71.1 [Amended]
    2. The incorporation by reference in 14 CFR 71.1 of FAA Order 7400.11A, Airspace Designations and Reporting Points, dated August 3, 2016, and effective September 15, 2016, is amended as follows:

    Paragraph 5000: Class D Airspace.

    ANM OR D Eugene, OR [Modified]

    Mahlon Sweet Field Airport, OR
    (Lat. 44°07′29″ N., long. 123°12′43″ W.)

    That airspace extending upward from the surface to and including 2,900 feet MSL within a 4.6-mile radius of Mahlon Sweet Field Airport. This Class D airspace area is effective during the specific dates and times established in advance by a Notice to Airmen. The effective date and time will thereafter be continuously published in the Chart Supplement.

    Paragraph 6002: Class E Airspace Designated as Surface Areas.

    ANM OR E2 Eugene, OR [Modified]

    Mahlon Sweet Field Airport, OR
    (Lat. 44°07′29″ N., long. 123°12′43″ W.)

    That airspace extending upward from the surface within a 4.6-mile radius of Mahlon Sweet Field Airport.

    Paragraph 6004: Class E Airspace Areas Designated as an Extension to a Class D or Class E Surface Area.

    ANM OR E4 Eugene, OR [Modified]

    Mahlon Sweet Field Airport, OR
    (Lat. 44°07′29″ N., long. 123°12′43″ W.)

    That airspace extending upward from the surface within 3 miles west and 2 miles east of the Mahlon Sweet Field Airport 008° bearing, extending from the 4.6-mile radius of the airport to 6.8 miles north of the airport, and within the area bounded by the airport 142° bearing clockwise to the airport 213° bearing, extending from the 4.6-mile radius to 13.5 miles south of the airport, and within the area bounded by the airport 213° bearing clockwise to the airport 226° bearing, extending from the 4.6-mile radius to 14 miles southwest of the airport.

    Paragraph 6005: Class E Airspace Areas Extending Upward From 700 Feet or More Above the Surface of the Earth.

    ANM OR E5 Corvallis, OR [New]

    Corvallis Municipal Airport, OR
    (Lat. 44°29′50″ N., long. 123°17′22″ W.)

    That airspace extending upward from 700 feet above the surface within a 6-mile radius of Corvallis Municipal Airport, and 2.4 miles each side of the airport 007° bearing, extending from the 6-mile radius to 12.4 miles north of the airport, and 2.6 miles each side of the airport 104° bearing extending from the 6-mile radius to 7.1 miles east of the airport, and 2 miles each side of the airport 188° bearing extending from the 6-mile radius to 7.1 miles south of the airport.

    ANM OR E5 Eugene, OR [Modified]

    Mahlon Sweet Field Airport, OR
    (Lat. 44°07′29″ N., long. 123°12′43″ W.)

    That airspace extending upward from 700 feet above the surface within a 6-mile radius of Mahlon Sweet Field Airport, and that airspace within the area bounded by the airport 098° bearing clockwise to the airport 138° bearing, extending from the 6-mile radius to 18.3 miles southeast of the airport, and within the area bounded by the airport 138° bearing clockwise to the 170° bearing, extending from the 6-mile radius to 13.5 miles southeast of the airport, and within the area bounded by the airport 170° bearing clockwise to the 234° bearing, extending from the 6-mile radius to 18.3 miles southwest of the airport, and that airspace within 3.6 miles east and 8.5 miles west of the airport 008° bearing, extending from the 6-mile radius to 16 miles north of the airport.

    Issued in Seattle, Washington, on October 24, 2016. Tracey Johnson, Manager, Operations Support Group, Western Service Center.
    [FR Doc. 2016-26439 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
    DEPARTMENT OF TRANSPORTATION

    Federal Aviation Administration

    14 CFR Part 71

    [Docket No. FAA-2012-1308; Airspace Docket No. 12-ASO-44]

    Establishment of Class E Airspace; Camden, AL

    AGENCY:

    Federal Aviation Administration (FAA), DOT.

    ACTION:

    Final rule.

    SUMMARY:

    This action establishes Class E airspace at Camden, AL to accommodate new Area Navigation (RNAV) Global Positioning System (GPS) Standard Instrument Approach Procedures (SIAPs) serving Camden Municipal Airport. Controlled airspace is necessary for the safety and management of instrument flight rules (IFR) operations at the airport.

    DATES:

    Effective 0901 UTC, January 5, 2017. The Director of the Federal Register approves this incorporation by reference action under title 1, Code of Federal Regulations, part 51, subject to the annual revision of FAA Order 7400.11 and publication of conforming amendments.

    ADDRESSES:

    FAA Order 7400.11A, Airspace Designations and Reporting Points, and subsequent amendments can be viewed online at http://www.faa.gov/air_traffic/publications/. For further information, you can contact the Airspace Policy Group, Federal Aviation Administration, 800 Independence Avenue SW., Washington, DC 20591; telephone: 202-267-8783. The Order is also available for inspection at the National Archives and Records Administration (NARA). For information on the availability of FAA Order 7400.11A at NARA, call 202-741-6030, or go to http://www.archives.gov/federal_register/code_of_federal-regulations/ibr_locations.html.

    FAA Order 7400.11, Airspace Designations and Reporting Points, is published yearly and effective on September 15.

    FOR FURTHER INFORMATION CONTACT:

    John Fornito, Operations Support Group, Eastern Service Center, Federal Aviation Administration, P.O. Box 20636, Atlanta, Georgia 30320; telephone (404) 305-6364.

    SUPPLEMENTARY INFORMATION:

    Authority for This Rulemaking

    The FAA's authority to issue rules regarding aviation safety is found in Title 49 of the United States Code. Subtitle I, Section 106 describes the authority of the FAA Administrator. Subtitle VII, Aviation Programs, describes in more detail the scope of the agency's authority. This rulemaking is promulgated under the authority described in Subtitle VII, Part A, Subpart I, Section 40103. Under that section, the FAA is charged with prescribing regulations to assign the use of airspace necessary to ensure the safety of aircraft and the efficient use of airspace. This regulation is within the scope of that authority as it establishes Class E airspace at Camden Municipal Airport, Camden, AL.

    History

    On July 22, 2016, the FAA published in the Federal Register a notice of proposed rulemaking (NPRM) to establish Class E airspace extending upward from 700 feet above the surface at Camden, AL (81 FR 47737; Docket No. FAA-2012-1308), providing the controlled airspace required to support the new RNAV (GPS) standard instrument approach procedures for Camden Municipal Airport. Interested parties were invited to participate in this rulemaking effort by submitting written comments on the proposal to the FAA. No comments were received.

    Class E airspace designations are published in Paragraph 6005 of FAA Order 7400.11A, dated August 3, 2016, and effective September 15, 2016, which is incorporated by reference in 14 CFR 71.1. The Class E airspace designations listed in this document will be published subsequently in the Order.

    Availability and Summary of Documents for Incorporation by Reference

    This document amends FAA Order 7400.11A, Airspace Designations and Reporting Points, dated August 3, 2016, and effective September 15, 2016. FAA Order 7400.11A is publicly available as listed in the ADDRESSES section of this document. FAA Order 7400.11A lists Class A, B, C, D, and E airspace areas, air traffic service routes, and reporting points.

    The Rule

    This amendment to Title 14, Code of Federal Regulations (14 CFR) part 71 establishes Class E Airspace at Camden Municipal Airport, Camden, AL. Controlled airspace extending upward from 700 feet above the surface within a 7.7-mile radius of the airport is established for IFR operations.

    Regulatory Notices and Analyses

    The FAA has determined that this regulation only involves an established body of technical regulations for which frequent and routine amendments are necessary to keep them operationally current. It, therefore: (1) Is not a “significant regulatory action” under Executive Order 12866; (2) is not a “significant rule” under DOT Regulatory Policies and Procedures (44 FR 11034; February 26, 1979); and (3) does not warrant preparation of a regulatory evaluation as the anticipated impact is so minimal. Since this is a routine matter that only affects air traffic procedures and air navigation, it is certified that this rule, when promulgated, does not have a significant economic impact on a substantial number of small entities under the criteria of the Regulatory Flexibility Act.

    Environmental Review

    The FAA has determined that this action qualifies for categorical exclusion under the National Environmental Policy Act in accordance with FAA Order 1050.1F, “Environmental Impacts: Policies and Procedures,” paragraph 5-6.5a. This airspace action is not expected to cause any potentially significant environmental impacts, and no extraordinary circumstances exist that warrant preparation of an environmental assessment.

    Lists of Subjects in 14 CFR Part 71

    Airspace, Incorporation by reference, Navigation (air).

    Adoption of the Amendment

    In consideration of the foregoing, the Federal Aviation Administration amends 14 CFR part 71 as follows:

    PART 71—DESIGNATION OF CLASS A, B, C, D, AND E AIRSPACE AREAS; AIR TRAFFIC SERVICE ROUTES; AND REPORTING POINTS

    1. The authority citation for Part 71 continues to read as follows:

    Authority:

    49 U.S.C. 106(f), 106(g); 40103, 40113, 40120; E.O. 10854, 24 FR 9565, 3 CFR, 1959-1963 Comp., p. 389.

    § 71.1 [Amended]
    2. The incorporation by reference in 14 CFR 71.1 of FAA Order 7400.11A, Airspace Designations and Reporting Points, dated August 3, 2016, effective September 15, 2016, is amended as follows:

    Paragraph 6005. Class E Airspace Areas Extending Upward From 700 Feet or More Above the Surface of the Earth.

    ASO AL E5 Camden, AL [New]

    Camden Municipal Airport, AL
    (Lat. 31°58′47″ N., long. 87°20′21″ W.)

    That airspace extending upward from 700 feet above the surface within a 7.7-mile radius of Camden Municipal Airport.

    Issued in College Park, Georgia, on October 21, 2016. Ryan W. Almasy, Manager, Operations Support Group, Eastern Service Center, Air Traffic Organization.
    [FR Doc. 2016-26455 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
    DEPARTMENT OF TRANSPORTATION

    Federal Aviation Administration

    14 CFR Part 71

    [Docket No. FAA-2016-6775; Airspace Docket No. 16-ASO-9]

    Establishment of Class E Airspace; Murray, KY

    AGENCY:

    Federal Aviation Administration (FAA), DOT.

    ACTION:

    Final rule.

    SUMMARY:

    This action establishes Class E airspace at Murray, KY, to accommodate new Area Navigation (RNAV) Global Positioning System (GPS) Standard Instrument Approach Procedures (SIAPs) serving Murray Calloway County Hospital Heliport. Controlled airspace is necessary for the safety and management of instrument flight rules (IFR) operations at the heliport.

    DATES:

    Effective 0901 UTC, January 5, 2017. The Director of the Federal Register approves this incorporation by reference action under title 1, Code of Federal Regulations, part 51, subject to the annual revision of FAA Order 7400.11 and publication of conforming amendments.

    ADDRESSES:

    FAA Order 7400.11A, Airspace Designations and Reporting Points, and subsequent amendments can be viewed online at http://www.faa.gov/air_traffic/publications/. For further information, you can contact the Airspace Policy Group, Federal Aviation Administration, 800 Independence Avenue SW., Washington, DC 20591; telephone: 202-267-8783. The Order is also available for inspection at the National Archives and Records Administration (NARA). For information on the availability of FAA Order 7400.11A at NARA, call 202-741-6030, or go to http://www.archives.gov/federal_register/code_of_federal-regulations/ibr_locations.html.

    FAA Order 7400.11, Airspace Designations and Reporting Points, is published yearly and effective on September 15.

    FOR FURTHER INFORMATION CONTACT:

    John Fornito, Operations Support Group, Eastern Service Center, Federal Aviation Administration, P.O. Box 20636, Atlanta, Georgia 30320; telephone (404) 305-6364.

    SUPPLEMENTARY INFORMATION:

    Authority for This Rulemaking

    The FAA's authority to issue rules regarding aviation safety is found in Title 49 of the United States Code. Subtitle I, Section 106 describes the authority of the FAA Administrator. Subtitle VII, Aviation Programs, describes in more detail the scope of the agency's authority. This rulemaking is promulgated under the authority described in Subtitle VII, Part A, Subpart I, Section 40103. Under that section, the FAA is charged with prescribing regulations to assign the use of airspace necessary to ensure the safety of aircraft and the efficient use of airspace. This regulation is within the scope of that authority as it establishes Class E airspace at Murray Calloway County Hospital Heliport, Murray, KY.

    History

    On July 22, 2016, the FAA published in the Federal Register a notice of proposed rulemaking (NPRM) to establish Class E airspace extending upward from 700 feet above the surface at Murray, KY (81 FR 47738; Docket No. FAA-2016-6775), providing the controlled airspace required to support the new Copter RNAV (GPS) standard instrument approach procedures for Murray Calloway County Hospital Heliport. Interested parties were invited to participate in this rulemaking effort by submitting written comments on the proposal to the FAA. No comments were received.

    Class E airspace designations are published in Paragraph 6005 of FAA Order 7400.11A, dated August 3, 2016, and effective September 15, 2016, which is incorporated by reference in 14 CFR 71.1. The Class E airspace designations listed in this document will be published subsequently in the Order.

    Availability and Summary of Documents for Incorporation by Reference

    This document amends FAA Order 7400.11A, Airspace Designations and Reporting Points, dated August 3, 2016, and effective September 15, 2016. FAA Order 7400.11A is publicly available as listed in the ADDRESSES section of this document. FAA Order 7400.11A lists Class A, B, C, D, and E airspace areas, air traffic service routes, and reporting points.

    The Rule

    This amendment to Title 14, Code of Federal Regulations (14 CFR) part 71 establishes Class E Airspace at Murray Calloway County Hospital Heliport, Murray, KY. Controlled airspace extending upward from 700 feet above the surface within a 6-mile radius of the heliport is established for IFR operations.

    Regulatory Notices and Analyses

    The FAA has determined that this regulation only involves an established body of technical regulations for which frequent and routine amendments are necessary to keep them operationally current. It, therefore: (1) Is not a “significant regulatory action” under Executive Order 12866; (2) is not a “significant rule” under DOT Regulatory Policies and Procedures (44 FR 11034; February 26, 1979); and (3) does not warrant preparation of a regulatory evaluation as the anticipated impact is so minimal. Since this is a routine matter that only affects air traffic procedures and air navigation, it is certified that this rule, when promulgated, does not have a significant economic impact on a substantial number of small entities under the criteria of the Regulatory Flexibility Act.

    Environmental Review

    The FAA has determined that this action qualifies for categorical exclusion under the National Environmental Policy Act in accordance with FAA Order 1050.1F, “Environmental Impacts: Policies and Procedures,” paragraph 5-6.5a. This airspace action is not expected to cause any potentially significant environmental impacts, and no extraordinary circumstances exist that warrant preparation of an environmental assessment.

    Lists of Subjects in 14 CFR Part 71

    Airspace, Incorporation by reference, Navigation (air).

    Adoption of the Amendment

    In consideration of the foregoing, the Federal Aviation Administration amends 14 CFR part 71 as follows:

    PART 71—DESIGNATION OF CLASS A, B, C, D, AND E AIRSPACE AREAS; AIR TRAFFIC SERVICE ROUTES; AND REPORTING POINTS

    1. The authority citation for Part 71 continues to read as follows:

    Authority:

    49 U.S.C. 106(f), 106(g); 40103, 40113, 40120; E.O. 10854, 24 FR 9565, 3 CFR, 1959-1963 Comp., p. 389.

    § 71.1 [Amended]
    2. The incorporation by reference in 14 CFR 71.1 of FAA Order 7400.11A, Airspace Designations and Reporting Points, dated August 3, 2016, effective September 15, 2016, is amended as follows:

    Paragraph 6005. Class E Airspace Areas Extending Upward From 700 Feet or More Above the Surface of the Earth.

    ASO KY E5 Murray, KY [New]

    Murray Calloway County Hospital Heliport, KY
    (Lat. 36°36′27″ N., long. 88°18′36″ W.)

    That airspace extending upward from 700 feet above the surface within a 6-mile radius of Murray Calloway County Hospital Heliport.

    Issued in College Park, Georgia, on October 21, 2016. Ryan W. Almasy, Manager, Operations Support Group, Eastern Service Center, Air Traffic Organization.
    [FR Doc. 2016-26449 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
    DEPARTMENT OF COMMERCE

    Bureau of Industry and Security

    15 CFR 738, 740, 742 and 746

    [Docket No. 160810723-6723-01]

    RIN 0694-AH07

    Amendments to the Export Administration Regulations: Update of Arms Embargoes on Cote d'Ivoire, Liberia, Sri Lanka and Vietnam, and Recognition of India as Member of the Missile Technology Control Regime

    AGENCY:

    Bureau of Industry and Security, Commerce.

    ACTION:

    Final rule.

    SUMMARY:

    In this rule, the Bureau of Industry and Security (BIS) amends the Export Administration Regulations (EAR) to implement changes in controls on arms and related materiel to Cote d'Ivoire, Liberia, Sri Lanka, and Vietnam. BIS also updates the EAR to recognize the accession of India as a member of the Missile Technology Control Regime (MTCR).

    DATES:

    This rule is effective November 4, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Elan Mitchell-Gee, telephone (202) 482-4252, email [email protected].

    SUPPLEMENTARY INFORMATION:

    Background

    In this rule, BIS amends the Export Administration Regulations (EAR) to implement certain United Nations Security Council Resolutions (UNSCRs) adopted in 2016 that terminated arms embargoes against Cote d'Ivoire (UNSCR 2283) and Liberia (UNSCR 2288). Further, BIS removes U.S. arms embargo-related controls on Sri Lanka to reflect the Consolidated Appropriations Act, 2016, and on Vietnam pursuant to a determination made by the Secretary of State and announced by the President.

    BIS aims to harmonize the arms embargo-related provisions in the EAR, expressly or by reference, with the Directorate of Defense Trade Controls' (DDTC) regulation of arms embargoes in § 126.1 of the International Traffic in Arms Regulations (ITAR), “Prohibited Exports, Imports, and Sales to or from Certain Countries.” These actions further ongoing efforts to harmonize the EAR and the ITAR, and the President's Export Control Reform Initiative. The ITAR list incorporates countries subject to United Nations Security Council (UNSC) and U.S. arms embargoes. BIS primarily implements such controls through Country Group D:5 “U.S. Arms Embargoed Countries,” in Supplement No. 1 to part 740 of the EAR. BIS also identifies specific countries subject to UNSC arms embargoes in part 746 of the EAR, and maintains controls on certain items pursuant to those embargoes on the Commerce Control List (CCL) in Supplement No. 1 to part 774.

    Countries listed in Country Group D:5 are subject to additional restrictions in the EAR, including on de minimis U.S. content, license exception availability, and licensing policy for certain items. For example, license applications for the export or reexport of items classified under 9x515 or “600 series” Export Control Classification Numbers (ECCNs) to countries in Country Group D:5 are reviewed consistent with the policies in § 126.1 of the ITAR, as provided in paragraph (b)(ii) of § 742.4 of the EAR. Additionally, license applications for items controlled on the CCL for United Nations Embargo reasons and destined to countries specified in § 746.1(b) of the EAR are not approved by BIS if the authorization would be contrary to the relevant UNSCR, to the extent consistent with United States national security and foreign policy interests. As a result of this rule, the relevant additional restrictions described above no longer apply to Cote d'Ivoire, Liberia, Sri Lanka, and Vietnam.
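
    As an illustration only and not part of the EAR, the following minimal Python sketch shows how an exporter's internal screening tool might record the effect of this rule on a Country Group D:5 lookup; the country set shown is deliberately incomplete and hypothetical, and any real determination must rely on the current Supplement No. 1 to part 740.

    # Hypothetical screening data; only a few destinations are shown for illustration.
    COUNTRY_GROUP_D5 = {
        "China",                 # remains a U.S. Arms Embargoed Country (D:5)
        "Cote d'Ivoire", "Liberia", "Sri Lanka", "Vietnam",
        # ...other D:5 destinations omitted from this sketch...
    }

    # Effect of this final rule: the four destinations are removed from Country Group D:5.
    for country in ("Cote d'Ivoire", "Liberia", "Sri Lanka", "Vietnam"):
        COUNTRY_GROUP_D5.discard(country)

    def d5_restrictions_apply(destination):
        """True if the destination is still in Country Group D:5, triggering the de minimis,
        license exception, and 9x515/'600 series' licensing-policy consequences described above."""
        return destination in COUNTRY_GROUP_D5

    print(d5_restrictions_apply("Vietnam"))   # False after this rule
    print(d5_restrictions_apply("China"))     # True (illustrative)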

    Finally, on June 27, 2016, India acceded to the Missile Technology Control Regime (MTCR) as the 35th member. In this rule, BIS updates the EAR to recognize the status of India as a member of the Missile Technology Control Regime by amending paragraph (d) of § 742.5 to remove the reference to India as an adherent to the MTCR. This rule includes a conforming amendment.

    Respective Updates to EAR Arms Embargoes and Special Controls by Country

    Cote d'Ivoire

    The arms embargo against Cote d'Ivoire was initially imposed through UNSCR 1572 (2004). UNSCR 2283 terminated the arms embargo against Cote d'Ivoire on April 28, 2016, in recognition of the progress achieved in the stabilization of the country, including in relation to disarmament, demobilization and reintegration, security sector reform, national reconciliation and the fight against impunity, as well as the successful conduct of the presidential election of October 25, 2015. Accordingly, this rule removes the United Nations Embargo (UN) controls on Cote d'Ivoire by removing that country from the names of UNSC arms embargoed countries in § 746.1(b) and from Country Group D:5 in Supplement No. 1 to part 740 of the EAR.

    Liberia

    The arms embargo against Liberia was initially imposed through UNSCR 788 on November 19, 1992, and was continued through subsequent resolutions, including UNSCR 1903 (2009). The UNSC terminated the arms embargo against Liberia on May 25, 2016, through UNSCR 2288, in recognition of that country's progress in the past 13 years in building stable, effective and resilient national institutions. Accordingly, this rule removes the UN controls on Liberia by removing that country from the names of UNSC arms embargoed countries in § 746.1(b) and from Country Group D:5 in Supplement No. 1 to part 740 of the EAR.

    Sri Lanka

    The Department of State imposed a U.S. arms embargo on Sri Lanka on March 24, 2008, in accordance with the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2008 (Div. J, Pub. L. 110-161). However, licensing restrictions on Sri Lanka articulated in section 7044(e) of the Consolidated Appropriations Act, 2015 (Pub. L. 113-235) and in previous appropriations acts, were not carried forward in section 7044(e) of the Consolidated Appropriations Act, 2016 (Pub. L. 114-113). On May 4, 2016, DDTC announced that it would begin reviewing license applications for Sri Lanka on a case-by-case basis. Accordingly, this rule removes Sri Lanka from Country Group D:5 in Supplement No. 1 to part 740 of the EAR.

    Vietnam

    Starting in the 1960s, the Department of State imposed a lethal arms sales embargo against Vietnam. Pursuant to a determination by the Secretary of State, the President announced the United States' termination of the U.S. arms embargo against Vietnam on May 23, 2016, in furtherance of deepening and broadening ties between the United States and Vietnam since the normalization of diplomatic relations. Accordingly, BIS removes Vietnam from Country Group D:5 in Supplement No. 1 to part 740 of the EAR.

    International Export Control Regime Update: India, Member MTCR

    On June 27, 2016, India formally acceded to the MTCR as the 35th member. Prior to India's MTCR membership, India's commitment to the U.S.-India bilateral understanding contributed to the country's status as an “MTCR adherent” and placed the country among likeminded MTCR members in Country Group A:2 in Supplement No. 1 to part 740. In this rule, BIS formally recognizes India's status as a member of MTCR by removing the reference to India as only an “MTCR adherent” from paragraph (d) of § 742.5 of the EAR.

    Conforming Amendment

    Footnote notations appear next to countries listed in the Commerce Country Chart in Supplement No. 1 to part 738 of the EAR that are subject to UNSC arms embargoes. BIS removes that notation for Cote d'Ivoire and Liberia to conform to the termination of UNSC arms embargoes against those countries.

    Export Administration Act

    Although the Export Administration Act expired on August 20, 2001, the President, through Executive Order 13222 of August 17, 2001, 3 CFR, 2001 Comp., p. 783 (2002), as amended by Executive Order 13637 of March 8, 2013, 78 FR 16129 (March 13, 2013) and as extended by the Notice of August 4, 2016, 81 FR 52587 (August 8, 2016), has continued the Export Administration Regulations in effect under the International Emergency Economic Powers Act. BIS continues to carry out the provisions of the Export Administration Act, as appropriate and to the extent permitted by law, pursuant to Executive Order 13222 as amended by Executive Order 13637.

    Rulemaking Requirements

    1. Executive Orders 13563 and 12866 direct agencies to assess all costs and benefits of available regulatory alternatives and, if regulation is necessary, to select regulatory approaches that maximize net benefits (including potential economic, environmental, public health and safety effects, distributive impacts, and equity). Executive Order 13563 emphasizes the importance of quantifying both costs and benefits, of reducing costs, of harmonizing rules, and of promoting flexibility. This rule has been determined to be not significant for purposes of Executive Order 12866.

    2. Notwithstanding any other provision of law, no person is required to respond to, nor shall a person be subject to a penalty for failure to comply with, a collection of information subject to the requirements of the Paperwork Reduction Act of 1995 (44 U.S.C. 3501 et seq.) (PRA), unless that collection of information displays a currently valid OMB control number. This rule affects two approved collections: (1) The Simplified Network Application Processing + Redesign system (control number 0694-0088), which carries a burden hour estimate of 43.8 minutes, including the time necessary to submit license applications, among other things, as well as miscellaneous and other recordkeeping activities that account for 12 minutes per submission; and (2) License Exceptions and Exclusions (0694-0137). This rule is not expected to increase the number of submissions under these collections.

    3. This rule does not contain policies with Federalism implications as that term is defined under E.O. 13132.

    4. The provisions of the Administrative Procedure Act (5 U.S.C. 553) requiring notice of proposed rulemaking, the opportunity for public participation, and a delay in effective date, are inapplicable because this regulation involves a military or foreign affairs function of the United States under 5 U.S.C. 553(a)(1). This final rule implements U.S. multilateral commitments pursuant to United Nations Security Council arms embargoes. The sanctions against Cote d'Ivoire and Liberia were promulgated in part to fulfill U.S. obligations and serve collective security interests by implementing United Nations Security Council arms embargoes. Furthermore, arms embargoes were imposed on Sri Lanka and Vietnam by the United States to advance national and regional stability and security. Termination of these embargoes under the EAR recognizes progress in the security situations in Cote d'Ivoire and Liberia, changes in legislative mandates related to Sri Lanka, and the evolution of U.S. relations with Vietnam, and updates the EAR to bring it in line with those changes, including with international authorities supported by the United States and which already are in effect. Lastly, these updates and the recognition of India as a member of the MTCR help to prevent confusing the public as to the status of the named destinations for purposes of export controls under the EAR. No other law requires that a notice of proposed rulemaking and an opportunity for public comment be given for this rule. Because a notice of proposed rulemaking and an opportunity for public comment are not required to be given for this rule by 5 U.S.C. 553, or by any other law, the analytical requirements of the Regulatory Flexibility Act, 5 U.S.C. 601 et seq., are not applicable. Therefore, this regulation is issued in final form and is made effective immediately upon publication.

    List of Subjects 15 CFR Part 738

    Exports.

    15 CFR Part 740

    Administrative practice and procedure, Exports, Reporting and recordkeeping requirements.

    15 CFR Part 742

    Exports, Terrorism.

    15 CFR Part 746

    Exports, Reporting and recordkeeping requirements.

    Accordingly, parts 738, 740, 742 and 746 of the Export Administration Regulations (15 CFR parts 730-774) are amended as follows:

    PART 738—[AMENDED]

    1. The authority citation for part 738 continues to read as follows:

    Authority:

    50 U.S.C. 4601 et seq.; 50 U.S.C. 1701 et seq.; 10 U.S.C. 7420; 10 U.S.C. 7430(e); 22 U.S.C. 287c; 22 U.S.C. 3201 et seq.; 22 U.S.C. 6004; 42 U.S.C. 2139a; 15 U.S.C. 1824a; 50 U.S.C. 4305; 22 U.S.C. 7201 et seq.; 22 U.S.C. 7210; E.O. 13026, 61 FR 58767, 3 CFR, 1996 Comp., p. 228; E.O. 13222, 66 FR 44025, 3 CFR, 2001 Comp., p. 783; Notice of August 4, 2016, 81 FR 52587 (August 8, 2016).

    Supplement No. 1 to Part 738—[Amended]

    2. Supplement No. 1 to part 738 “Commerce Country Chart” is amended by removing the footnote notation number 1 from “Cote d'Ivoire” and “Liberia”.

    PART 740—[AMENDED]

    3. The authority citation for part 740 continues to read as follows:

    Authority:

    50 U.S.C. 4601 et seq.; 50 U.S.C. 1701 et seq.; 22 U.S.C. 7201 et seq.; E.O. 13026, 61 FR 58767, 3 CFR, 1996 Comp., p. 228; E.O. 13222, 66 FR 44025, 3 CFR, 2001 Comp., p. 783; Notice of August 4, 2016, 81 FR 52587 (August 8, 2016).

    Supplement No. 1 to Part 740—[Amended]

    4. Supplement No. 1 to part 740, Country Group D, is amended by:

    a. Removing the entries for “Cote d'Ivoire”, “Liberia” and “Sri Lanka”; and

    b. Removing the “X” under column D:5 “U.S. Arms Embargoed Countries” for “Vietnam”.

    PART 742—[AMENDED]

    5. The authority citation for part 742 continues to read as follows:

    Authority:

    50 U.S.C. 4601 et seq.; 50 U.S.C. 1701 et seq.; 22 U.S.C. 3201 et seq.; 42 U.S.C. 2139a; 22 U.S.C. 7201 et seq.; 22 U.S.C. 7210; Sec. 1503, Pub. L. 108-11, 117 Stat. 559; E.O. 12058, 43 FR 20947, 3 CFR, 1978 Comp., p. 179; E.O. 12851, 58 FR 33181, 3 CFR, 1993 Comp., p. 608; E.O. 12938, 59 FR 59099, 3 CFR, 1994 Comp., p. 950; E.O. 13026, 61 FR 58767, 3 CFR, 1996 Comp., p. 228; E.O. 13222, 66 FR 44025, 3 CFR, 2001 Comp., p. 783; Presidential Determination 2003-23, 68 FR 26459, 3 CFR, 2004 Comp., p. 320; Notice of November 12, 2015, 80 FR 70667 (November 13, 2015); Notice of August 4, 2016, 81 FR 52587 (August 8, 2016).

    § 742.5 [Amended]
    6. Section 742.5 is amended by removing the clause “, and India as an MTCR adherent,” from the first sentence of paragraph (d).

    PART 746—[AMENDED]

    7. The authority citation for 15 CFR part 746 continues to read as follows:

    Authority:

    50 U.S.C. 4601 et seq.; 50 U.S.C. 1701 et seq.; 22 U.S.C. 287c; Sec. 1503, Pub. L. 108-11, 117 Stat. 559; 22 U.S.C. 6004; 22 U.S.C. 7201 et seq.; 22 U.S.C. 7210; E.O. 12854, 58 FR 36587, 3 CFR, 1993 Comp., p. 614; E.O. 12918, 59 FR 28205, 3 CFR, 1994 Comp., p. 899; E.O. 13222, 66 FR 44025, 3 CFR, 2001 Comp., p. 783; E.O. 13338, 69 FR 26751, 3 CFR, 2004 Comp., p. 168; Presidential Determination 2003-23, 68 FR 26459, 3 CFR, 2004 Comp., p. 320; Presidential Determination 2007-7, 72 FR 1899, 3 CFR, 2006 Comp., p. 325; Notice of May 3, 2016, 81 FR 27293 (May 5, 2016); Notice of August 4, 2016, 81 FR 52587 (August 8, 2016).

    § 746.1 [Amended]
    8. Section 746.1 is amended by removing “Cote d'Ivoire (Ivory Coast),” and “Liberia,” from the list of countries in paragraph (b)(2).

    Dated: October 28, 2016. Kevin J. Wolf, Assistant Secretary for Export Administration.
    [FR Doc. 2016-26535 Filed 11-3-16; 8:45 am] BILLING CODE 3510-33-P
    DEPARTMENT OF THE TREASURY

    Office of Foreign Assets Control

    31 CFR Parts 501 and 593 and Appendix A to Chapter V

    Amendments to OFAC Regulations To Remove the Former Liberian Regime of Charles Taylor Sanctions Regulations and References to Fax-on-Demand Service

    AGENCY:

    Office of Foreign Assets Control, Treasury.

    ACTION:

    Final rule.

    SUMMARY:

    The Department of the Treasury's Office of Foreign Assets Control (OFAC) is removing from the Code of Federal Regulations the Former Liberian Regime of Charles Taylor Sanctions Regulations as a result of the termination of the national emergency on which the regulations were based. OFAC also is amending the Reporting, Procedures and Penalties Regulations and Appendix A to chapter V by making technical changes including to remove references to OFAC's fax-on-demand service in order to reflect the discontinuation of that service.

    DATES:

    Effective: November 4, 2016.

    FOR FURTHER INFORMATION CONTACT:

    The Department of the Treasury's Office of Foreign Assets Control: Assistant Director for Licensing, tel.: 202-622-2480, Assistant Director for Regulatory Affairs, tel.: 202-622-4855, Assistant Director for Sanctions Compliance & Evaluation, tel.: 202-622-2490, or the Department of the Treasury's Office of the Chief Counsel (Foreign Assets Control), Office of the General Counsel, tel.: 202-622-2410.

    SUPPLEMENTARY INFORMATION:

    Electronic Availability

    This document and additional information concerning OFAC are available from OFAC's Web site (www.treasury.gov/ofac).

    Background

    Removal of the Former Liberian Regime of Charles Taylor Sanctions Regulations

    On July 22, 2004, the President signed Executive Order 13348, “Blocking Property of Certain Persons and Prohibiting the Importation of Certain Goods from Liberia” (E.O. 13348), in which he declared a national emergency to deal with the unusual and extraordinary threat posed to United States foreign policy by the actions and policies of former Liberian President Charles Taylor and other persons, in particular their unlawful depletion of Liberian resources and their removal from Liberia and secreting of Liberian funds and property, which undermined Liberia's transition to democracy and the orderly development of its political, administrative, and economic institutions and resources. The President further noted that the Comprehensive Peace Agreement signed on August 18, 2003, and the related ceasefire had not yet been universally implemented throughout Liberia, and that the illicit trade in round logs and timber products was linked to the proliferation of and trafficking in illegal arms, which perpetuated the Liberian conflict and fueled and exacerbated other conflicts throughout West Africa.

    E.O. 13348 blocked all property and interests in property of the persons listed in the Annex to E.O. 13348 and any person determined: (1) To be or have been an immediate family member of Charles Taylor; (2) to have been a senior official of the former Liberian regime headed by Charles Taylor or otherwise to have been or be a close ally or associate of Charles Taylor or the former Liberian regime; (3) to have materially assisted, sponsored, or provided financial, material, or technological support for, or goods or services in support of, the unlawful depletion of Liberian resources, the removal of Liberian resources from that country, and the secreting of Liberian funds and property by any person whose property or interests in property are blocked pursuant to E.O. 13348; or (4) to be owned or controlled by, or acting or purporting to act for or on behalf of, directly or indirectly, any person whose property or interests in property are blocked pursuant to E.O. 13348. E.O. 13348 also prohibited the direct or indirect importation into the United States of any round log or timber product originating in Liberia.

    On May 23, 2007, OFAC issued the Former Liberian Regime of Charles Taylor Sanctions Regulations, 31 CFR part 593, as a final rule to implement E.O. 13348 (72 FR 28855, May 23, 2007).

    On November 12, 2015, the President issued Executive Order 13710, “Termination of Emergency With Respect to the Actions and Policies of Former Liberian President Charles Taylor” (E.O. 13710). In E.O. 13710, the President found that the situation that gave rise to the declaration of a national emergency in E.O. 13348 had been significantly altered by Liberia's significant advances to promote democracy and the orderly development of its political, administrative, and economic institutions, including presidential elections in 2005 and 2011, which were internationally recognized as freely held; the 2012 conviction of, and 50-year prison sentence for, former Liberian President Charles Taylor and the affirmation on appeal of that conviction and sentence; and the diminished ability of those connected to former Liberian President Charles Taylor to undermine Liberia's progress. As a result, he terminated the national emergency declared in E.O. 13348 and revoked that order.

    Accordingly, OFAC is removing the Former Liberian Regime of Charles Taylor Sanctions Regulations from the Code of Federal Regulations. Pursuant to section 202 of the National Emergencies Act (50 U.S.C. 1622) and section 1 of E.O. 13710, termination of the national emergency declared in E.O. 13348 shall not affect any action taken or proceeding pending that was not fully concluded or determined as of 2:00 p.m. eastern standard time on November 12, 2015 (the effective date of E.O. 13710), any action or proceeding based on any act committed prior to the effective date, or any rights or duties that matured or penalties that were incurred prior to the effective date.

    Technical Changes

    On June 10, 2016, OFAC announced on its Web site that it was terminating its fax-on-demand service due to a lack of user demand. OFAC is making technical changes to its regulations including to reflect the discontinuation of the fax-on-demand service.

    The Reporting, Procedures and Penalties Regulations, 31 CFR part 501 (RPPR), set forth standard reporting and recordkeeping requirements and license application and other procedures relevant to the economic sanctions programs administered by OFAC. OFAC is revising section 501.603 of the RPPR, which covers reports on blocked property, and section 501.801 of the RPPR, which covers licensing, in each case to remove references to OFAC's fax-on-demand service and to make certain other technical changes.

    Appendix A to chapter V (Appendix A) sets forth information pertaining to OFAC's Specially Designated Nationals and Blocked Persons List. OFAC is making two revisions to Appendix A, in each case to remove references to OFAC's fax-on-demand service.

    Public Participation

    Because parts 501 and 593 and Appendix A to 31 CFR chapter V involve a foreign affairs function, the provisions of Executive Order 12866 and the Administrative Procedure Act (5 U.S.C. 553) requiring notice of proposed rulemaking, opportunity for public participation, and delay in effective date are inapplicable. Because no notice of proposed rulemaking is required for this rule, the Regulatory Flexibility Act (5 U.S.C. 601-612) does not apply.

    Paperwork Reduction Act

    Pursuant to the Paperwork Reduction Act of 1995 (44 U.S.C. 3507), the collections of information contained in the RPPR have been approved by the Office of Management and Budget under control number 1505-0164. An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless the collection of information displays a valid control number.

    List of Subjects in 31 CFR Parts 501 and 593 and Appendix A to Chapter V

    Administrative practice and procedure, Banks, Banking, Blocking of assets, Credit, Foreign trade, Imports, Liberia, Penalties, Reporting and recordkeeping requirements, Sanctions, Securities.

    For the reasons set forth in the preamble, and under the authority of 3 U.S.C. 301; 50 U.S.C. 1601-1651; E.O. 13348, 69 FR 44885, 3 CFR, 2004 Comp., p. 189; E.O. 13710, 80 FR 71679, OFAC amends 31 CFR chapter V as follows:

    PART 501—REPORTING, PROCEDURES AND PENALTIES REGULATIONS

    1. The authority citation for part 501 continues to read as follows:

    Authority:

    8 U.S.C. 1189; 18 U.S.C. 2332d, 2339B; 19 U.S.C. 3901-3913; 21 U.S.C. 1901-1908; 22 U.S.C. 287c; 22 U.S.C. 2370(a), 6009, 6032, 7205; 28 U.S.C. 2461 note; 31 U.S.C. 321(b); 50 U.S.C. 1701-1706; 50 U.S.C. App. 1-44.

    Subpart C—Reports
    2. Amend § 501.603 to revise the second sentence of paragraph (b)(2)(ii) to read as follows:
    § 501.603 Reports on blocked property.

    (b) * * *

    (2) * * *

    (ii) * * * Copies of Form TDF 90-22.50 may be obtained directly from the Office of Foreign Assets Control by downloading the form from the OFAC Reporting and License Application Forms page on OFAC's Web site (https://www.treasury.gov/resource-center/sanctions/Pages/forms-index.aspx). * * *

    Subpart E—Procedures

    3. Amend § 501.801 to revise the penultimate sentence of paragraph (b)(2) to read as follows:
    § 501.801 Licensing.

    (b) * * *

    (2) * * * The form, which requires information regarding the date of the blocking, the financial institutions involved in the transfer, and the beneficiary and amount of the transfer, may be obtained from the OFAC Reporting and License Application Forms page on OFAC's Web site (https://www.treasury.gov/resource-center/sanctions/Pages/forms-index.aspx) or the Office of Foreign Assets Control, Licensing Division, U.S. Department of the Treasury, 1500 Pennsylvania Avenue NW., Freedman's Bank Building, Washington, DC 20220. * * *

    PART 593—[REMOVED]

    4. Remove part 593.

    Appendix A to Chapter V—[AMENDED]

    5. The authority citation for appendix A to chapter V continues to read as follows:

    Authority:

    3 U.S.C. 301; 8 U.S.C. 1182, 1189; 18 U.S.C. 2339B; 21 U.S.C. 1901-1908; 22 U.S.C. 287c; 31 U.S.C. 321(b); 50 U.S.C. App. 1-44; Public Law 110-286, 122 Stat. 2632 (50 U.S.C. 1701 note); Public Law 111-195, 124 Stat. 1312 (22 U.S.C. 8501-8551); Public Law 112-81, 125 Stat. 1298 (22 U.S.C. 8513a); Public Law 112-158, 126 Stat. 1214 (22 U.S.C. 8701-8795); Public Law 112-208, 126 Stat. 1502; Public Law 113-278, 128 Stat. 3011 (50 U.S.C. 1701 note).

    6. Amend appendix A to chapter V as follows: a. Remove the third introductory paragraph, which states: “Finally, the public may obtain information on blocking, designation, identification, and delisting actions through OFAC's fax-on-demand service, at 202/622-0077.” b. Remove the fifth sentence of note 5, which states: “Information also is available by fax through OFAC's fax-on-demand service, at 202/622-0077.”

    John E. Smith, Acting Director, Office of Foreign Assets Control.
    [FR Doc. 2016-26717 Filed 11-3-16; 8:45 am] BILLING CODE 4810-AL-P
    DEPARTMENT OF THE TREASURY
    Financial Crimes Enforcement Network
    31 CFR Parts 1010, 1020, 1021, 1022, 1023, 1024, 1025, and 1026
    RIN 1506-AB32
    Technical Amendments to Various Bank Secrecy Act Regulations

    AGENCY:

    Financial Crimes Enforcement Network (“FinCEN”), Treasury.

    ACTION:

    Final rule.

    SUMMARY:

    FinCEN is issuing this final rule to make a number of technical amendments. This final rule updates various sections of the regulations implementing the Bank Secrecy Act (“BSA”) by removing or replacing outdated references to obsolete BSA forms, removing references to outdated recordkeeping storage media, and replacing several other outdated terms and references.

    DATES:

    Effective November 4, 2016.

    FOR FURTHER INFORMATION CONTACT:

    FinCEN Resource Center at 1-800-767-2825 or 1-703-905-3591 (not a toll free number) and select option 3 for regulatory questions. Email inquiries can be sent to [email protected].

    SUPPLEMENTARY INFORMATION:

    I. Statutory and Regulatory Background

    The BSA, Titles I and II of Public Law 91-508, as amended, codified at 12 U.S.C. 1829b, 12 U.S.C. 1951-1959, and 31 U.S.C. 5311-5314 and 5316-5332, authorizes the Secretary of the Treasury (“Secretary”), among other things, to issue regulations requiring persons to keep records and file reports that are determined to have a high degree of usefulness in criminal, tax, regulatory, and counter-terrorism matters. The regulations implementing the BSA appear at 31 CFR chapter X. The Secretary's authority to administer the BSA has been delegated to the Director of FinCEN.1

    1 Treasury Order 180-01 (Sept. 26, 2002).

    II. Discussion of Changes

    In 2010, FinCEN reorganized the BSA's implementing regulations previously appearing in part 103 of title 31 of the Code of Federal Regulations by transferring them to a new chapter in title 31—chapter X.2 When chapter X was published, BSA reporting forms were specific to particular segments of the financial industry, and the names of those industry-specific forms currently appear in chapter X. FinCEN has since revised a number of forms so that they may be used by a range of industry segments and no longer carry industry-specific designations. The transition from industry-specific forms began by replacing the various currency transaction reports previously used by different industry segments, with an industry-wide, single BSA form for currency transactions—the Bank Secrecy Act Currency Transaction Report. FinCEN has also replaced the five industry-specific suspicious activity reports with a combined suspicious activity report, the Bank Secrecy Act Suspicious Activity Report, which is now used by various financial industry segments. This final rule revises the BSA regulations by updating them to reflect the names of the new reports.

    2 See 75 FR 65806 (Oct. 26, 2010).

    A number of recordkeeping requirements in 31 CFR chapter X refer to the use of a type of data storage media—microfilm—that is no longer in wide use (or in many cases not even available) for copies of records required to be kept. This final rule removes those outdated references. If, however, a financial institution continues to use microfilm for copies, the rule change does not require the financial institution to use a different type of media for copies, nor does it require existing copies that were made on microfilm to be transferred to newer types of media.

    Finally, this final rule replaces several other outdated terms or references where appropriate, such as the reference to filing reports with the Commissioner of Internal Revenue (“IRS”). Effective July 1, 2011, all BSA reports are electronically filed with FinCEN, not the IRS.

    III. Administrative Procedure Act and Effective Date

    Under 5 U.S.C. 553(b)(3)(B) of the Administrative Procedure Act (“APA”), an agency may, for good cause, find (and incorporate the finding and a brief statement of reasons in the rules issued) that notice and public comment procedure on a rule is impracticable, unnecessary, or contrary to the public interest. Currently, 31 CFR chapter X contains references to outdated forms/reports and dated terminology that may be confusing to the public. The rule solely clarifies those terms and references and makes no substantive change to any reporting requirement. For these reasons, the agency has determined that publishing a notice of proposed rulemaking and providing opportunity for public comment is unnecessary.

    Under 5 U.S.C. 553(d)(3) of the APA, the required publication or service of a substantive rule shall be made not less than 30 days before its effective date, except, among other things, as provided by the agency for good cause found and published with the rule. FinCEN finds that there is good cause for shortened notice since the revisions made by this final rule are minor, non-substantive, and technical. This final rule takes effect November 4, 2016.

    IV. Regulatory Flexibility Act

    The Regulatory Flexibility Act (“RFA”) does not apply to a rulemaking where a general notice of proposed rulemaking is not required.3 As noted previously, FinCEN has determined that it is unnecessary to publish a notice of proposed rulemaking for this final rule. Accordingly, the RFA's requirements relating to an initial and final regulatory flexibility analysis do not apply.

    3 See 5 U.S.C. 603 and 604.

    V. Executive Orders 13563 and 12866

    FinCEN has determined that Executive Orders 13563 and 12866 do not apply to this final rulemaking.

    VI. Paperwork Reduction Act Notices

    There are no collection of information requirements in this final rule.

    VII. Unfunded Mandates Act of 1995 Statement

    Section 202 of the Unfunded Mandates Reform Act of 1995, 2 U.S.C. 1532 (“Unfunded Mandates Act”), requires that an agency prepare a budgetary impact statement before promulgating any rule likely to result in a Federal mandate that may result in the expenditure by State, local, and tribal governments, in the aggregate, or by the private sector, of $100 million or more in any one year. If a budgetary impact statement is required, section 205 of the Unfunded Mandates Act also requires an agency to identify and consider a reasonable number of regulatory alternatives before promulgating a rule. FinCEN has determined that no portion of this final rule will result in expenditures by State, local, and tribal governments, or by the private sector, of $100 million or more in any one year. Accordingly, this final rule is not subject to section 202 of the Unfunded Mandates Act.

    List of Subjects in 31 CFR Parts 1010, 1020, 1021, 1022, 1023, 1024, 1025, and 1026

    Administrative practice and procedure, Banks, banking, Brokers, Currency, Foreign banking, Foreign currencies, Gambling, Investigations, Penalties, Reporting and recordkeeping requirements, Securities, Terrorism.

    Authority and Issuance

    For the reasons set forth in the preamble, chapter X of title 31 of the Code of Federal Regulations is amended as follows:

    PART 1010—GENERAL PROVISIONS

    1. The authority citation for part 1010 is revised to read as follows:

    Authority:

    12 U.S.C. 1829b and 1951-1959; 31 U.S.C. 5311-5314 and 5316-5332; title III, sec. 314, Pub. L. 107-56, 115 Stat. 307; sec. 701, Pub. L. 114-74, 129 Stat. 599.

    § 1010.306 [Amended]
    2. Amend § 1010.306 as follows: a. In paragraph (a)(3), remove the words “the Commissioner of Internal Revenue” and add the word “FinCEN” in their place. b. In paragraph (c), remove the words “the Commissioner of Internal Revenue” and add the word “FinCEN” in their place. c. In paragraph (e), in the first sentence, remove the words “the Internal Revenue Service” and add the words “BSA E-Filing System” in their place and, in the second sentence, add the words “or FinCEN” after the words “U.S. Customs and Border Protection.”
    § 1010.410 [Amended]
    3. Amend § 1010.410 by removing the words “microfilm or other” from the introductory text.
    § 1010.430 [Amended]
    4. Amend § 1010.430 by removing the words “microfilm or other” in paragraph (a).
    § 1010.715 [Amended]
    5. Amend § 1010.715 by removing “1506-0009” and adding “1506-0050” in its place.
    § 1010.940 [Amended]
    6. Amend § 1010.940 in the introductory text by removing the words “microfilming or other.”

    PART 1020—RULES FOR BANKS

    7. The authority citation for part 1020 is revised to read as follows:

    Authority:

    12 U.S.C. 1829b and 1951-1959; 31 U.S.C. 5311-5314 and 5316-5332; title III, sec. 314, Pub. L. 107-56, 115 Stat. 307; sec. 701, Pub. L. 114-74, 129 Stat. 599.

    8. Revise the heading for part 1020 to read as set forth above.
    § 1020.410 [Amended]
    9. Amend § 1020.410 as follows: a. In paragraph (a) introductory text by removing the words “microfilm or other.” b. In paragraph (a)(1) by removing the words “microfilm, other” each place they appear. c. In paragraph (c) introductory text by removing the words “microfilm or other.”

    PART 1021—RULES FOR CASINOS AND CARD CLUBS

    10. The authority citation for part 1021 is revised to read as follows:

    Authority:

    12 U.S.C. 1829b and 1951-1959; 31 U.S.C. 5311-5314 and 5316-5332; title III, sec. 314, Pub. L. 107-56, 115 Stat. 307; sec. 701, Pub. L. 114-74, 129 Stat. 599.

    § 1021.320 [Amended]
    11. Amend § 1021.320 as follows: a. In paragraph (b)(1) by removing the words “by Casinos (“SARC”)” and adding the word “(“SAR”)” in their place. b. In paragraph (b)(2) by removing the word “SARC” each place it appears and adding in its place the word “SAR.” c. In paragraph (b)(3) by removing the word “SARC” each place it appears and adding in its place the word “SAR.” d. In paragraph (c) by removing the word “SARC” and adding in its place the word “SAR.” e. In paragraph (d) by removing the word “SARC” each place it appears and adding in its place the word “SAR.”
    § 1021.410 [Amended]
    12. Amend § 1021.410 in paragraph (b) introductory text by removing the words “microfilm or other.”

    PART 1022—RULES FOR MONEY SERVICES BUSINESSES

    13. The authority citation for part 1022 is revised to read as follows:

    Authority:

    12 U.S.C. 1829b and 1951-1959; 31 U.S.C. 5311-5314 and 5316-5332; title III, sec. 314, Pub. L. 107-56, 115 Stat. 307; sec. 701, Pub. L. 114-74, 129 Stat. 599.

    § 1022.320 [Amended]
    14. Amend § 1022.320 as follows: a. In paragraph (b)(1) by removing the words “-MSB (“SAR-MSB”)” and adding the word “(“SAR”)” in their place. b. In paragraph (b)(2) by removing the word “SAR-MSB” each place it appears and adding in its place the word “SAR.” c. In paragraph (b)(3) by removing the word “SAR-MSB” each place it appears and adding in its place the word “SAR.” d. In paragraph (c) by removing the word “SAR-MSB” each place it appears and adding in its place the word “SAR.”
    § 1022.380 [Amended]
    15. Amend § 1022.380(b)(1)(i) by removing the words “the Enterprise Computing Center in Detroit of the Internal Revenue Service” and adding in their place the word “FinCEN.”

    PART 1023—RULES FOR BROKERS OR DEALERS IN SECURITIES

    16. The authority citation for part 1023 is revised to read as follows:

    Authority:

    12 U.S.C. 1829b and 1951-1959; 31 U.S.C. 5311-5314 and 5316-5332; title III, sec. 314, Pub. L. 107-56, 115 Stat. 307; sec. 701, Pub. L. 114-74, 129 Stat. 599.

    § 1023.320 [Amended]
    17. Amend § 1023.320 as follows: a. In paragraph (b)(1) by removing the words “by the Securities and Futures Industry (“SAR-SF”)” and adding the word “(“SAR”)” in their place. b. In paragraph (b)(2) by removing the word “SAR-SF” each place it appears and adding in its place the word “SAR.” c. In paragraph (b)(3) by removing the word “SAR-SF” each place it appears and adding in its place the word “SAR.” d. In paragraph (c) introductory text by removing the word “SAR-SF” and adding in its place the word “SAR.” e. In paragraph (d) by removing the word “SAR-SF” each place it appears and adding in its place the word “SAR.”

    PART 1024—RULES FOR MUTUAL FUNDS

    18. The authority citation for part 1024 is revised to read as follows:

    Authority:

    12 U.S.C. 1829b and 1951-1959; 31 U.S.C. 5311-5314 and 5316-5332; title III, sec. 314, Pub. L. 107-56, 115 Stat. 307; sec. 701, Pub. L. 114-74, 129 Stat. 599.

    § 1024.320 [Amended]
    19. Amend § 1024.320 as follows: a. In paragraph (b)(1) by removing the words “by Securities and Futures Industries (“SAR-SF”)” and adding the word “(“SAR”)” in their place. b. In paragraph (b)(2) by removing the word “SAR-SF” each place it appears and adding in its place the word “SAR.” c. In paragraph (b)(3) by removing the word “SAR-SF” each place it appears and adding in its place the word “SAR.” d. In paragraph (b)(4) by removing the word “SAR-SF” and adding in its place the word “SAR.” e. In paragraph (b)(5) by removing the word “SAR-SF” and adding in its place the word “SAR.” f. In paragraph (c) by removing the word “SAR-SF” each place it appears and adding in its place the word “SAR.”

    PART 1025—RULES FOR INSURANCE COMPANIES

    20. The authority citation for part 1025 is revised to read as follows:

    Authority:

    12 U.S.C. 1829b and 1951-1959; 31 U.S.C. 5311-5314 and 5316-5332; title III, sec. 314, Pub. L. 107-56, 115 Stat. 307; sec. 701, Pub. L. 114-74, 129 Stat. 599.

    § 1025.320 [Amended]
    21. Amend § 1025.320 as follows: a. In paragraph (b)(1) by removing the words “by Insurance Companies (SAR-IC)” and adding the word “(“SAR”)” in their place. b. In paragraph (b)(2) by removing the word “SAR-IC” each place it appears and adding in its place the word “SAR.” c. In paragraph (b)(3) by removing the word “SAR-IC” each place it appears and adding in its place the word “SAR.” d. In paragraph (c) by removing the word “SAR-IC” and adding in its place the word “SAR.” e. In paragraph (d) by removing the word “SAR-IC” each place it appears and adding in its place the word “SAR.”

    PART 1026—RULES FOR FUTURES COMMISSION MERCHANTS AND INTRODUCING BROKERS IN COMMODITIES

    22. The authority citation for part 1026 is revised to read as follows:

    Authority:

    12 U.S.C. 1829b and 1951-1959; 31 U.S.C. 5311-5314 and 5316-5332; title III, sec. 314, Pub. L. 107-56, 115 Stat. 307; sec. 701, Pub. L. 114-74, 129 Stat. 599.

    § 1026.320 [Amended]
    23. Amend § 1026.320 as follows: a. In paragraph (b)(1) by removing the words “by Securities and Futures Industries (“SAR-SF”)” and adding the word “(“SAR”)” in their place. b. In paragraph (b)(2) by removing the word “SAR-SF” each place it appears and adding in its place the word “SAR.” c. In paragraph (b)(3) by removing the word “SAR-SF” each place it appears and adding in its place the word “SAR.” d. In paragraph (c) introductory text by removing the word “SAR-SF” and adding in its place the word “SAR.” e. In paragraph (d) by removing the word “SAR-SF” each place it appears and adding in its place the word “SAR.”

    Dated: October 31, 2016. Jamal El Hindi, Deputy Director, Financial Crimes Enforcement Network.
    [FR Doc. 2016-26557 Filed 11-3-16; 8:45 am] BILLING CODE 4810-02-P
    DEPARTMENT OF HOMELAND SECURITY
    Coast Guard
    33 CFR Part 100
    [Docket No. USCG-2016-0959]
    Special Local Regulations; Key West World Championship, Key West, FL

    AGENCY:

    Coast Guard, DHS.

    ACTION:

    Notice of enforcement of regulation.

    SUMMARY:

    The Coast Guard will enforce the Key West World Championship Special Local Regulation from 9:30 a.m. until 4:30 p.m. on November 9, 11, and 13, 2016. This action is necessary to ensure safety of life on navigable waters of the United States and to protect race participants, participant vessels, spectators, and the general public from the hazards associated with high-speed boat races. During the enforcement period, and in accordance with previously issued special local regulations, no person or vessel may enter the regulated area without permission from the Captain of the Port Key West or a designated representative.

    DATES:

    The regulations in 33 CFR 100.701 will be enforced from 9:30 a.m. until 4:30 p.m. on November 9, 11, and 13, 2016, for the marine event listed in item (c)(9) in the Table to § 100.701.

    FOR FURTHER INFORMATION CONTACT:

    If you have questions on this notice, call or email Lieutenant Scott Ledee, Sector Key West Waterways Management Department, Coast Guard; telephone (305) 292-8768, email [email protected].

    SUPPLEMENTARY INFORMATION:

    The Coast Guard will enforce the Special Local Regulation for the annual Key West World Championship Super Boat Race, listed in the Table to § 100.701, item (c)(9), from 9:30 a.m. until 4:30 p.m. on November 9, 11, and 13, 2016.

    On November 9, 11, and 13, 2016, Super Boat International Productions, Inc. is hosting the Key West World Championship, a series of high-speed boat races.

    Under the provisions of 33 CFR 100.701, no unauthorized person or vessel may enter, transit through, anchor within, or remain in the established regulated areas unless permission to enter has been granted by the Captain of the Port Key West or a designated representative. This action provides for enforcement of a regulated area that encompasses portions of the waters of the Atlantic Ocean located southwest of Key West, Florida. The Coast Guard may be assisted by other Federal, State, or local law enforcement agencies in enforcing this regulation.

    This notice of enforcement is issued under authority of 33 CFR 100.701 and 5 U.S.C. 552(a). The Coast Guard will provide notice of the regulated area by Local Notice to Mariners, Broadcast Notice to Mariners, and on-scene designated representatives. If the Captain of the Port Key West determines that the regulated area need not be enforced for the full duration stated in this publication, he or she may use a Broadcast Notice to Mariners to grant general permission to enter the regulated area.

    Dated: October 25, 2016. J.A. Janszen, Captain, U.S. Coast Guard, Captain of the Port Key West.
    [FR Doc. 2016-26695 Filed 11-3-16; 8:45 am] BILLING CODE 9110-04-P
    DEPARTMENT OF HOMELAND SECURITY
    Coast Guard
    33 CFR Part 117
    [Docket No. USCG-2016-0963]
    RIN 1625-AA09
    Drawbridge Operation Regulations; Tchefuncta River, Madisonville, LA

    AGENCY:

    Coast Guard, DHS.

    ACTION:

    Notice of temporary deviation from regulations; request for comments.

    SUMMARY:

    The Coast Guard has issued a temporary deviation from the operating schedule that governs the State Route 22 Bridge (Madisonville (SR22) swing span bridge) across the Tchefuncta River, mile 2.5, at Madisonville, St. Tammany Parish, Louisiana. This deviation will test a change to the drawbridge operation schedule to determine whether a permanent change to the schedule is needed. Under this deviation, the bridge will open for vessels only on the hour during the day and will not open for vessels during the weekday peak traffic hours.

    DATES:

    This deviation is effective from 6 a.m. on November 21, 2016 through midnight on May 18, 2017.

    Comments and related material must reach the Coast Guard on or before January 18, 2017.

    ADDRESSES:

    You may submit comments identified by docket number USCG-2016-0963 using the Federal eRulemaking Portal at http://www.regulations.gov. See the “Public Participation and Request for Comments” portion of the SUPPLEMENTARY INFORMATION section below for instructions on submitting comments.

    FOR FURTHER INFORMATION CONTACT:

    If you have questions on this test deviation, call or email David Frank, Bridge Administrator at 504-671-2128, email [email protected].

    SUPPLEMENTARY INFORMATION:

    I. Background, Purpose and Legal Basis

    The State Route 22 (SR 22) swing span bridge across the Tchefuncta River, mile 2.5, at Madisonville, St. Tammany Parish, Louisiana, presently operates under 33 CFR 117.500. The SR 22 swing bridge has a vertical clearance of 6.2 feet above Mean High Water (MHW) in the closed-to-navigation position and unlimited clearance in the open-to-navigation position.

    Local governmental officials from St. Tammany Parish and the City of Madisonville, in conjunction with the Louisiana Department of Transportation and Development (LDOTD), have requested that the operating regulation of the SR 22 swing span bridge be changed in order to better accommodate the increased vehicular traffic crossing the bridge, especially during peak weekday rush hours. A traffic study conducted by the LDOTD has determined that the existing vehicular traffic at the intersection of SR 22 and SR 21/SR 1077 is over capacity at peak hours and causes unacceptable levels of delay to roadway traffic. This situation is compounded by the opening of the bridge during these peak hours. A combination of modifications to the operating schedule of the bridge and modifications to the traffic controls at this intersection will improve traffic flow and reduce traffic delays.

    Approximately 7,500 vehicles cross the bridge daily between the hours of 6 a.m. and 7 p.m. Vessel opening records for the month of July indicate that the bridge opened to pass vessels 118 times on weekdays and 202 times on weekends. Openings for the month of August dropped to 68 on weekdays and 85 on weekends.

    Concurrent with the publication of the Test Deviation, a Notice of Proposed Rulemaking (NPRM) [USCG-2016-0963] has been issued to allow the LDOTD to test the proposed schedule and to obtain data and public comments. The test period will be in effect during the entire NPRM comment period. The Coast Guard will review the logs of the drawbridge, the traffic counts provided by LDOTD, and evaluate public comments from this NPRM and the above referenced Temporary Deviation to determine if the requested change to the permanent special drawbridge operating regulation is warranted.

    The deviation to test the proposed schedule will allow the SR 22 Bridge, mile 2.5, at Madisonville to operate as follows: The draw of the SR 22 Bridge shall open on signal from 7 p.m. to 6 a.m. From 6 a.m. to 7 p.m., the draw need only open on the hour, except that the draw need not open for the passage of vessels at 8 a.m., 5 p.m., and 6 p.m., Monday through Friday, except Federal holidays. The bridge will open at any time for an emergency.
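    For readers who want to check a particular date and time against the test schedule, the sketch below restates the schedule as Python code. It is illustrative only and has no regulatory effect; the function name is invented for this example, the caller is assumed to supply the applicable Federal holidays, and emergency openings (which are permitted at any time) are deliberately not modeled.

```python
from datetime import date, datetime

def draw_must_open(request_time: datetime, federal_holidays: set) -> bool:
    """Return True if, under the test schedule described above, the draw of the
    SR 22 bridge must open for a vessel signalling at request_time.
    Emergency openings (allowed at any time) are not modeled here."""
    t = request_time.time()
    d = request_time.date()

    # From 7 p.m. to 6 a.m. the draw opens on signal.
    if t.hour >= 19 or t.hour < 6:
        return True

    # From 6 a.m. to 7 p.m. the draw need only open on the hour.
    if t.minute != 0:
        return False

    # Exception: no opening at 8 a.m., 5 p.m., or 6 p.m. on weekdays
    # (Monday through Friday) that are not Federal holidays.
    is_weekday = d.weekday() < 5  # Monday = 0 ... Friday = 4
    if is_weekday and d not in federal_holidays and t.hour in (8, 17, 18):
        return False

    return True

# Example: a vessel signalling at 5 p.m. on a non-holiday Wednesday must wait.
print(draw_must_open(datetime(2016, 11, 30, 17, 0), federal_holidays=set()))  # False
```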

    During the 180-day deviation, LDOTD will continue to monitor vehicular traffic counts and work to make roadway traffic control improvements to further reduce vehicular traffic delays.

    There are no alternate routes available for vessels that wish to transit the bridge site; however, if vessels have a vertical clearance requirement of less than 6.2 feet above MHW, they may transit the bridge site at any time.

    II. Public Participation and Request for Comments

    We view public participation as essential to effective rulemaking, and will consider all comments and material received during the comment period. Your comment can help shape the outcome of this rulemaking. If you submit a comment, please include the docket number for this rulemaking, indicate the specific section of this document to which each comment applies, and provide a reason for each suggestion or recommendation.

    We encourage you to submit comments through the Federal eRulemaking Portal at http://www.regulations.gov. If your material cannot be submitted using http://www.regulations.gov, contact the person in the FOR FURTHER INFORMATION CONTACT section of this document for alternate instructions.

    We accept anonymous comments. All comments received will be posted without change to http://www.regulations.gov and will include any personal information you have provided. For more about privacy and the docket, you may review a Privacy Act notice regarding the Federal Docket Management System in the March 24, 2005, issue of the Federal Register (70 FR 15086).

    Documents mentioned in this notice of temporary deviation from regulations, and all public comments, are in our online docket at http://www.regulations.gov and can be viewed by following that Web site's instructions. Additionally, if you go to the online docket and sign up for email alerts, you will be notified when comments are posted or a final rule is published.

    Dated: October 31, 2016. David M. Frank, Bridge Administrator, Eighth Coast Guard District.
    [FR Doc. 2016-26655 Filed 11-3-16; 8:45 am] BILLING CODE 9110-04-P
    DEPARTMENT OF COMMERCE
    Patent and Trademark Office
    37 CFR Part 6
    [Docket No. PTO-T-2016-0038]
    RIN 0651-AD12
    International Trademark Classification Changes

    AGENCY:

    United States Patent and Trademark Office, Commerce.

    ACTION:

    Final rule.

    SUMMARY:

    The United States Patent and Trademark Office (USPTO) issues a final rule to incorporate classification changes adopted by the Nice Agreement Concerning the International Classification of Goods and Services for the Purposes of the Registration of Marks (Nice Agreement). These changes are effective January 1, 2017, and are listed in the International Classification of Goods and Services for the Purposes of the Registration of Marks (11th ed., ver. 2017), which is published by the World Intellectual Property Organization (WIPO).

    DATES:

    This rule is effective on January 1, 2017.

    FOR FURTHER INFORMATION CONTACT:

    Catherine Cain, Office of the Deputy Commissioner for Trademark Examination Policy, at (571) 272-8946 or [email protected].

    SUPPLEMENTARY INFORMATION:

    Purpose: As noted above, this final rule incorporates classification changes adopted by the Nice Agreement that will become effective on January 1, 2017. This rule benefits the public by providing notice regarding these changes.

    Summary of Major Provisions: The USPTO is revising § 6.1 in part 6 of title 37 of the Code of Federal Regulations to incorporate classification changes and modifications that will become effective January 1, 2017, as listed in the International Classification of Goods and Services for the Purposes of the Registration of Marks (11th ed., 2017) (Nice Classification), published by WIPO.

    The Nice Agreement is a multilateral treaty, administered by WIPO, which establishes the international classification of goods and services for the purposes of registering trademarks and service marks. As of September 1, 1973, this international classification system is the controlling system used by the United States, and it applies to all applications filed on or after September 1, 1973, and their resulting registrations, for all statutory purposes. See 37 CFR 2.85(a). Every signatory to the Nice Agreement must utilize the international classification system.

    Each state party to the Nice Agreement is represented in the Committee of Experts of the Nice Union (Committee of Experts), which meets annually to vote on proposed changes to the Nice Classification. Any state that is a party to the Nice Agreement may submit proposals for consideration by the other members in accordance with agreed-upon rules of procedure. Proposals are currently submitted on an annual basis to an electronic forum on the WIPO Web site, commented upon, modified, and compiled by WIPO for further discussion and voting at the annual Committee of Experts meeting.

    In 2013, the Committee of Experts began annual revisions to the Nice Classification. The annual revisions, which are published electronically and enter into force on January 1 each year, are referred to as versions and identified by edition number and year of the effective date (e.g., “Nice Classification, 10th edition, version 2013” or “NCL 10-2013”). Each annual version includes all changes adopted by the Committee of Experts since the adoption of the previous version. The changes consist of the addition of new goods and services to, and deletion of goods and services from, the Alphabetical List, and any modifications to the wording in the Alphabetical List, the class headings, and the explanatory notes that do not involve the transfer of goods or services from one class to another. New editions of the Nice Classification continue to be published electronically and include all changes adopted annually since the previous version, as well as goods or services transferred from one class to another or new classes that are created.

    The annual revisions contained in this final rule consist of modifications to the class headings that have been incorporated into the Nice Agreement by the Committee of Experts. Under the Nice Classification, there are 34 classes of goods and 11 classes of services, each with a class heading. Class headings generally indicate the fields to which goods and services belong. Specifically, this rule adds goods and services to, or deletes goods and services from, 15 class headings and revises the spelling in one class heading. The changes to the class headings further define the types of goods and/or services appropriate to the class. As a signatory to the Nice Agreement, the United States adopts these revisions pursuant to Article 1.

    Costs and Benefits: This rulemaking is not economically significant under Executive Order 12866 (Sept. 30, 1993).

    Discussion of Regulatory Changes

    The USPTO is revising § 6.1 as follows:

    In Class 3, the wording “soaps; perfumery, essential oils, cosmetics, hair lotions; dentifrices” is amended to “non-medicated soaps; perfumery, essential oils, non-medicated cosmetics, non-medicated hair lotions; non-medicated dentifrices.”

    In Class 6, the wording “Common metals and their alloys” is amended to “Common metals and their alloys, ores,” and the separate clause “ores” at the end of the class heading is deleted. The wording “metal building materials” is amended to “metal materials for building and construction.” The wording “materials of metal for railway tracks,” “ironmongery,” and “pipes and tubes of metal” is deleted. The wording “metal containers for storage or transport” is inserted before “safes.”

    In Class 10, the spelling of “orthopedic” is amended to “orthopaedic.” A semi-colon is added after the wording “suture materials,” and the following wording is added: “therapeutic and assistive devices adapted for the disabled; massage apparatus; apparatus, devices and articles for nursing infants; sexual activity apparatus, devices and articles.”

    The wording “precious stones” is amended to “precious and semi-precious stones” in Class 14.

    In Class 16, the wording “and office requisites, except furniture” is added after the term “stationery.” The wording “artists' materials” is amended to “artists' and drawing materials.” The wording “typewriters and office requisites (except furniture)” is deleted. The wording “instructional and teaching material (except apparatus)” is changed to “instructional and teaching materials,” and the wording “plastic materials for packaging” is amended to “plastic sheets, films and bags for wrapping and packaging.” The semi-colon after “printers' type” is replaced with a comma.

    In Class 17, the wording “plastics in extruded form for use in manufacture” is amended to “plastics and resins in extruded form for use in manufacture.” The wording “flexible pipes, not of metal” is changed to “flexible pipes, tubes and hoses, not of metal.”

    In Class 18, “animal skins, hides” is amended to “animal skins and hides,” and “trunks and travelling bags” is amended to “luggage and carrying bags.” A semi-colon is added after the term “saddlery,” and the wording “collars, leashes and clothing for animals” is added thereafter.

    The wording “containers, not of metal, for storage or transport” is added in Class 20 after “Furniture, mirrors, picture frames.” The term “ivory” is deleted.

    “[B]rushes (except paintbrushes)” is amended to “brushes, except paintbrushes” in Class 21. The term “steelwool” is deleted. The wording “unworked or semi-worked glass (except glass used in building)” is amended to “unworked or semi-worked glass, except building glass.”

    In Class 22, “tents, awnings, and tarpaulins” is amended to “tents and tarpaulins; awnings of textile or synthetic materials.” The wording “sacks” is amended to “sacks for the transport and storage of materials in bulk,” and “padding and stuffing materials (except of paper, cardboard, rubber or plastics)” is changed to “padding, cushioning and stuffing materials, except of paper, cardboard, rubber or plastics.” The wording “and substitutes therefor” is inserted after “raw fibrous textile materials.”

    The wording “bed covers; table covers” is deleted, and the wording “household linen; curtains of textile or plastic” is inserted after “Textiles and substitutes for textiles” in Class 24.

    In Class 26, the wording “hair decorations; false hair” is added.

    “Games and playthings” in Class 28 is changed to “Games, toys and playthings,” and the wording “video game apparatus” is added.

    In Class 31, the wording “Agricultural, horticultural and forestry products” is amended to “Raw and unprocessed agricultural, aquacultural, horticultural and forestry products,” and “fresh fruits and vegetables” is amended to “fresh fruits and vegetables, fresh herbs.” The wording “bulbs, seedlings and seeds for planting” is inserted after “natural plants and flowers.” “[F]oodstuffs for animals” is amended to “foodstuffs and beverages for animals.”

    “[S]ecurity services for the protection of property and individuals” is amended to “security services for the physical protection of tangible property and individuals” in Class 45.

    Rulemaking Requirements

    Administrative Procedure Act: The changes in this rulemaking involve rules of agency practice and procedure, and/or interpretive rules. See Perez v. Mortg. Bankers Ass'n, 135 S. Ct. 1199, 1204 (2015) (Interpretive rules “advise the public of the agency's construction of the statutes and rules which it administers.” (citation and internal quotation marks omitted)); Nat'l Org. of Veterans' Advocates v. Sec'y of Veterans Affairs, 260 F.3d 1365, 1375 (Fed. Cir. 2001) (Rule that clarifies interpretation of a statute is interpretive.); Bachow Commc'ns Inc. v. FCC, 237 F.3d 683, 690 (D.C. Cir. 2001) (Rules governing an application process are procedural under the Administrative Procedure Act.); Inova Alexandria Hosp. v. Shalala, 244 F.3d 342, 350 (4th Cir. 2001) (Rules for handling appeals were procedural where they did not change the substantive standard for reviewing claims.).

    Accordingly, prior notice and opportunity for public comment for the changes in this rulemaking are not required pursuant to 5 U.S.C. 553(b) or (c), or any other law. See Perez, 135 S. Ct. at 1206 (Notice-and-comment procedures are required neither when an agency “issue[s] an initial interpretive rule” nor “when it amends or repeals that interpretive rule.”); Cooper Techs. Co. v. Dudas, 536 F.3d 1330, 1336-37 (Fed. Cir. 2008) (stating that 5 U.S.C. 553, and thus 35 U.S.C. 2(b)(2)(B), does not require notice and comment rulemaking for “interpretative rules, general statements of policy, or rules of agency organization, procedure, or practice” (quoting 5 U.S.C. 553(b)(A))).

    Regulatory Flexibility Act: As prior notice and an opportunity for public comment are not required pursuant to 5 U.S.C. 553 or any other law, neither a Regulatory Flexibility Act analysis, nor a certification under the Regulatory Flexibility Act (5 U.S.C. 601, et seq.), is required. See 5 U.S.C. 603.

    Executive Order 12866 (Regulatory Planning and Review): This rulemaking has been determined to be not significant for purposes of Executive Order 12866 (Sept. 30, 1993).

    Executive Order 13563 (Improving Regulation and Regulatory Review): The USPTO has complied with Executive Order 13563 (Jan. 18, 2011). Specifically, the USPTO has, to the extent feasible and applicable: (1) Made a reasoned determination that the benefits justify the costs of the rule changes; (2) tailored the rules to impose the least burden on society consistent with obtaining the regulatory objectives; (3) selected a regulatory approach that maximizes net benefits; (4) specified performance objectives; (5) identified and assessed available alternatives; (6) provided the public with a meaningful opportunity to participate in the regulatory process, including soliciting the views of those likely affected prior to issuing a notice of proposed rulemaking, and provided on-line access to the rulemaking docket; (7) attempted to promote coordination, simplification, and harmonization across government agencies and identified goals designed to promote innovation; (8) considered approaches that reduce burdens and maintain flexibility and freedom of choice for the public; and (9) ensured the objectivity of scientific and technological information and processes, to the extent applicable.

    Executive Order 13132 (Federalism): This rulemaking does not contain policies with federalism implications sufficient to warrant preparation of a Federalism Assessment under Executive Order 13132 (Aug. 4, 1999).

    Unfunded Mandates Reform Act of 1995: The changes set forth in this rulemaking do not involve a Federal intergovernmental mandate that will result in the expenditure by State, local, and tribal governments, in the aggregate, of 100 million dollars (as adjusted) or more in any one year, or a Federal private sector mandate that will result in the expenditure by the private sector of 100 million dollars (as adjusted) or more in any one year, and will not significantly or uniquely affect small governments. Therefore, no actions are necessary under the provisions of the Unfunded Mandates Reform Act of 1995. See 2 U.S.C. 1501 et seq.

    Paperwork Reduction Act: This final rule does not involve information collection requirements which are subject to review by the Office of Management and Budget (OMB) under the Paperwork Reduction Act of 1995 (44 U.S.C. 3501 et seq.).

    List of Subjects in 37 CFR Part 6

    Administrative practice and procedure, Classification, Trademarks.

    For the reasons given in the preamble and under the authority contained in 15 U.S.C. 1112, 1123 and 35 U.S.C. 2, as amended, the USPTO is amending part 6 of title 37 as follows:

    PART 6—CLASSIFICATION OF GOODS AND SERVICES UNDER THE TRADEMARK ACT

    1. The authority citation for 37 CFR part 6 continues to read as follows:

    Authority:

    Secs. 30, 41, 60 Stat. 436, 440; 15 U.S.C. 1112, 1123; 35 U.S.C. 2, unless otherwise noted.

    2. Revise § 6.1 to read as follows:
    § 6.1 International schedule of classes of goods and services.

    Goods

    1. Chemicals used in industry, science and photography, as well as in agriculture, horticulture and forestry; unprocessed artificial resins, unprocessed plastics; manures; fire extinguishing compositions; tempering and soldering preparations; chemical substances for preserving foodstuffs; tanning substances; adhesives used in industry.

    2. Paints, varnishes, lacquers; preservatives against rust and against deterioration of wood; colorants; mordants; raw natural resins; metals in foil and powder form for use in painting, decorating, printing and art.

    3. Bleaching preparations and other substances for laundry use; cleaning, polishing, scouring and abrasive preparations; non-medicated soaps; perfumery, essential oils, non-medicated cosmetics, non-medicated hair lotions; non-medicated dentifrices.

    4. Industrial oils and greases; lubricants; dust absorbing, wetting and binding compositions; fuels (including motor spirit) and illuminants; candles and wicks for lighting.

    5. Pharmaceuticals, medical and veterinary preparations; sanitary preparations for medical purposes; dietetic food and substances adapted for medical use or veterinary use, food for babies; dietary supplements for humans and animals; plasters, materials for dressings; material for stopping teeth, dental wax; disinfectants; preparations for destroying vermin; fungicides, herbicides.

    6. Common metals and their alloys, ores; metal materials for building and construction; transportable buildings of metal; non-electric cables and wires of common metal; small items of metal hardware; metal containers for storage or transport; safes.

    7. Machines and machine tools; motors and engines (except for land vehicles); machine coupling and transmission components (except for land vehicles); agricultural implements other than hand-operated; incubators for eggs; automatic vending machines.

    8. Hand tools and implements (hand-operated); cutlery; side arms; razors.

    9. Scientific, nautical, surveying, photographic, cinematographic, optical, weighing, measuring, signalling, checking (supervision), life-saving and teaching apparatus and instruments; apparatus and instruments for conducting, switching, transforming, accumulating, regulating or controlling electricity; apparatus for recording, transmission or reproduction of sound or images; magnetic data carriers, recording discs; compact discs, DVDs and other digital recording media; mechanisms for coin-operated apparatus; cash registers, calculating machines, data processing equipment, computers; computer software; fire-extinguishing apparatus.

    10. Surgical, medical, dental and veterinary apparatus and instruments; artificial limbs, eyes and teeth; orthopaedic articles; suture materials; therapeutic and assistive devices adapted for the disabled; massage apparatus; apparatus, devices and articles for nursing infants; sexual activity apparatus, devices and articles.

    11. Apparatus for lighting, heating, steam generating, cooking, refrigerating, drying, ventilating, water supply and sanitary purposes.

    12. Vehicles; apparatus for locomotion by land, air or water.

    13. Firearms; ammunition and projectiles; explosives; fireworks.

    14. Precious metals and their alloys; jewellery, precious and semi-precious stones; horological and chronometric instruments.

    15. Musical instruments.

    16. Paper and cardboard; printed matter; bookbinding material; photographs; stationery and office requisites, except furniture; adhesives for stationery or household purposes; artists' and drawing materials; paintbrushes; instructional and teaching materials; plastic sheets, films and bags for wrapping and packaging; printers' type, printing blocks.

    17. Unprocessed and semi-processed rubber, gutta-percha, gum, asbestos, mica and substitutes for all these materials; plastics and resins in extruded form for use in manufacture; packing, stopping and insulating materials; flexible pipes, tubes and hoses, not of metal.

    18. Leather and imitations of leather; animal skins and hides; luggage and carrying bags; umbrellas and parasols; walking sticks; whips, harness and saddlery; collars, leashes and clothing for animals.

    19. Building materials (non-metallic); non-metallic rigid pipes for building; asphalt, pitch and bitumen; non-metallic transportable buildings; monuments, not of metal.

    20. Furniture, mirrors, picture frames; containers, not of metal, for storage or transport; unworked or semi-worked bone, horn, whalebone or mother-of-pearl; shells; meerschaum; yellow amber.

    21. Household or kitchen utensils and containers; combs and sponges; brushes, except paintbrushes; brush-making materials; articles for cleaning purposes; unworked or semi-worked glass, except building glass; glassware, porcelain and earthenware.

    22. Ropes and string; nets; tents and tarpaulins; awnings of textile or synthetic materials; sails; sacks for the transport and storage of materials in bulk; padding, cushioning and stuffing materials, except of paper, cardboard, rubber or plastics; raw fibrous textile materials and substitutes therefor.

    23. Yarns and threads, for textile use.

    24. Textiles and substitutes for textiles; household linen; curtains of textile or plastic.

    25. Clothing, footwear, headgear.

    26. Lace and embroidery, ribbons and braid; buttons, hooks and eyes, pins and needles; artificial flowers; hair decorations; false hair.

    27. Carpets, rugs, mats and matting, linoleum and other materials for covering existing floors; wall hangings (non-textile).

    28. Games, toys and playthings; video game apparatus; gymnastic and sporting articles; decorations for Christmas trees.

    29. Meat, fish, poultry and game; meat extracts; preserved, frozen, dried and cooked fruits and vegetables; jellies, jams, compotes; eggs; milk and milk products; edible oils and fats.

    30. Coffee, tea, cocoa and artificial coffee; rice; tapioca and sago; flour and preparations made from cereals; bread, pastries and confectionery; edible ices; sugar, honey, treacle; yeast, baking-powder; salt; mustard; vinegar, sauces (condiments); spices; ice.

    31. Raw and unprocessed agricultural, aquacultural, horticultural and forestry products; raw and unprocessed grains and seeds; fresh fruits and vegetables, fresh herbs; natural plants and flowers; bulbs, seedlings and seeds for planting; live animals; foodstuffs and beverages for animals; malt.

    32. Beers; mineral and aerated waters and other non-alcoholic beverages; fruit beverages and fruit juices; syrups and other preparations for making beverages.

    33. Alcoholic beverages (except beers).

    34. Tobacco; smokers' articles; matches.

    Services

    35. Advertising; business management; business administration; office functions.

    36. Insurance; financial affairs; monetary affairs; real estate affairs.

    37. Building construction; repair; installation services.

    38. Telecommunications.

    39. Transport; packaging and storage of goods; travel arrangement.

    40. Treatment of materials.

    41. Education; providing of training; entertainment; sporting and cultural activities.

    42. Scientific and technological services and research and design relating thereto; industrial analysis and research services; design and development of computer hardware and software.

    43. Services for providing food and drink; temporary accommodation.

    44. Medical services; veterinary services; hygienic and beauty care for human beings or animals; agriculture, horticulture and forestry services.

    45. Legal services; security services for the physical protection of tangible property and individuals; personal and social services rendered by others to meet the needs of individuals.

    Dated: October 31, 2016. Michelle K. Lee, Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office.
    [FR Doc. 2016-26682 Filed 11-3-16; 8:45 am] BILLING CODE 3510-16-P
    DEPARTMENT OF HOMELAND SECURITY
    Federal Emergency Management Agency
    44 CFR Part 64
    [Docket ID FEMA-2016-0002; Internal Agency Docket No. FEMA-8453]
    Suspension of Community Eligibility

    AGENCY:

    Federal Emergency Management Agency, DHS.

    ACTION:

    Final rule.

    SUMMARY:

    This rule identifies communities where the sale of flood insurance has been authorized under the National Flood Insurance Program (NFIP) that are scheduled for suspension on the effective dates listed within this rule because of noncompliance with the floodplain management requirements of the program. If the Federal Emergency Management Agency (FEMA) receives documentation that the community has adopted the required floodplain management measures prior to the effective suspension date given in this rule, the suspension will not occur and a notice of this will be provided by publication in the Federal Register on a subsequent date. Also, information identifying the current participation status of a community can be obtained from FEMA's Community Status Book (CSB). The CSB is available at http://www.fema.gov/fema/csb.shtm.

    DATES:

    The effective date of each community's scheduled suspension is the third date (“Susp.”) listed in the third column of the following tables.
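    Readers who process the suspension tables programmatically can extract that third date with a short script like the sketch below. It is illustrative only and not part of the rule; the function name is invented, and it assumes third-column entries formatted like those shown in the tables at the end of this rule.

```python
import re
from datetime import datetime

def suspension_date(effective_date_column: str) -> datetime:
    """Extract the scheduled suspension date (the date flagged 'Susp') from a
    third-column entry such as:
    'January 14, 1972, Emerg; October 14, 1977, Reg; November 4, 2016, Susp'."""
    match = re.search(r"([A-Z][a-z]+ \d{1,2}, \d{4}), Susp", effective_date_column)
    if match is None:
        raise ValueError("no suspension date found in entry")
    return datetime.strptime(match.group(1), "%B %d, %Y")

# Example using the first entry in the tables below.
print(suspension_date(
    "January 14, 1972, Emerg; October 14, 1977, Reg; November 4, 2016, Susp"
))  # 2016-11-04 00:00:00
```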

    FOR FURTHER INFORMATION CONTACT:

    If you want to determine whether a particular community was suspended on the suspension date or for further information, contact Patricia Suber, Federal Insurance and Mitigation Administration, Federal Emergency Management Agency, 400 C Street SW., Washington, DC 20472, (202) 646-4149.

    SUPPLEMENTARY INFORMATION:

    The NFIP enables property owners to purchase Federal flood insurance that is not otherwise generally available from private insurers. In return, communities agree to adopt and administer local floodplain management measures aimed at protecting lives and new construction from future flooding. Section 1315 of the National Flood Insurance Act of 1968, as amended, 42 U.S.C. 4022, prohibits the sale of NFIP flood insurance unless an appropriate public body adopts adequate floodplain management measures with effective enforcement measures. The communities listed in this document no longer meet that statutory requirement for compliance with program regulations, 44 CFR part 59. Accordingly, the communities will be suspended on the effective date in the third column. As of that date, flood insurance will no longer be available in the community. We recognize that some of these communities may adopt and submit the required documentation of legally enforceable floodplain management measures after this rule is published but prior to the actual suspension date. These communities will not be suspended and will continue to be eligible for the sale of NFIP flood insurance. A notice withdrawing the suspension of such communities will be published in the Federal Register.

    In addition, FEMA publishes a Flood Insurance Rate Map (FIRM) that identifies the Special Flood Hazard Areas (SFHAs) in these communities. The date of the FIRM, if one has been published, is indicated in the fourth column of the table. No direct Federal financial assistance (except assistance pursuant to the Robert T. Stafford Disaster Relief and Emergency Assistance Act not in connection with a flood) may be provided for construction or acquisition of buildings in identified SFHAs for communities not participating in the NFIP and identified for more than a year on FEMA's initial FIRM for the community as having flood-prone areas (section 202(a) of the Flood Disaster Protection Act of 1973, 42 U.S.C. 4106(a), as amended). This prohibition against certain types of Federal assistance becomes effective for the communities listed on the date shown in the last column. The Administrator finds that notice and public comment procedures under 5 U.S.C. 553(b), are impracticable and unnecessary because communities listed in this final rule have been adequately notified.

    Each community receives 6-month, 90-day, and 30-day notification letters addressed to the Chief Executive Officer stating that the community will be suspended unless the required floodplain management measures are met prior to the effective suspension date. Since these notifications were made, this final rule may take effect within less than 30 days.

    National Environmental Policy Act. FEMA has determined that the community suspension(s) included in this rule is a non-discretionary action and therefore the National Environmental Policy Act of 1969 (42 U.S.C. 4321 et seq.) does not apply.

    Regulatory Flexibility Act. The Administrator has determined that this rule is exempt from the requirements of the Regulatory Flexibility Act because the National Flood Insurance Act of 1968, as amended, Section 1315, 42 U.S.C. 4022, prohibits flood insurance coverage unless an appropriate public body adopts adequate floodplain management measures with effective enforcement measures. The communities listed no longer comply with the statutory requirements, and after the effective date, flood insurance will no longer be available in the communities unless remedial action takes place.

    Regulatory Classification. This final rule is not a significant regulatory action under the criteria of section 3(f) of Executive Order 12866 of September 30, 1993, Regulatory Planning and Review, 58 FR 51735.

    Executive Order 13132, Federalism. This rule involves no policies that have federalism implications under Executive Order 13132.

    Executive Order 12988, Civil Justice Reform. This rule meets the applicable standards of Executive Order 12988.

    Paperwork Reduction Act. This rule does not involve any collection of information for purposes of the Paperwork Reduction Act, 44 U.S.C. 3501 et seq.

    List of Subjects in 44 CFR Part 64

    Flood insurance, Floodplains.

    Accordingly, 44 CFR part 64 is amended as follows:

    PART 64—[AMENDED]

    1. The authority citation for Part 64 continues to read as follows:

    Authority:

    42 U.S.C. 4001 et seq.; Reorganization Plan No. 3 of 1978, 3 CFR, 1978 Comp., p. 329; E.O. 12127, 44 FR 19367, 3 CFR, 1979 Comp., p. 376.

    § 64.6 [Amended]
    2. The tables published under the authority of § 64.6 are amended as follows:

    State and location | Community No. | Effective date authorization/cancellation of sale of flood insurance in community | Current effective map date | Date certain federal assistance no longer available in SFHAs

    Region I

    Massachusetts:
    Marshfield, Town of, Plymouth County | 250273 | January 14, 1972, Emerg; October 14, 1977, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Plymouth, Town of, Plymouth County | 250278 | February 5, 1974, Emerg; July 17, 1986, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016

    Region II

    New Jersey:
    Franklin, Township of, Somerset County | 340434 | April 6, 1973, Emerg; May 15, 1980, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Hillsborough, Township of, Somerset County | 340436 | June 18, 1974, Emerg; March 2, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Manville, Borough of, Somerset County | 340437 | December 15, 1972, Emerg; February 15, 1978, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Millstone, Borough of, Somerset County | 340438 | October 29, 1973, Emerg; April 3, 1978, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Montgomery, Township of, Somerset County | 340439 | August 20, 1974, Emerg; April 1, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Rocky Hill, Borough of, Somerset County | 340443 | July 15, 1975, Emerg; December 16, 1980, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016

    New York:
    Baldwinsville, Village of, Onondaga County | 360569 | August 12, 1974, Emerg; August 16, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Camillus, Town of, Onondaga County | 360570 | July 23, 1975, Emerg; June 15, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Camillus, Village of, Onondaga County | 360571 | July 17, 1974, Emerg; August 3, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Cicero, Town of, Onondaga County | 360572 | May 23, 1974, Emerg; April 4, 1983, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Clay, Town of, Onondaga County | 360573 | May 15, 1973, Emerg; April 1, 1980, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    DeWitt, Town of, Onondaga County | 360973 | November 8, 1973, Emerg; March 1, 1979, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    East Syracuse, Village of, Onondaga County | 360574 | July 23, 1975, Emerg; August 3, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Elbridge, Town of, Onondaga County | 360575 | April 8, 1974, Emerg; August 16, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Elbridge, Village of, Onondaga County | 360576 | February 18, 1975, Emerg; August 16, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Fabius, Town of, Onondaga County | 360577 | November 12, 1974, Emerg; April 30, 1986, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Fayetteville, Village of, Onondaga County | 360578 | August 22, 1974, Emerg; August 2, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Geddes, Town of, Onondaga County | 360579 | May 19, 1975, Emerg; February 17, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Jordan, Village of, Onondaga County | 360580 | December 3, 1974, Emerg; August 16, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    LaFayette, Town of, Onondaga County | 360581 | September 13, 1974, Emerg; April 3, 1985, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Liverpool, Village of, Onondaga County | 360582 | December 26, 1974, Emerg; February 4, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Lysander, Town of, Onondaga County | 360583 | October 15, 1974, Emerg; January 6, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Manlius, Town of, Onondaga County | 360584 | November 8, 1973, Emerg; December 15, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Manlius, Village of, Onondaga County | 360977 | January 23, 1974, Emerg; September 29, 1978, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Marcellus, Town of, Onondaga County | 360585 | March 19, 1975, Emerg; August 16, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Marcellus, Village of, Onondaga County | 360586 | July 25, 1974, Emerg; June 1, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Minoa, Village of, Onondaga County | 361017 | April 18, 1975, Emerg; September 2, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    North Syracuse, Village of, Onondaga County | 360587 | September 8, 1975, Emerg; November 20, 1985, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Onondaga, Town of, Onondaga County | 360588 | July 25, 1974, Emerg; January 18, 1984, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Otisco, Town of, Onondaga County | 360589 | June 1, 1976, Emerg; June 3, 1986, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Pompey, Town of, Onondaga County | 360590 | April 20, 1973, Emerg; January 3, 1979, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Salina, Town of, Onondaga County | 360591 | July 30, 1974, Emerg; August 16, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Skaneateles, Town of, Onondaga County | 360592 | September 19, 1974, Emerg; June 1, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Skaneateles, Village of, Onondaga County | 360593 | August 7, 1974, Emerg; February 17, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Solvay, Village of, Onondaga County | 361564 | January 16, 1975, Emerg; January 31, 1983, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Spafford, Town of, Onondaga County | 360594 | August 19, 1974, Emerg; April 30, 1986, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Syracuse, City of, Onondaga County | 360595 | August 2, 1974, Emerg; May 3, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Tully, Town of, Onondaga County | 361296 | November 3, 1975, Emerg; April 30, 1986, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Tully, Village of, Onondaga County | 361552 | June 27, 1975, Emerg; January 19, 1983, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Van Buren, Town of, Onondaga County | 360596 | March 16, 1973, Emerg; July 17, 1978, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016

    Region V

    Minnesota:
    Bloomington, City of, Hennepin County | 275230 | March 12, 1971, Emerg; September 8, 1972, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Brooklyn Center, City of, Hennepin County | 270151 | July 29, 1974, Emerg; February 17, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Brooklyn Park, City of, Hennepin County | 270152 | February 5, 1974, Emerg; May 17, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Champlin, City of, Hennepin County | 270153 | March 30, 1973, Emerg; July 18, 1977, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Corcoran, City of, Hennepin County | 270155 | September 8, 1975, Emerg; January 16, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Crystal, City of, Hennepin County | 270156 | May 13, 1974, Emerg; June 1, 1978, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Dayton, City of, Hennepin and Wright Counties | 270157 | September 25, 1973, Emerg; February 1, 1978, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Deephaven, City of, Hennepin County | 270158 | September 4, 1974, Emerg; December 26, 1978, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Eden Prairie, City of, Hennepin County | 270159 | May 16, 1975, Emerg; September 27, 1985, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Edina, City of, Hennepin County | 270160 | July 27, 1973, Emerg; May 1, 1980, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Excelsior, City of, Hennepin County | 270161 | May 20, 1974, Emerg; March 20, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Golden Valley, City of, Hennepin County | 270162 | April 23, 1974, Emerg; February 4, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Greenfield, City of, Hennepin County | 270673 | December 26, 1974, Emerg; April 15, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Greenwood, City of, Hennepin County | 270164 | July 25, 1975, Emerg; December 26, 1978, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Hanover, City of, Hennepin and Wright Counties | 270540 | October 25, 1974, Emerg; May 5, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Hopkins, City of, Hennepin County | 270166 | May 2, 1974, Emerg; May 5, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Independence, City of, Hennepin County | 270167 | January 28, 1975, Emerg; January 6, 1983, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Long Lake, City of, Hennepin County | 270168 | May 2, 1975, Emerg; February 20, 1979, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Maple Grove, City of, Hennepin County | 270169 | July 1, 1974, Emerg; April 17, 1978, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Maple Plain, City of, Hennepin County | 270170 | October 24, 1975, Emerg; June 22, 1984, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Medicine Lake, City of, Hennepin County | 270690 | December 21, 1978, Emerg; April 15, 1982, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Medina, City of, Hennepin County | 270171 | July 18, 1975, Emerg; September 3, 1980, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Minneapolis, City of, Hennepin County | 270172 | March 23, 1973, Emerg; February 18, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Minnetonka, City of, Hennepin County | 270173 | April 9, 1975, Emerg; May 19, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Mound, City of, Hennepin County | 270176 | April 16, 1974, Emerg; September 29, 1978, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    New Hope, City of, Hennepin County | 270177 | July 2, 1975, Emerg; January 2, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Plymouth, City of, Hennepin County | 270179 | April 15, 1974, Emerg; May 15, 1978, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Richfield, City of, Hennepin County | 270180 | April 22, 1975, Emerg; August 24, 1981, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Robbinsdale, City of, Hennepin County | 270181 | May 9, 1974, Emerg; August 1, 1977, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Rockford, City of, Hennepin and Wright Counties | 270182 | February 5, 1975, Emerg; November 1, 1979, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Rogers, City of, Hennepin County | 270775 | N/A, Emerg; July 12, 2012, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Saint Bonifacius, City of, Hennepin County | 270183 | April 22, 1976, Emerg; December 26, 1978, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Saint Louis Park, City of, Hennepin County | 270184 | December 22, 1972, Emerg; June 1, 1977, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Shorewood, City of, Hennepin County | 270185 | April 8, 1975, Emerg; December 4, 1979, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Spring Park, City of, Hennepin County | 270186 | July 16, 1975, Emerg; May 1, 1979, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Tonka Bay, City of, Hennepin County | 270187 | January 17, 1975, Emerg; May 1, 1979, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Wayzata, City of, Hennepin County | 270188 | November 25, 1974, Emerg; November 1, 1979, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
    Woodland, City of, Hennepin County | 270189 | June 11, 1975, Emerg; August 1, 1979, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016

    Region VI

    New Mexico:
    Albuquerque, City of, Bernalillo County | 350002 | September 9, 1974, Emerg; October 14, 1983, Reg; November 4, 2016, Susp | November 4, 2016 | November 4, 2016
Bernalillo County Unincorporated Areas 350001 August 26, 1974, Emerg; September 15, 1983, Reg; November 4, 2016, Susp ......*do   Do. Texas: Gregory, City of, San Patricio County 480555 May 16, 1975, Emerg; April 15, 1981, Reg; November 4, 2016, Susp ......*do   Do. Mathis, City of, San Patricio County 480557 June 11, 1975, Emerg; October 23, 1979, Reg; November 4, 2016, Susp ......*do   Do. San Patricio, City of, San Patricio County 481556 March 15, 2012, Emerg; April 1, 2012, Reg; November 4, 2016, Susp ......*do   Do. Taft, City of, San Patricio County 481506 July 11, 1995, Emerg; N/A, Reg; November 4, 2016, Susp ......*do   Do. Region VII Iowa: Muscatine, City of, Muscatine County 190213 January 15, 1974, Emerg; January 5, 1978, Reg; November 4, 2016, Susp ......*do   Do. Region IX California: Humboldt County Unincorporated Areas 060060 September 11, 1974, Emerg; July 19, 1982, Reg; November 4, 2016, Susp ......*do   Do. Region X Oregon: Beaverton, City of, Washington County 410240 October 30, 1974, Emerg; September 28, 1984, Reg; November 4, 2016, Susp ......*do   Do. Cornelius, City of, Washington County 410261 April 19, 1978, Emerg; January 6, 1982, Reg; November 4, 2016, Susp ......*do   Do. Durham, City of, Washington County 410263 November 7, 1979, Emerg; January 6, 1982, Reg; November 4, 2016, Susp ......*do   Do. Forest Grove, City of, Washington County 410241 June 4, 1975, Emerg; March 15, 1982, Reg; November 4, 2016, Susp ......*do   Do. Gaston, City of, Washington County 410242 November 24, 1981, Emerg; July 5, 1982, Reg; November 4, 2016, Susp ......*do   Do. Hillsboro, City of, Washington County 410243 January 20, 1975, Emerg; May 17, 1982, Reg; November 4, 2016, Susp ......*do   Do. King City, City of, Washington County 410269 November 14, 1974, Emerg; February 11, 1976, Reg; November 4, 2016, Susp ......*do   Do. North Plains, City of, Washington County 410270 March 25, 1977, Emerg; April 1, 1982, Reg; November 4, 2016, Susp ......*do   Do. Sherwood, City of, Washington County 410273 February 4, 1981, Emerg; January 6, 1982, Reg; November 4, 2016, Susp ......*do   Do. Tualatin, City of, Clackamas and Washington Counties 410277 July 3, 1974, Emerg; February 17, 1982, Reg; November 4, 2016, Susp ......*do   Do. Washington County Unincorporated Areas 410238 April 10, 1973, Emerg; September 30, 1982, Reg; November 4, 2016, Susp ......*do   Do. *do = Ditto. Code for reading third column: Emerg.—Emergency; Reg.—Regular; Susp.—Suspension. Dated: October 31, 2016.
    Eric Letvin, Deputy Assistant Administrator for Mitigation, Federal Insurance and Mitigation Administration, Department of Homeland Security, Federal Emergency Management Agency.
    [FR Doc. 2016-26679 Filed 11-3-16; 8:45 am] BILLING CODE 9110-12-P
    DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration 50 CFR Part 635 [Docket No. 150121066-5717-02] RIN 0648-XF011 Atlantic Highly Migratory Species; Atlantic Bluefin Tuna Fisheries; 2016 General Category Fishery AGENCY:

    National Marine Fisheries Service (NMFS), National Oceanic and Atmospheric Administration (NOAA), Commerce.

    ACTION:

    Temporary rule; closure.

    SUMMARY:

    NMFS closes the coastwide General category fishery for large medium and giant Atlantic bluefin tuna (BFT) for 2016. This action is being taken to prevent any further overharvest of the available adjusted General category quota of 676.7 metric tons (mt).

    DATES:

    Effective 11:30 p.m., local time, November 4, 2016, through December 31, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Sarah McLaughlin or Brad McHale, 978-281-9260.

    SUPPLEMENTARY INFORMATION:

    Regulations implemented under the authority of the Atlantic Tunas Convention Act (ATCA; 16 U.S.C. 971 et seq.) and the Magnuson-Stevens Fishery Conservation and Management Act (Magnuson-Stevens Act; 16 U.S.C. 1801 et seq.) governing the harvest of BFT by persons and vessels subject to U.S. jurisdiction are found at 50 CFR part 635. Section 635.27 subdivides the U.S. BFT quota recommended by the International Commission for the Conservation of Atlantic Tunas (ICCAT) among the various domestic fishing categories, per the allocations established in the 2006 Consolidated Highly Migratory Species Fishery Management Plan (2006 Consolidated HMS FMP) (71 FR 58058, October 2, 2006), as amended by Amendment 7 to the 2006 Consolidated HMS FMP (Amendment 7) (79 FR 71510, December 2, 2014).

    NMFS is required, under § 635.28(a)(1), to file a closure notice with the Office of the Federal Register for publication when a BFT quota is reached or is projected to be reached. On and after the effective date and time of such notification, for the remainder of the fishing year or for a specified period as indicated in the notification, retaining, possessing, or landing BFT under that quota category is prohibited until the opening of the subsequent quota period or until such date as specified in the notice.

    The base quota for the General category is 466.7 mt. See § 635.27(a). To date this year, NMFS has adjusted the 2016 General category base quota twice: first with a transfer of 125 mt from the Reserve category effective October 6 (81 FR 70369, October 12, 2016), and then with a transfer of 85 mt (18 mt from the Harpoon category quota and 67 mt from the Reserve category) effective October 14 (81 FR 71639, October 18, 2016), resulting in an adjusted quota of 676.7 mt.

    Based on the best available landings information for the General category BFT fishery, NMFS has determined that the adjusted General category quota of 676.7 mt has been reached (i.e., as of October 31, reported landings total approximately 677.4 mt). Therefore, retaining, possessing, or landing large medium or giant BFT by persons aboard vessels permitted in the Atlantic tunas General and HMS Charter/Headboat categories (while fishing commercially) must cease at 11:30 p.m. local time on November 4, 2016. The General category will reopen automatically on January 1, 2017, for the January through March 2017 subperiod. This action applies to Atlantic tunas General category (commercial) permitted vessels and Highly Migratory Species (HMS) Charter/Headboat category permitted vessels when fishing commercially for BFT, and is taken consistent with the regulations at § 635.28(a)(1). The intent of this closure is to prevent any further overharvest of the available 2016 General category quota.
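    For illustration only, the quota arithmetic and the closure trigger described above can be restated as a short sketch; the variable names are hypothetical and the sketch is not part of the regulatory accounting.

        # Illustrative sketch of the 2016 General category quota arithmetic and the
        # closure trigger described in this temporary rule (amounts in metric tons).
        base_quota_mt = 466.7              # General category base quota, see § 635.27(a)
        transfer_oct_6_mt = 125.0          # from the Reserve category (81 FR 70369)
        transfer_oct_14_mt = 85.0          # 18 mt Harpoon + 67 mt Reserve (81 FR 71639)

        adjusted_quota_mt = base_quota_mt + transfer_oct_6_mt + transfer_oct_14_mt
        assert round(adjusted_quota_mt, 1) == 676.7

        reported_landings_mt = 677.4       # reported landings as of October 31, 2016

        # A closure notice is filed when the quota is reached or projected to be reached.
        if reported_landings_mt >= adjusted_quota_mt:
            print("General category quota reached; close under § 635.28(a)(1)")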

    Fishermen may catch and release (or tag and release) BFT of all sizes, subject to the requirements of the catch-and-release and tag-and-release programs at § 635.26. Anglers are also reminded that all BFT that are released must be handled in a manner that will maximize survival, and without removing the fish from the water, consistent with requirements at § 635.21(a)(1). For additional information on safe handling, see the “Careful Catch and Release” brochure available at www.nmfs.noaa.gov/sfa/hms/.

    Classification

    The Assistant Administrator for NMFS (AA) finds that it is impracticable and contrary to the public interest to provide prior notice of, and an opportunity for public comment on, this action for the following reasons:

    The regulations implementing the 2006 Consolidated HMS FMP and amendments provide for inseason retention limit adjustments and fishery closures to respond to the unpredictable nature of BFT availability on the fishing grounds, the migratory nature of this species, and the regional variations in the BFT fishery. These fisheries are currently underway and delaying this action would be contrary to the public interest as it could result in excessive BFT landings that may result in future potential quota reductions for the General category. NMFS must close the General category fishery for 2016 to prevent the available quota from being exceeded any further. Therefore, the AA finds good cause under 5 U.S.C. 553(b)(B) to waive prior notice and the opportunity for public comment. For all of the above reasons, there is good cause under 5 U.S.C. 553(d) to waive the 30-day delay in effectiveness.

    This action is being taken under § 635.28(a)(1), and is exempt from review under Executive Order 12866.

    Authority:

    16 U.S.C. 971 et seq. and 1801 et seq.

    Dated: November 1, 2016. Emily H. Menashes, Acting Director, Office of Sustainable Fisheries, National Marine Fisheries Service.
    [FR Doc. 2016-26718 Filed 11-1-16; 4:15 pm] BILLING CODE 3510-22-P
    DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration 50 CFR Part 679 [Docket No. 150916863-6211-02] RIN 0648-XF010 Fisheries of the Exclusive Economic Zone Off Alaska; Exchange of Flatfish in the Bering Sea and Aleutian Islands Management Area AGENCY:

    National Marine Fisheries Service (NMFS), National Oceanic and Atmospheric Administration (NOAA), Commerce.

    ACTION:

    Temporary rule; reallocation.

    SUMMARY:

    NMFS is exchanging unused flathead sole and rock sole Amendment 80 allocations of the total allowable catch for yellowfin sole Amendment 80 acceptable biological catch (ABC) reserves. This action is necessary to allow the 2016 total allowable catch of flathead sole, rock sole, and yellowfin sole in the Bering Sea and Aleutian Islands management area to be harvested.

    DATES:

    Effective November 4, 2016, through December 31, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Steve Whitney, 907-586-7228.

    SUPPLEMENTARY INFORMATION:

    NMFS manages the groundfish fishery in the Bering Sea and Aleutian Islands management area (BSAI) according to the Fishery Management Plan for Groundfish of the Bering Sea and Aleutian Islands Management Area (FMP) prepared by the North Pacific Fishery Management Council under authority of the Magnuson-Stevens Fishery Conservation and Management Act. Regulations governing fishing by U.S. vessels in accordance with the FMP appear at subpart H of 50 CFR part 600 and 50 CFR part 679.

    The 2016 flathead sole, rock sole, and yellowfin sole Amendment 80 allocations of the total allowable catch (TAC) specified in the BSAI are 9,853 metric tons (mt), 43,965 mt, and 115,038 mt as established by the final 2016 and 2017 harvest specifications for groundfish in the BSAI (81 FR 14773, March 18, 2016) and as revised (81 FR 75740, November 1, 2016). The 2016 flathead sole, rock sole, and yellowfin sole Amendment 80 ABC reserves are 44,308 mt, 93,897 mt, and 55,531 mt as established by the final 2016 and 2017 harvest specifications for groundfish in the BSAI (81 FR 14773, March 18, 2016).
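    For illustration only, the paired adjustments that an exchange under § 679.91(i) makes to these TAC allocations and ABC reserves, as requested by the Alaska Seafood Cooperative and described in the next paragraph, can be sketched as follows; the variable and function names are hypothetical, and the amounts (in metric tons) are those stated in this rule.

        # Illustrative sketch of the § 679.91(i) flatfish exchange: unused TAC of the
        # given species moves into its ABC reserve, and an equal total amount of the
        # received species moves from its ABC reserve into its TAC.
        amd80_tac = {"flathead": 9_853, "rock": 43_965, "yellowfin": 115_038}
        amd80_abc_reserve = {"flathead": 44_308, "rock": 93_897, "yellowfin": 55_531}

        def exchange(tac, reserve, give, receive):
            total = 0
            for species, amount in give.items():
                tac[species] -= amount          # decrease TAC of the exchanged species
                reserve[species] += amount      # increase its ABC reserve by the same amount
                total += amount
            tac[receive] += total               # increase TAC of the received species
            reserve[receive] -= total           # draw down its ABC reserve

        # Exchange of 850 mt flathead sole and 1,670 mt rock sole TAC for 2,520 mt of
        # yellowfin sole ABC reserves.
        exchange(amd80_tac, amd80_abc_reserve, {"flathead": 850, "rock": 1_670}, "yellowfin")

        assert amd80_tac == {"flathead": 9_003, "rock": 42_295, "yellowfin": 117_558}
        assert amd80_abc_reserve == {"flathead": 45_158, "rock": 95_567, "yellowfin": 53_011}

    The asserted totals correspond to the revised Amendment 80 amounts shown in Tables 11 and 13 below.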

    The Alaska Seafood cooperative has requested that NMFS exchange 850 mt of flathead sole and 1,670 mt of rock sole Amendment 80 allocations of the TAC for 2,520 mt of yellowfin sole Amendment 80 ABC reserves under § 679.91(i). Therefore, in accordance with § 679.91(i), NMFS exchanges 850 mt of flathead sole and 1,670 mt of rock sole Amendment 80 allocations of the TAC for 2,520 mt of yellowfin sole Amendment 80 ABC reserves in the BSAI. This action also decreases and increases the TACs and Amendment 80 ABC reserves by the corresponding amounts. Tables 11 and 13 of the final 2016 and 2017 harvest specifications for groundfish in the BSAI (81 FR 14773, March 18, 2016) and as revised (81 FR 75740, November 1, 2016) are further revised as follows:

    Table 11—Final 2016 Community Development Quota (CDQ) Reserves, Incidental Catch Amounts (ICAs), and Amendment 80 Allocations of the Aleutian Islands Pacific Ocean Perch, and BSAI Flathead Sole, Rock Sole, and Yellowfin Sole TACs
    [Amounts are in metric tons]
    Sector | Pacific ocean perch, Eastern Aleutian District | Pacific ocean perch, Central Aleutian District | Pacific ocean perch, Western Aleutian District | Flathead sole BSAI | Rock sole BSAI | Yellowfin sole BSAI
    TAC | 7,900 | 7,000 | 9,000 | 15,163 | 52,659 | 154,278
    CDQ | 845 | 749 | 963 | 1,160 | 4,364 | 18,241
    ICA | 200 | 75 | 10 | 5,000 | 6,000 | 3,500
    BSAI trawl limited access | 685 | 618 | 161 | 0 | 0 | 14,979
    Amendment 80 | 6,169 | 5,558 | 7,866 | 9,003 | 42,295 | 117,558
    Alaska Groundfish Cooperative | 3,271 | 2,947 | 4,171 | 1,411 | 11,129 | 43,748
    Alaska Seafood Cooperative | 2,898 | 2,611 | 3,695 | 7,592 | 31,166 | 73,810
    Note: Sector apportionments may not total precisely due to rounding.
    Table 13—Final 2016 and 2017 ABC Surplus, Community Development Quota (CDQ) ABC Reserves, and Amendment 80 ABC Reserves in the BSAI for Flathead Sole, Rock Sole, and Yellowfin Sole
    [Amounts are in metric tons]
    Sector | 2016 Flathead sole | 2016 Rock sole | 2016 Yellowfin sole | 2017 Flathead sole | 2017 Rock sole | 2017 Yellowfin sole
    ABC | 66,250 | 161,100 | 211,700 | 64,580 | 145,000 | 203,500
    TAC | 15,163 | 52,659 | 154,278 | 21,000 | 57,100 | 144,000
    ABC surplus | 51,087 | 108,441 | 57,422 | 43,580 | 87,900 | 59,500
    ABC reserve | 51,087 | 108,441 | 57,422 | 43,580 | 87,900 | 59,500
    CDQ ABC reserve | 5,929 | 12,874 | 4,411 | 4,663 | 9,405 | 6,367
    Amendment 80 ABC reserve | 45,158 | 95,567 | 53,011 | 38,917 | 78,495 | 53,134
    Alaska Groundfish Cooperative for 2016 1 | 4,145 | 22,974 | 24,019 | n/a | n/a | n/a
    Alaska Seafood Cooperative for 2016 1 | 41,013 | 72,593 | 28,992 | n/a | n/a | n/a
    1 The 2017 allocations for Amendment 80 species between Amendment 80 cooperatives and the Amendment 80 limited access sector will not be known until eligible participants apply for participation in the program by November 1, 2016.
    Classification

    This action responds to the best available information recently obtained from the fishery. The Assistant Administrator for Fisheries, NOAA (AA), finds good cause to waive the requirement to provide prior notice and opportunity for public comment pursuant to the authority set forth at 5 U.S.C. 553(b)(B) as such requirement is impracticable and contrary to the public interest. This requirement is impracticable and contrary to the public interest as it would prevent NMFS from responding to the most recent fisheries data in a timely fashion and would delay the flatfish exchange by the Alaska Seafood Cooperative in the BSAI. Since these fisheries are currently open, it is important to immediately inform the industry as to the revised allocations. Immediate notification is necessary to allow for the orderly conduct and efficient operation of this fishery, to allow the industry to plan for the fishing season, and to avoid potential disruption to the fishing fleet as well as processors. NMFS was unable to publish a notice providing time for public comment because the most recent, relevant data only became available as of October 25, 2016.

    The AA also finds good cause to waive the 30-day delay in the effective date of this action under 5 U.S.C. 553(d)(3). This finding is based upon the reasons provided above for waiver of prior notice and opportunity for public comment.

    This action is required by § 679.20 and is exempt from review under Executive Order 12866.

    Authority:

    16 U.S.C. 1801 et seq.

    Dated: November 1, 2016. Emily H. Menashes, Acting Director, Office of Sustainable Fisheries, National Marine Fisheries Service.
    [FR Doc. 2016-26723 Filed 11-3-16; 8:45 am] BILLING CODE 3510-22-P
    81 214 Friday, November 4, 2016 Proposed Rules DEPARTMENT OF ENERGY 10 CFR Part 430 [Docket No. EERE-2016-BT-TP-0037] RIN 1904-AD74 Energy Conservation Program: Test Procedures for Integrated Light-Emitting Diode Lamps AGENCY:

    Office of Energy Efficiency and Renewable Energy, Department of Energy.

    ACTION:

    Notice of proposed rulemaking.

    SUMMARY:

    On July 1, 2016, the U.S. Department of Energy (DOE) published a final rule adopting a test procedure for integrated light-emitting diode (LED) lamps (hereafter referred to as “LED lamps”) to support the implementation of labeling provisions by the Federal Trade Commission, as well as the ongoing general service lamps rulemaking, which includes LED lamps. This notice of proposed rulemaking (NOPR) proposes to amend the LED lamps test procedure by allowing for time to failure measurements to be taken at elevated temperatures.

    DATES:

    DOE will accept comments, data, and information regarding this NOPR no later than December 5, 2016. See section V, “Public Participation,” for details.

    ADDRESSES:

    Any comments submitted must identify the Test Procedure NOPR for Integrated LED Lamps, and provide docket number EERE-2016-BT-TP-0037 and/or regulatory information number (RIN) 1904-AD74. Comments may be submitted using any of the following methods:

    1. Federal eRulemaking Portal: www.regulations.gov. Follow the instructions for submitting comments.

    2. Email: [email protected]. Include the docket number EERE-2016-BT-TP-0037 and/or RIN 1904-AD74 in the subject line of the message.

    3. Postal Mail: Appliance and Equipment Standards Program, U.S. Department of Energy, Building Technologies Office, Mailstop EE-2J, 1000 Independence Avenue SW., Washington, DC, 20585-0121. If possible, please submit all items on a compact disc (CD), in which case it is not necessary to include printed copies.

    4. Hand Delivery/Courier: Appliance and Equipment Standards Program, U.S. Department of Energy, Building Technologies Office, 950 L'Enfant Plaza SW., Suite 600, Washington, DC, 20024. Telephone: (202) 586-6636. If possible, please submit all items on a CD, in which case it is not necessary to include printed copies.

    For detailed instructions on submitting comments and additional information on the rulemaking process, see section V of this NOPR, “Public Participation.”

    DOCKET:

    The docket, which includes Federal Register notices, comments, and other supporting documents/materials, is available for review at www.regulations.gov. All documents in the docket are listed in the www.regulations.gov index. However, some documents listed in the index, such as those containing information that is exempt from public disclosure, may not be publicly available.

    A link to the docket Web page can be found at https://www1.eere.energy.gov/buildings/appliance_standards/standards.aspx?productid=19. The docket Web page contains simple instructions on how to access all documents, including public comments, in the docket. See section V, “Public Participation,” for information on how to submit comments through www.regulations.gov.

    FOR FURTHER INFORMATION CONTACT:

    Ms. Lucy deButts, U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Building Technologies Office, EE-2J, 1000 Independence Avenue SW., Washington, DC, 20585-0121. Telephone: (202) 287-1604. Email: [email protected].

    Ms. Celia Sher, U.S. Department of Energy, Office of the General Counsel, GC-33, 1000 Independence Avenue SW., Washington, DC, 20585-0121. Telephone: (202) 287-6122. Email: [email protected].

    SUPPLEMENTARY INFORMATION:

    Table of Contents
    I. Authority and Background
    II. Synopsis of the Notice of Proposed Rulemaking
    III. Discussion
    A. Scope of Applicability
    B. Proposed Amendment to Approach for Determining Lifetime
    C. Effective and Compliance Dates
    IV. Procedural Issues and Regulatory Review
    A. Review Under Executive Order 12866
    B. Review Under the Regulatory Flexibility Act
    C. Review Under the Paperwork Reduction Act of 1995
    D. Review Under the National Environmental Policy Act of 1969
    E. Review Under Executive Order 13132
    F. Review Under Executive Order 12988
    G. Review Under the Unfunded Mandates Reform Act of 1995
    H. Review Under the Treasury and General Government Appropriations Act, 1999
    I. Review Under Executive Order 12630
    J. Review Under Treasury and General Government Appropriations Act, 2001
    K. Review Under Executive Order 13211
    L. Review Under Section 32 of the Federal Energy Administration Act of 1974
    V. Public Participation
    A. Submission of Comments
    B. Issues on Which DOE Seeks Comment
    VI. Approval of the Office of the Secretary
    I. Authority and Background

    Title III of the Energy Policy and Conservation Act of 1975 (42 U.S.C. 6291, et seq.; “EPCA” or “the Act”) sets forth a variety of provisions designed to improve energy efficiency.1 Part B of title III, which for editorial reasons was redesignated as Part A upon incorporation into the U.S. Code (42 U.S.C. 6291-6309, as codified), establishes the “Energy Conservation Program for Consumer Products Other Than Automobiles.” These consumer products include integrated light-emitting diode (LED) lamps, the subject of this notice of proposed rulemaking (NOPR).

    1 All references to EPCA refer to the statute as amended through the Energy Efficiency Improvement Act of 2015, Public Law 114-11 (April 30, 2015).

    Under EPCA, the energy conservation program consists essentially of four parts: (1) Testing, (2) labeling, (3) Federal energy conservation standards, and (4) certification and enforcement procedures. The testing requirements consist of test procedures that manufacturers of covered products must use as the basis for (1) certifying to DOE that their products comply with the applicable energy conservation standards adopted under EPCA (42 U.S.C. 6295(s)), and (2) making representations about the energy use or efficiency of those products (42 U.S.C. 6293(c)). Similarly, DOE must use these test procedures to determine whether the products comply with any relevant standards promulgated under EPCA. (42 U.S.C. 6295(s))

    Under 42 U.S.C. 6293, EPCA sets forth the criteria and procedures DOE must follow when prescribing or amending test procedures for covered products. EPCA provides, in relevant part, that any test procedures prescribed or amended under this section shall be reasonably designed to produce test results which measure energy efficiency, energy use or estimated annual operating cost of a covered product during a representative average use cycle or period of use and shall not be unduly burdensome to conduct. (42 U.S.C. 6293(b)(3))

    In addition, if DOE determines that a test procedure amendment is warranted, it must publish proposed test procedures and offer the public an opportunity to present oral and written comments on them. (42 U.S.C. 6293(b)(2)) Finally, in any rulemaking to amend a test procedure, DOE must determine to what extent, if any, the proposed test procedure would alter the measured energy efficiency of any covered product as determined under the existing test procedure. (42 U.S.C. 6293(e)(1)) If DOE determines that the amended test procedure would alter the measured efficiency of a covered product, DOE must amend the applicable energy conservation standard accordingly. (42 U.S.C. 6293(e)(2))

    DOE published a final rule in the Federal Register on July 1, 2016 (hereafter the “July 2016 LED TP final rule”), which adopted test procedures for integrated LED lamps in Appendix BB to support the implementation of labeling provisions by the Federal Trade Commission, as well as the ongoing general service lamps rulemaking, a category of lamps that includes LED lamps. 81 FR at 43404. In this notice, DOE proposes to amend the test procedures for integrated LED lamps.

    II. Synopsis of the Notice of Proposed Rulemaking

    In this NOPR, DOE proposes to amend the test procedures for integrated LED lamps with regard to the time to failure test method. Based on stakeholder feedback since the publication of the July 2016 LED TP final rule, DOE is proposing to allow time to failure measurements collected for DOE's LED lamps test procedure to be taken at elevated temperatures.

    Any amended test procedure adopted in this rulemaking will be effective as the applicable DOE test procedure beginning 30 days after publication of a final rule in the Federal Register. Representations of energy use or energy efficiency must be based on testing in accordance with this rulemaking, if adopted, beginning 180 days after the publication of a test procedure final rule. DOE notes that testing done in accordance with the current test procedure would also be in accordance with the amended test procedure proposed here.

    III. Discussion A. Scope of Applicability

    EPCA defines an LED as a p-n junction 2 solid-state device, the radiated output of which, either in the infrared region, visible region, or ultraviolet region, is a function of the physical construction, material used, and exciting current of the device. (42 U.S.C. 6291(30)(CC)) In the July 2016 LED TP final rule, DOE stated that the rulemaking applied to LED lamps that met DOE's adopted definition of an integrated LED lamp, which was based on the term as defined by ANSI/IES RP-16-2010, “Nomenclature and Definitions for Illuminating Engineering,” and adopted as follows:

    2 P-n junction is the boundary between p-type and n-type material in a semiconductor device, such as LEDs. P-n junctions are diodes, active sites where current can flow readily in one direction but not in the other direction.

    Integrated light-emitting diode lamp means an integrated LED lamp as defined in ANSI/IES RP-16 (incorporated by reference; see § 430.3).

    The ANSI/IES standard defines an integrated LED lamp as an integrated assembly that comprises LED packages (components) or LED arrays (modules) (collectively referred to as an LED source), an LED driver, an ANSI standard base, and other optical, thermal, mechanical and electrical components (such as phosphor layers, insulating materials, fasteners to hold components within the lamp together, and electrical wiring). The LED lamp is intended to connect directly to a branch circuit through a corresponding ANSI standard socket. 81 FR 43403, 43405 (July 1, 2016). This NOPR proposes to amend the test procedures for integrated LED lamps.

    B. Proposed Amendment To Approach for Determining Lifetime

    In the July 2016 LED TP final rule, DOE adopted test procedures, located in appendix BB to subpart B of 10 CFR part 430, for measuring and projecting time to failure of LED lamps based on lumen maintenance data. The adopted test procedures were largely based on the industry standards IES LM-84-14, “Approved Method: Measuring Luminous Flux and Color Maintenance of LED Lamps, Light Engines, and Luminaires,” and IES TM-28-14, “Projecting Long-Term Luminous Flux Maintenance of LED Lamps and Luminaires,” for the applicable lumen maintenance measurements and time to failure projection methods, with some modifications. 81 FR 43403, 43427-43428 (July 1, 2016). IES LM-84-14 provides a method for lumen maintenance measurement of integrated LED lamps and specifies the operational and environmental conditions during testing such as operating cycle, ambient temperature, airflow, and orientation. Lumen maintenance is the measure of lumen output after an elapsed operating time, expressed as a percentage of the initial lumen output. IES TM-28-14 provides methods for projecting the lumen maintenance of integrated LED lamps depending on the available data and test duration. The provided methods include projecting time to failure using multiple lumen maintenance measurements collected over a period of time, rather than a single measurement at the end of the test duration. 81 FR at 43409 (July 1, 2016). The adopted test procedure requires that the projection calculation be completed for each individual LED lamp and the projected time to failure values then be used to calculate the lifetime of the sample using the prescribed methods. 81 FR at 43414 (July 1, 2016). The lumen maintenance measurements used in the projection are to be taken at an ambient temperature of 25 °C ± 5 °C.
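    For illustration only, the general shape of such a projection, fitting a series of lumen maintenance measurements and projecting the time at which output falls to a failure threshold, is sketched below. The exponential decay model, the 70 percent threshold, and the cap of six times the test duration are assumptions made for this sketch; it does not reproduce the precise appendix BB or IES TM-28-14 calculation.

        import math

        # Illustrative sketch: least-squares fit of L(t) = B * exp(-a * t) to lumen
        # maintenance data for one lamp, then projection of the time at which
        # maintenance falls to the assumed 70 percent failure threshold.
        def project_time_to_failure(hours, maintenance, threshold=0.70, max_multiple=6):
            n = len(hours)
            y = [math.log(m) for m in maintenance]      # linearize: ln L = ln B - a * t
            x_bar = sum(hours) / n
            y_bar = sum(y) / n
            sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(hours, y))
            sxx = sum((xi - x_bar) ** 2 for xi in hours)
            slope = sxy / sxx
            a, B = -slope, math.exp(y_bar - slope * x_bar)
            if a <= 0:                                  # no measurable decay within the test
                return max_multiple * max(hours)
            t_fail = math.log(B / threshold) / a        # solve B * exp(-a * t) = threshold
            return min(t_fail, max_multiple * max(hours))

        # Example with hypothetical measurements taken every 1,000 hours for 6,000 hours;
        # the projection here is capped at six times the test duration (36,000 hours).
        hours = [1000, 2000, 3000, 4000, 5000, 6000]
        maintenance = [0.995, 0.989, 0.984, 0.978, 0.972, 0.967]
        print(project_time_to_failure(hours, maintenance))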

    Since the publication of the July 2016 LED TP final rule, DOE received a request from the National Electrical Manufacturers Association (NEMA) to approve the use of test results from the Elevated Temperature Life Test 3 contained in the ENERGY STAR Program Requirements Product Specification for Lamps (Light Bulbs) Eligibility Criteria Version 2.0 (hereafter “ENERGY STAR Lamps Specification V2.0”) 4 in place of the test method for measuring lumen maintenance and time to failure in DOE's LED lamps test procedure in order to reduce test burden. NEMA asserted that because the test conditions from the Elevated Temperature Life Test are more stringent, the test results, if any different, would be more conservative than if the lamps were tested according to the current DOE LED lamps test procedure. (NEMA, No. 48 at p. 1).

    3 The ENERGY STAR Elevated Temperature Life Test Method can be found at https://www.energystar.gov/sites/default/files/ENERGY%20STAR%20Elevated%20Temperature%20Life%20Test%20Method.pdf.

    4 “ENERGY STAR Program Requirements: Product Specification for Lamps (Light Bulbs) Version 2.0.” U.S. Environmental Protection Agency, February 2016.

    DOE agrees that the operating temperature test conditions specified in the ENERGY STAR Elevated Temperature Life Test will more negatively affect performance values than those prescribed in DOE's LED lamps test procedure since the Elevated Temperature Life Test requires testing of LED lamps at higher ambient temperatures. Specifically, the Elevated Temperature Life Test requires directional lamps with rated wattages less than or equal to 20 W to be tested at 45 °C ± 5 °C; directional lamps with rated wattages greater than 20 W to be tested at 55 °C ± 5 °C; and all other omnidirectional and decorative lamps to be tested at 45 °C ± 5 °C. DOE's test procedure requires operating temperature to be maintained at 25 °C ± 5 °C. The Elevated Temperature Life Test applies only to lamps that do not have a “not for use in totally enclosed or recessed luminaires” statement (or an equivalent statement) on the lamp label.
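    For illustration only, the ambient temperature conditions of the Elevated Temperature Life Test summarized above can be expressed as a simple decision rule; the helper function below is hypothetical, and each nominal value carries a tolerance of plus or minus 5 °C.

        # Illustrative sketch of the Elevated Temperature Life Test ambient temperature
        # conditions described above; returns None where the test does not apply.
        def elevated_test_temperature_c(directional, rated_watts, enclosed_use_restricted):
            if enclosed_use_restricted:
                # Lamp label restricts use in totally enclosed or recessed luminaires.
                return None
            if directional and rated_watts > 20:
                return 55                   # directional lamps rated above 20 W
            return 45                       # directional lamps at or below 20 W, and all
                                            # other omnidirectional and decorative lamps

        print(elevated_test_temperature_c(directional=True, rated_watts=25, enclosed_use_restricted=False))
        print(elevated_test_temperature_c(directional=False, rated_watts=9, enclosed_use_restricted=False))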

    In addition to a difference in ambient temperature during lumen maintenance testing, ENERGY STAR's and DOE's test procedures also differ in how to determine the value of lifetime. ENERGY STAR's test procedure provides a method to confirm a manufacturer-declared lifetime value. It requires manufacturers to meet or exceed minimum lumen maintenance values at a specific test duration to be able to claim a certain maximum lifetime. For example, for a lamp to be certified with a lifetime of 25,000 hours, that lamp must achieve a minimum lumen maintenance of 91.8% after 6,000 hours of operation. DOE's test procedure for determining lifetime depends on the time to failure of individual units, which is determined by taking lumen maintenance measurements at multiple intervals and then calculating the time to failure. For example, after 6,000 hours of testing, manufacturers can use the specified method to project a lamp's time to failure value to be up to 36,000 hours. Lifetime is then determined by calculating the median time to failure of the sample (calculated as the arithmetic mean of the time to failure of the two middle sample units when the numbers are sorted in value order). This is consistent with the statutory definition of lifetime, which is described as the length of operating time of a statistically large group of lamps between first use and failure of 50 percent of the group. 42 U.S.C. 6291(30)(P).
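    For illustration only, the two approaches described above can be contrasted in a short sketch; the sample values and function names are hypothetical, and the DOE-style helper assumes an even sample size, consistent with the two-middle-unit description above.

        # Illustrative sketch contrasting the two lifetime determinations described above.
        def doe_sample_lifetime(unit_times_to_failure_hours):
            # Lifetime is the median time to failure of the sample, taken here as the
            # arithmetic mean of the two middle units after sorting (even sample size).
            s = sorted(unit_times_to_failure_hours)
            mid = len(s) // 2
            return (s[mid - 1] + s[mid]) / 2

        def energy_star_claim_check(lumen_maintenance_at_6000_h, claimed_lifetime_hours):
            # Pass/fail screen for one declared lifetime tier: a 25,000 hour claim
            # requires at least 91.8 percent lumen maintenance after 6,000 hours.
            if claimed_lifetime_hours == 25_000:
                return lumen_maintenance_at_6000_h >= 0.918
            raise ValueError("thresholds for other lifetime tiers are not shown in this sketch")

        print(doe_sample_lifetime([28_000, 31_000, 33_000, 36_000, 36_000, 36_000]))  # 34,500 hours
        print(energy_star_claim_check(0.93, 25_000))                                  # True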

    To maintain consistency with the statutory definition of lifetime, DOE is not proposing to substitute the ENERGY STAR lifetime test procedure wholesale for DOE's time to failure measurements. Instead, DOE is proposing in this NOPR to amend section 4.4.4 of appendix BB to allow time to failure testing to be conducted at ambient temperatures above the currently required 25 °C ± 5 °C. Manufacturers would then have the flexibility to conduct the Elevated Temperature Life Test for ENERGY STAR while also following the calculation method in DOE's LED lamps test procedure, avoiding duplicative testing. LED lamps are sensitive to changes in ambient temperature and generally perform less favorably at higher temperatures, so DOE believes this proposed change would result in, if anything, more conservative representations of lifetime.

    DOE requests comment on the proposed amendment to the integrated LED lamps test procedure to allow testing for time to failure, as prescribed in section 4 of appendix BB to subpart B of 10 CFR part 430, to be conducted at elevated temperatures.

    C. Effective and Compliance Dates

    If adopted, the effective date for the proposed test procedure amendments would be 30 days after publication of the final rule in the Federal Register. Pursuant to EPCA, manufacturers of covered products must use the test procedure as the basis for determining that their products comply with any applicable energy conservation standards and for making representations about the efficiency of those products. (42 U.S.C. 6293(c); 42 U.S.C. 6295(s)) For those energy efficiency or consumption metrics covered by the DOE test procedure, manufacturers must make representations in accordance with the DOE test procedure beginning 180 days after publication of the final rule in the Federal Register.

    IV. Procedural Issues and Regulatory Review A. Review Under Executive Order 12866

    The Office of Management and Budget (OMB) has determined that test procedure rulemakings do not constitute “significant regulatory actions” under section 3(f) of Executive Order 12866, Regulatory Planning and Review, 58 FR 51735 (Oct. 4, 1993). Accordingly, this action was not subject to review under the Executive Order by the Office of Information and Regulatory Affairs (OIRA) in the OMB.

    B. Review Under the Regulatory Flexibility Act

    The Regulatory Flexibility Act (5 U.S.C. 601 et seq.) requires preparation of an initial regulatory flexibility analysis (IRFA) for any rule that by law must be proposed for public comment, unless the agency certifies that the rule, if promulgated, will not have a significant economic impact on a substantial number of small entities. As required by Executive Order 13272, “Proper Consideration of Small Entities in Agency Rulemaking,” 67 FR 53461 (August 16, 2002), DOE published procedures and policies on February 19, 2003 to ensure that the potential impacts of its rules on small entities are properly considered during the DOE rulemaking process. 68 FR 7990. DOE has made its procedures and policies available on the Office of the General Counsel's Web site: http://energy.gov/gc/office-general-counsel.

    DOE reviewed the amended test procedures for LED lamps proposed in this NOPR under the provisions of the Regulatory Flexibility Act (RFA) and the procedures and policies published on February 19, 2003. DOE certifies that the proposed rule, if adopted, would not have a significant economic impact on a substantial number of small entities. The factual basis for this certification is set forth in the following paragraphs.

    The Small Business Administration (SBA) considers a business entity to be a small business, if, together with its affiliates, it employs less than a threshold number of workers specified in 13 CFR part 121. These size standards and codes are established by the North American Industry Classification System (NAICS). Manufacturing of LED lamps is classified under NAICS 335110, “Electric Lamp Bulb and Part Manufacturing.” The SBA sets a threshold of 1,250 employees or less for an entity to be considered as a small business for this category.

    To estimate the number of companies that could be small businesses that sell LED lamps covered by this rulemaking, DOE conducted a market survey using publicly available information. DOE's research involved information from the Environmental Protection Agency's ENERGY STAR Certified Light Bulbs Database,5 LED Lighting Facts Database,6 previous rulemakings, individual company Web sites, SBA's database, and market research tools (e.g., Hoover's reports). DOE screened out companies that did not meet the definition of a “small business” or are completely foreign owned and operated. DOE identified approximately seven small businesses that maintain domestic production facilities for the integrated LED lamps covered by this rulemaking.

    5 ENERGY STAR Certified Light Bulbs Database, https://www.energystar.gov/productfinder/product/certified-light-bulbs/results (last accessed October 19, 2016).

    6 DOE's LED Lighting Facts Database, http://www.lightingfacts.com/products (last accessed October 19, 2016).

    DOE notes that this proposed rule merely seeks to amend the existing LED test procedure in a way that would reduce test burden on manufacturers. The proposed amendment would reduce the instances in which two tests for lifetime must be conducted for the same lamp. In addition, the proposal is supported by industry, including NEMA. Manufacturers that would seek to test time to failure at elevated temperatures under the proposed amendment, if adopted, are likely to have previously accounted for testing costs associated with the ENERGY STAR program as these measurements are required to be reported to ENERGY STAR if manufacturers certify the lamps as meeting the program requirements. For manufacturers who do not test products at elevated temperatures, this proposed amendment presents no additional burden.

    For these reasons, DOE tentatively concludes and certifies that the proposed amendment in this NOPR would not have a significant economic impact on a substantial number of small entities, and the preparation of an IRFA is not warranted. DOE will transmit the certification and supporting statement of factual basis to the Chief Counsel for Advocacy of the SBA for review under 5 U.S.C. 605(b).

    C. Review Under the Paperwork Reduction Act of 1995

    Manufacturers of LED lamps must certify to DOE that their products comply with any applicable energy conservation standards. To certify compliance, manufacturers must first obtain test data for their products according to the DOE test procedures, including any amendments adopted for those test procedures. DOE has established regulations for the certification and recordkeeping requirements for all covered consumer products and commercial equipment, including LED lamps. (See generally 10 CFR part 429.) The collection-of-information requirement for the certification and recordkeeping is subject to review and approval by OMB under the Paperwork Reduction Act (PRA). This requirement has been approved by OMB under OMB control number 1910-1400. Public reporting burden for the certification is estimated to average 30 hours per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information.

    Notwithstanding any other provision of the law, no person is required to respond to, nor must any person be subject to a penalty for failure to comply with, a collection of information subject to the requirements of the PRA, unless that collection of information displays a currently valid OMB control number.

    D. Review Under the National Environmental Policy Act of 1969

    In this proposed rule, DOE is proposing an amendment to the test procedure for LED lamps that will be used to support the ongoing general service lamps energy conservation standards rulemaking as well as the FTC's Lighting Facts labeling program. DOE has determined that this rule falls into a class of actions that are categorically excluded from review under the National Environmental Policy Act of 1969 (42 U.S.C. 4321 et seq.) and DOE's implementing regulations at 10 CFR part 1021. Specifically, this proposed rule would amend the existing test procedure for integrated LED lamps without affecting the amount, quality or distribution of energy usage, and, therefore, will not result in any environmental impacts. Thus, this rulemaking is covered by Categorical Exclusion A5 under 10 CFR part 1021, subpart D, which applies to any rulemaking that interprets or amends an existing rule without changing the environmental effect of that rule. Accordingly, neither an environmental assessment nor an environmental impact statement is required.

    E. Review Under Executive Order 13132

    Executive Order 13132, “Federalism,” 64 FR 43255 (August 4, 1999), imposes certain requirements on agencies formulating and implementing policies or regulations that preempt State law or that have Federalism implications. The Executive Order requires agencies to examine the constitutional and statutory authority supporting any action that would limit the policymaking discretion of the States and to carefully assess the necessity for such actions. The Executive Order also requires agencies to have an accountable process to ensure meaningful and timely input by State and local officials in the development of regulatory policies that have Federalism implications. On March 14, 2000, DOE published a statement of policy describing the intergovernmental consultation process it will follow in the development of such regulations. 65 FR 13735. DOE has examined this proposed rule and has determined that it will not have a substantial direct effect on the States, on the relationship between the national government and the States, or on the distribution of power and responsibilities among the various levels of government. EPCA governs and prescribes Federal preemption of State regulations as to energy conservation for the products that are the subject of this proposed rule. States can petition DOE for exemption from such preemption to the extent, and based on criteria, set forth in EPCA. (42 U.S.C. 6297(d)) No further action is required by Executive Order 13132.

    F. Review Under Executive Order 12988

    Regarding the review of existing regulations and the promulgation of new regulations, section 3(a) of Executive Order 12988, “Civil Justice Reform,” 61 FR 4729 (Feb. 7, 1996), imposes on Federal agencies the general duty to adhere to the following requirements: (1) Eliminate drafting errors and ambiguity, (2) write regulations to minimize litigation, (3) provide a clear legal standard for affected conduct rather than a general standard, and (4) promote simplification and burden reduction. Section 3(b) of Executive Order 12988 specifically requires that Executive agencies make every reasonable effort to ensure that the regulation (1) clearly specifies the preemptive effect, if any, (2) clearly specifies any effect on existing Federal law or regulation, (3) provides a clear legal standard for affected conduct while promoting simplification and burden reduction, (4) specifies the retroactive effect, if any, (5) adequately defines key terms, and (6) addresses other important issues affecting clarity and general draftsmanship under any guidelines issued by the Attorney General. Section 3(c) of Executive Order 12988 requires Executive agencies to review regulations in light of applicable standards in sections 3(a) and 3(b) to determine whether they are met or it is unreasonable to meet one or more of them. DOE has completed the required review and determined that, to the extent permitted by law, the proposed rule meets the relevant standards of Executive Order 12988.

    G. Review Under the Unfunded Mandates Reform Act of 1995

    Title II of the Unfunded Mandates Reform Act of 1995 (UMRA) requires each Federal agency to assess the effects of Federal regulatory actions on State, local, and Tribal governments and the private sector. Public Law 104-4, sec. 201 (codified at 2 U.S.C. 1531). For a proposed regulatory action resulting in a rule that may cause the expenditure by State, local, and Tribal governments, in the aggregate, or by the private sector of $100 million or more in any one year (adjusted annually for inflation), section 202 of UMRA requires a Federal agency to publish a written statement that estimates the resulting costs, benefits, and other effects on the national economy. (2 U.S.C. 1532(a), (b)) The UMRA also requires a Federal agency to develop an effective process to permit timely input by elected officers of State, local, and Tribal governments on a proposed “significant intergovernmental mandate,” and requires an agency plan for giving notice and opportunity for timely input to potentially affected small governments before establishing any requirements that might significantly or uniquely affect small governments. On March 18, 1997, DOE published a statement of policy on its process for intergovernmental consultation under UMRA. 62 FR 12820; also available at http://energy.gov/gc/office-general-counsel. DOE examined this proposed rule according to UMRA and its statement of policy and determined that the rule contains neither an intergovernmental mandate, nor a mandate that may result in the expenditure of $100 million or more in any year, so these requirements do not apply.

    H. Review Under the Treasury and General Government Appropriations Act, 1999

    Section 654 of the Treasury and General Government Appropriations Act, 1999 (Public Law 105-277) requires Federal agencies to issue a Family Policymaking Assessment for any rule that may affect family well-being. This rule would not have any impact on the autonomy or integrity of the family as an institution. Accordingly, DOE has concluded that it is not necessary to prepare a Family Policymaking Assessment.

    I. Review Under Executive Order 12630

    DOE has determined, under Executive Order 12630, “Governmental Actions and Interference with Constitutionally Protected Property Rights” 53 FR 8859 (March 18, 1988), that this regulation would not result in any takings that might require compensation under the Fifth Amendment to the U.S. Constitution.

    J. Review Under Treasury and General Government Appropriations Act, 2001

    Section 515 of the Treasury and General Government Appropriations Act, 2001 (44 U.S.C. 3516 note) provides for agencies to review most disseminations of information to the public under guidelines established by each agency pursuant to general guidelines issued by OMB. OMB's guidelines were published at 67 FR 8452 (Feb. 22, 2002), and DOE's guidelines were published at 67 FR 62446 (Oct. 7, 2002). DOE has reviewed this proposed rule under the OMB and DOE guidelines and has concluded that it is consistent with applicable policies in those guidelines.

    K. Review Under Executive Order 13211

    Executive Order 13211, “Actions Concerning Regulations That Significantly Affect Energy Supply, Distribution, or Use,” 66 FR 28355 (May 22, 2001), requires Federal agencies to prepare and submit to OMB a Statement of Energy Effects for any significant energy action. A “significant energy action” is defined as any action by an agency that promulgates or is expected to lead to promulgation of a final rule, and that (1) is a significant regulatory action under Executive Order 12866, or any successor order; and (2) is likely to have a significant adverse effect on the supply, distribution, or use of energy; or (3) is designated by the Administrator of OIRA as a significant energy action. For any proposed significant energy action, the agency must give a detailed statement of any adverse effects on energy supply, distribution, or use if the regulation is implemented, and of reasonable alternatives to the action and their expected benefits on energy supply, distribution, and use.

    This regulatory action to propose an amended test procedure for measuring the lumen maintenance and time to failure of LED lamps is not a significant regulatory action under Executive Order 12866. Moreover, it would not have a significant adverse effect on the supply, distribution, or use of energy, nor has it been designated as a significant energy action by the Administrator of OIRA. Therefore, it is not a significant energy action, and, accordingly, DOE has not prepared a Statement of Energy Effects.

    L. Review Under Section 32 of the Federal Energy Administration Act of 1974

    Under section 301 of the Department of Energy Organization Act (Public Law 95-91; 42 U.S.C. 7101), DOE must comply with section 32 of the Federal Energy Administration Act of 1974, as amended by the Federal Energy Administration Authorization Act of 1977. (15 U.S.C. 788; FEAA) Section 32 essentially provides in relevant part that, where a proposed rule authorizes or requires use of commercial standards, the notice of proposed rulemaking must inform the public of the use and background of such standards. In addition, section 32(c) requires DOE to consult with the Attorney General and the Chairman of the Federal Trade Commission (FTC) concerning the impact of the commercial or industry standards on competition.

    The proposed amendment to the test procedures for LED lamps in this NOPR does not incorporate any new standards that would require compliance under section 32(b) of the FEAA.

    V. Public Participation A. Submission of Comments

    DOE will accept comments, data, and information regarding this proposed rule no later than the date provided in the DATES section at the beginning of this NOPR. Interested parties may submit comments, data, and other information using any of the methods described in the ADDRESSES section at the beginning of this NOPR.

    Submitting comments via regulations.gov. The regulations.gov Web page will require you to provide your name and contact information. Your contact information will be viewable to DOE Building Technologies staff only. Your contact information will not be publicly viewable except for your first and last names, organization name (if any), and submitter representative name (if any). If your comment is not processed properly because of technical difficulties, DOE will use this information to contact you. If DOE cannot read your comment due to technical difficulties and cannot contact you for clarification, DOE may not be able to consider your comment.

    However, your contact information will be publicly viewable if you include it in the comment or in any documents attached to your comment. Any information that you do not want to be publicly viewable should not be included in your comment, nor in any document attached to your comment. Persons viewing comments will see only first and last names, organization names, correspondence containing comments, and any documents submitted with the comments.

    Do not submit to regulations.gov information for which disclosure is restricted by statute, such as trade secrets and commercial or financial information (hereinafter referred to as Confidential Business Information (CBI)). Comments submitted through regulations.gov cannot be claimed as CBI. By submitting a comment through the Web site, you waive any CBI claim for the information submitted. For information on submitting CBI, see the Confidential Business Information section.

    DOE processes submissions made through regulations.gov before posting. Normally, comments will be posted within a few days of being submitted. However, if large volumes of comments are being processed simultaneously, your comment may not be viewable for up to several weeks. Please keep the comment tracking number that regulations.gov provides after you have successfully uploaded your comment.

    Submitting comments via email, hand delivery, or mail. Comments and documents submitted via email, hand delivery, or mail also will be posted to regulations.gov. If you do not want your personal contact information to be publicly viewable, do not include it in your comment or any accompanying documents. Instead, provide your contact information on a cover letter. Include your first and last names, email address, telephone number, and optional mailing address. The cover letter will not be publicly viewable as long as it does not include any comments.

    Include contact information each time you submit comments, data, documents, and other information to DOE. If you submit via mail or hand delivery, please provide all items on a CD, if feasible. It is not necessary to submit printed copies. No facsimiles (faxes) will be accepted.

    Comments, data, and other information submitted to DOE electronically should be provided in PDF (preferred), Microsoft Word or Excel, WordPerfect, or text (ASCII) file format. Provide documents that are not secured, written in English and free of any defects or viruses. Documents should not contain special characters or any form of encryption and, if possible, they should carry the electronic signature of the author.

    Campaign form letters. Please submit campaign form letters by the originating organization in batches of between 50 and 500 form letters per PDF or as one form letter with a list of supporters' names compiled into one or more PDFs. This reduces comment processing and posting time.

    Confidential Business Information. According to 10 CFR 1004.11, any person submitting information that he or she believes to be confidential and exempt by law from public disclosure should submit via email, postal mail, or hand delivery two well-marked copies: one copy of the document marked confidential including all the information believed to be confidential, and one copy of the document marked non-confidential with the information believed to be confidential deleted. Submit these documents via email or on a CD, if feasible. DOE will make its own determination about the confidential status of the information and treat it according to its determination.

    Factors of interest to DOE when evaluating requests to treat submitted information as confidential include: (1) A description of the items; (2) whether and why such items are customarily treated as confidential within the industry; (3) whether the information is generally known by or available from other sources; (4) whether the information has previously been made available to others without obligation concerning its confidentiality; (5) an explanation of the competitive injury to the submitting person which would result from public disclosure; (6) when such information might lose its confidential character due to the passage of time; and (7) why disclosure of the information would be contrary to the public interest.

    It is DOE's policy that all comments may be included in the public docket, without change and as received, including any personal information provided in the comments (except information deemed to be exempt from public disclosure).

    B. Issues on Which DOE Seeks Comment

    Although comments are welcome on all aspects of this proposed rulemaking, DOE is particularly interested in comments on the proposed amendment to the integrated LED lamps test procedure to allow for testing to be conducted at elevated temperatures during time to failure tests as prescribed in section 4 of appendix BB to subpart B of 10 CFR part 430.

    VI. Approval of the Office of the Secretary

    The Secretary of Energy has approved publication of this proposed rule.

    List of Subjects in 10 CFR Part 430

    Administrative practice and procedure, Confidential business information, Energy conservation, Household appliances, Imports, Incorporation by reference, Intergovernmental relations, Small businesses.

    Issued in Washington, DC, on October 28, 2016.
    Kathleen B. Hogan,
    Deputy Assistant Secretary for Energy Efficiency, Energy Efficiency and Renewable Energy.

    For the reasons stated in the preamble, DOE proposes to amend part 430 of Chapter II of Title 10, Code of Federal Regulations as set forth below:

    PART 430—ENERGY CONSERVATION PROGRAM FOR CONSUMER PRODUCTS
    1. The authority citation for part 430 continues to read as follows:
    Authority:

    42 U.S.C. 6291-6309; 28 U.S.C. 2461 note.

    2. Appendix BB to subpart B of part 430 is amended by revising the introductory note and section 4.4.4 to read as follows:
    Appendix BB to Subpart B of Part 430—Uniform Test Method for Measuring the Input Power, Lumen Output, Lamp Efficacy, Correlated Color Temperature (CCT), Color Rendering Index (CRI), Power Factor, Time to Failure, and Standby Mode Power of Integrated Light-Emitting Diode (LED) Lamps

    Note: On or after [Date 180 Days after Publication of Final Rule in the Federal Register], any representations made with respect to the energy use or efficiency of integrated light-emitting diode lamps must be made in accordance with the results of testing pursuant to this appendix.

    4. Active Mode Test Method to Measure Time to Failure

    4.4. Operating Conditions and Setup Between Lumen Output Measurements

    4.4.4. Ambient temperature conditions must be as described in section 4.4 of IES LM-84. Maintain the ambient temperature at 25 °C ± 5 °C or at a manufacturer-selected temperature higher than 25 °C with the same ± 5 °C tolerance.
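    As an illustrative aid only (not part of the regulatory text), the ambient-temperature requirement in section 4.4.4 can be read as a simple tolerance check: the measured ambient temperature must stay within ±5 °C of the chosen setpoint, where the setpoint is either 25 °C or a manufacturer-selected value above 25 °C. The Python sketch below reflects that reading; the function and variable names are hypothetical.

```python
# Hypothetical sketch of the section 4.4.4 ambient-temperature tolerance;
# the names and structure are illustrative, not part of the proposed rule text.
def ambient_temperature_ok(measured_c: float, setpoint_c: float = 25.0) -> bool:
    """Return True if a measured ambient temperature satisfies the proposed
    tolerance: a setpoint of 25 degrees C, or a manufacturer-selected setpoint
    above 25 degrees C, with a +/-5 degree C tolerance in either case."""
    if setpoint_c < 25.0:
        raise ValueError("setpoint must be 25 C or a manufacturer-selected value above 25 C")
    return abs(measured_c - setpoint_c) <= 5.0

# Example: 28.4 C passes against the default 25 C setpoint, but not against
# a manufacturer-selected 45 C setpoint.
print(ambient_temperature_ok(28.4))        # True
print(ambient_temperature_ok(28.4, 45.0))  # False
```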

    [FR Doc. 2016-26681 Filed 11-3-16; 8:45 am] BILLING CODE 6450-01-P
    DEPARTMENT OF TRANSPORTATION
    Federal Aviation Administration
    14 CFR Part 39
    [Docket No. FAA-2016-9357; Directorate Identifier 2016-CE-030-AD]
    RIN 2120-AA64
    Airworthiness Directives; Pilatus Aircraft Ltd. Airplanes
    AGENCY:

    Federal Aviation Administration (FAA), Department of Transportation (DOT).

    ACTION:

    Notice of proposed rulemaking (NPRM).

    SUMMARY:

    We propose to adopt a new airworthiness directive (AD) for Pilatus Aircraft Ltd. Models PC-6, PC-6-H1, PC-6-H2, PC-6/350, PC-6/350-H1, PC-6/350-H2, PC-6/A, PC-6/A-H1, PC-6/A-H2, PC-6/B-H2, PC-6/B1-H2, PC-6/B2-H2, PC-6/B2-H4, PC-6/C-H2, and PC-6/C1-H2 airplanes. This proposed AD results from mandatory continuing airworthiness information (MCAI) originated by an aviation authority of another country to identify and correct an unsafe condition on an aviation product. The MCAI describes the unsafe condition as certain combinations of the aileron counterweight and the attaching parts possibly resulting in reduced thread engagement and leading to disconnection of the aileron counterweight from the aileron. We are issuing this proposed AD to require actions to address the unsafe condition on these products.

    DATES:

    We must receive comments on this proposed AD by December 19, 2016.

    ADDRESSES:

    You may send comments by any of the following methods:

    Federal eRulemaking Portal: Go to http://www.regulations.gov. Follow the instructions for submitting comments.

    Fax: (202) 493-2251.

    Mail: U.S. Department of Transportation, Docket Operations, M-30, West Building Ground Floor, Room W12-140, 1200 New Jersey Avenue SE., Washington, DC 20590.

    Hand Delivery: U.S. Department of Transportation, Docket Operations, M-30, West Building Ground Floor, Room W12-140, 1200 New Jersey Avenue SE., Washington, DC 20590, between 9 a.m. and 5 p.m., Monday through Friday, except Federal holidays.

    For service information identified in this proposed AD, contact Pilatus Aircraft Ltd., Customer Technical Support (MCC), P.O. Box 992, CH-6371 Stans, Switzerland; phone: +41 (0)41 619 3333; fax: +41 (0)41 619 7311; email: [email protected]; Internet: http://www.pilatus-aircraft.com. You may review this referenced service information at the FAA, Small Airplane Directorate, 901 Locust, Kansas City, Missouri 64106. For information on the availability of this material at the FAA, call (816) 329-4148.

    Examining the AD Docket

    You may examine the AD docket on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-9357; or in person at the Docket Management Facility between 9 a.m. and 5 p.m., Monday through Friday, except Federal holidays. The AD docket contains this proposed AD, the regulatory evaluation, any comments received, and other information. The street address for the Docket Office (telephone (800) 647-5527) is in the ADDRESSES section. Comments will be available in the AD docket shortly after receipt.

    FOR FURTHER INFORMATION CONTACT:

    Doug Rudolph, Aerospace Engineer, FAA, Small Airplane Directorate, 901 Locust, Room 301, Kansas City, Missouri 64106; telephone: (816) 329-4059; fax: (816) 329-4090; email: [email protected].

    SUPPLEMENTARY INFORMATION:
    Comments Invited

    We invite you to send any written relevant data, views, or arguments about this proposed AD. Send your comments to an address listed under the ADDRESSES section. Include “Docket No. FAA-2016-9357; Directorate Identifier 2016-CE-030-AD” at the beginning of your comments. We specifically invite comments on the overall regulatory, economic, environmental, and energy aspects of this proposed AD. We will consider all comments received by the closing date and may amend this proposed AD because of those comments.

    We will post all comments we receive, without change, to http://regulations.gov, including any personal information you provide. We will also post a report summarizing each substantive verbal contact we receive about this proposed AD.

    Discussion

    The European Aviation Safety Agency (EASA), which is the Technical Agent for the Member States of the European Community, has issued AD No.: 2016-0183, dated September 13, 2016 (referred to after this as “the MCAI”), to correct an unsafe condition for Pilatus Aircraft Ltd. Models PC-6, PC-6-H1, PC-6-H2, PC-6/350, PC-6/350-H1, PC-6/350-H2, PC-6/A, PC-6/A-H1, PC-6/A-H2, PC-6/B-H2, PC-6/B1-H2, PC-6/B2-H2, PC-6/B2-H4, PC-6/C-H2, and PC-6/C1-H2 airplanes and was based on mandatory continuing airworthiness information originated by an aviation authority of another country. The MCAI states:

    The proper installation of the aileron counterweight requires a combination, peculiar to each aileron, of anchor nut types, bolt types, number of washers, and the definition of the bolt torque. Some combinations of counterweight and attaching parts, which could result in reduced thread engagement, have been reported on a PC-6 aeroplane.

    This condition, if not detected and corrected, may lead to a disconnection of the aileron counterweight from the aileron, possibly resulting in reduced control of the aeroplane.

    To address this potential unsafe condition, Pilatus issued Service Bulletin (SB) No. 57-006 (hereafter referred to as `the SB' in this AD) to provide inspection instructions.

    For the reason described above, this AD requires identification and inspection of the affected aileron mass-balance counterweight attachment parts and, depending on findings, accomplishment of applicable corrective action(s).

    You may examine the MCAI on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-9357.

    Related Service Information Under 1 CFR Part 51

    Pilatus Aircraft Ltd. has issued Pilatus PC-6 Service Bulletin No. 57-006, dated May 13, 2016. The service information describes procedures for removal, installation, and inspection of the ailerons, aileron balance tabs, and the aileron counterweights and their attaching parts. This service information is reasonably available because the interested parties have access to it through their normal course of business or by the means identified in the ADDRESSES section of this NPRM.

    FAA's Determination and Requirements of the Proposed AD

    This product has been approved by the aviation authority of another country, and is approved for operation in the United States. Pursuant to our bilateral agreement with this State of Design Authority, they have notified us of the unsafe condition described in the MCAI and service information referenced above. We are proposing this AD because we evaluated all information and determined the unsafe condition exists and is likely to exist or develop on other products of the same type design.

    Costs of Compliance

    We estimate that this proposed AD would affect 30 products of U.S. registry. We also estimate that it would take about 2 work-hours per product to comply with the basic requirements of this proposed AD. The average labor rate is $85 per work-hour. Required parts would cost about $100 per product.

    Based on these figures, we estimate the cost of the proposed AD on U.S. operators to be $8,100, or $270 per product.
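    The cost figures above follow directly from the stated assumptions (30 airplanes, 2 work-hours per airplane at $85 per work-hour, plus about $100 in parts per airplane). The short Python sketch below is illustrative only and is not part of the proposed AD; the variable names are ours, chosen for exposition.

```python
# Illustrative check of the cost-of-compliance estimate stated above.
# Variable names are for exposition only; the figures come from the NPRM text.
affected_products = 30      # airplanes of U.S. registry
work_hours = 2              # work-hours per airplane
labor_rate = 85             # dollars per work-hour
parts_cost = 100            # dollars per airplane

per_product = work_hours * labor_rate + parts_cost   # 2 * 85 + 100 = 270
fleet_total = per_product * affected_products        # 270 * 30 = 8,100

print(f"Per product: ${per_product:,}")   # Per product: $270
print(f"U.S. fleet:  ${fleet_total:,}")   # U.S. fleet:  $8,100
```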

    Authority for This Rulemaking

    Title 49 of the United States Code specifies the FAA's authority to issue rules on aviation safety. Subtitle I, section 106, describes the authority of the FAA Administrator. “Subtitle VII: Aviation Programs,” describes in more detail the scope of the Agency's authority.

    We are issuing this rulemaking under the authority described in “Subtitle VII, Part A, Subpart III, Section 44701: General requirements.” Under that section, Congress charges the FAA with promoting safe flight of civil aircraft in air commerce by prescribing regulations for practices, methods, and procedures the Administrator finds necessary for safety in air commerce. This regulation is within the scope of that authority because it addresses an unsafe condition that is likely to exist or develop on products identified in this rulemaking action.

    Regulatory Findings

    We determined that this proposed AD would not have federalism implications under Executive Order 13132. This proposed AD would not have a substantial direct effect on the States, on the relationship between the national Government and the States, or on the distribution of power and responsibilities among the various levels of government.

    For the reasons discussed above, I certify this proposed regulation:

    (1) Is not a “significant regulatory action” under Executive Order 12866,

    (2) Is not a “significant rule” under the DOT Regulatory Policies and Procedures (44 FR 11034, February 26, 1979),

    (3) Will not affect intrastate aviation in Alaska, and

    (4) Will not have a significant economic impact, positive or negative, on a substantial number of small entities under the criteria of the Regulatory Flexibility Act.

    List of Subjects in 14 CFR Part 39

    Air transportation, Aircraft, Aviation safety, Incorporation by reference, Safety.

    The Proposed Amendment

    Accordingly, under the authority delegated to me by the Administrator, the FAA proposes to amend 14 CFR part 39 as follows:

    PART 39—AIRWORTHINESS DIRECTIVES
    1. The authority citation for part 39 continues to read as follows:
    Authority:

    49 U.S.C. 106(g), 40113, 44701.

    § 39.13 [Amended]
    2. The FAA amends § 39.13 by adding the following new AD:
    Pilatus Aircraft Ltd.: Docket No. FAA-2016-9357; Directorate Identifier 2016-CE-030-AD.
    (a) Comments Due Date

    We must receive comments by December 19, 2016.

    (b) Affected ADs

    None.

    (c) Applicability

    This AD applies to PILATUS Models PC-6, PC-6-H1, PC-6-H2, PC-6/350, PC-6/350-H1, PC-6/350-H2, PC-6/A, PC-6/A-H1, PC-6/A-H2, PC-6/B-H2, PC-6/B1-H2, PC-6/B2-H2, PC-6/B2-H4, PC-6/C-H2, and PC-6/C1-H2 airplanes, all manufacturer serial numbers (MSN), including MSN 2001 through 2092 (see Note 1 of paragraph (c)), certificated in any category.

    Note 1 of paragraph (c):

    For MSN 2001-2092, these airplanes are also identified as Fairchild Republic Company PC-6 airplanes, Fairchild Industries PC-6 airplanes, Fairchild Heli Porter PC-6 airplanes, or Fairchild-Hiller Corporation PC-6 airplanes.

    (d) Subject

    Air Transport Association of America (ATA) Code 57: Wings.

    (e) Reason

    This AD was prompted by certain combinations of the aileron counterweight and the attaching parts possibly resulting in reduced thread engagement and leading to disconnection of the aileron counterweight from the aileron. We are issuing this proposed AD to prevent disconnection of the aileron counterweight from the aileron, which could result in loss of control.

    (f) Actions and Compliance

    Unless already done, do the following actions as specified in paragraphs (f)(1) and (2) of this AD:

    (1) Within the next 12 months after the effective date of this AD or the next time the ailerons or aileron counterweights are removed or installed, whichever occurs first, and thereafter anytime the ailerons or aileron counterweights are removed or installed, remove each aileron counterweight to inspect the type and number of washers required for the installation of a counterweight on each aileron following the accomplishment instructions of paragraphs 3.B.(2) and 3.B.(3) of Pilatus PC-6 Service Bulletin (SB) No. 57-006, dated May 13, 2016.

    (2) Before further flight after the inspection required by paragraph (f)(1) of this AD, reinstall each aileron counterweight on the airplane following the accomplishment instructions of paragraph 3.B.(3) of Pilatus PC-6 SB No. 57-006, dated May 13, 2016.

    (g) Other FAA AD Provisions

    The following provisions also apply to this AD:

    (1) Alternative Methods of Compliance (AMOCs): The Manager, Standards Office, FAA, has the authority to approve AMOCs for this AD, if requested using the procedures found in 14 CFR 39.19. Send information to ATTN: Doug Rudolph, Aerospace Engineer, FAA, Small Airplane Directorate, 901 Locust, Room 301, Kansas City, Missouri 64106; telephone: (816) 329-4059; fax: (816) 329-4090; email: [email protected]. Before using any approved AMOC on any airplane to which the AMOC applies, notify your appropriate principal inspector (PI) in the FAA Flight Standards District Office (FSDO), or lacking a PI, your local FSDO.

    (2) Airworthy Product: For any requirement in this AD to obtain corrective actions from a manufacturer or other source, use these actions if they are FAA-approved. Corrective actions are considered FAA-approved if they are approved by the State of Design Authority (or their delegated agent). You are required to assure the product is airworthy before it is returned to service.

    (h) Related Information

    Refer to MCAI EASA AD No.: 2016-0183, dated September 13, 2016, for related information. You may examine the MCAI on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-9357. For service information related to this AD, contact Pilatus Aircraft Ltd., Customer Technical Support (MCC), P.O. Box 992, CH-6371 Stans, Switzerland; phone: +41 (0)41 619 3333; fax: +41 (0)41 619 7311; email: [email protected]; Internet: http://www.pilatus-aircraft.com. You may review this referenced service information at the FAA, Small Airplane Directorate, 901 Locust, Kansas City, Missouri 64106. For information on the availability of this material at the FAA, call (816) 329-4148.

    Issued in Kansas City, Missouri, on October 27, 2016.
    Pat Mullen,
    Acting Manager, Small Airplane Directorate, Aircraft Certification Service.
    [FR Doc. 2016-26429 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
    DEPARTMENT OF TRANSPORTATION
    Federal Aviation Administration
    14 CFR Part 39
    [Docket No. FAA-2016-7850; Directorate Identifier 2016-NE-16-AD]
    RIN 2120-AA64
    Airworthiness Directives; Turbomeca S.A. Turboshaft Engines
    AGENCY:

    Federal Aviation Administration (FAA), DOT.

    ACTION:

    Notice of proposed rulemaking (NPRM).

    SUMMARY:

    We propose to adopt a new airworthiness directive (AD) for all Turbomeca S.A. Arriel 2B turboshaft engines. This proposed AD was prompted by a report of an uncommanded in-flight shutdown (IFSD) on a single-engine helicopter, caused by a low returning spring rate of the needle of the hydro-mechanical metering unit (HMU). This proposed AD would require removing any pre-modification (mod) TU 158 HMU and replacing with a part eligible for installation. We are proposing this AD to prevent failure of the HMU, failure of the engine, IFSD, and loss of the helicopter.

    DATES:

    We must receive comments on this NPRM by December 19, 2016.

    ADDRESSES:

    You may send comments by any of the following methods:

    Federal eRulemaking Portal: Go to http://www.regulations.gov. Follow the instructions for submitting comments.

    Mail: Docket Management Facility, U.S. Department of Transportation, 1200 New Jersey Avenue SE., West Building Ground Floor, Room W12-140, Washington, DC 20590-0001.

    Hand Delivery: Deliver to Mail address above between 9 a.m. and 5 p.m., Monday through Friday, except Federal holidays.

    Fax: 202-493-2251.

    For service information identified in this proposed AD, contact Turbomeca S.A., 40220 Tarnos, France; phone: (33) 05 59 74 40 00; fax: (33) 05 59 74 45 15. You may view this service information at the FAA, Engine & Propeller Directorate, 1200 District Avenue, Burlington, MA. For information on the availability of this material at the FAA, call 781-238-7125. It is also available on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-7850.

    Examining the AD Docket

    You may examine the AD docket on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-7850; or in person at the Docket Operations office between 9 a.m. and 5 p.m., Monday through Friday, except Federal holidays. The AD docket contains this proposed AD, the mandatory continuing airworthiness information (MCAI), the regulatory evaluation, any comments received, and other information. The address for the Docket Office (phone: 800-647-5527) is in the ADDRESSES section. Comments will be available in the AD docket shortly after receipt.

    FOR FURTHER INFORMATION CONTACT:

    Kenneth Steeves, Aerospace Engineer, Engine Certification Office, FAA, Engine & Propeller Directorate, 1200 District Avenue, Burlington, MA 01803; phone: 781-238-7765; fax: 781-238-7199; email: [email protected].

    SUPPLEMENTARY INFORMATION:

    Comments Invited

    We invite you to send any written relevant data, views, or arguments about this proposed AD. Send your comments to an address listed under the ADDRESSES section. Include “Docket No. FAA-2016-7850; Directorate Identifier 2016-NE-16-AD” at the beginning of your comments. We specifically invite comments on the overall regulatory, economic, environmental, and energy aspects of this proposed AD. We will consider all comments received by the closing date and may amend this proposed AD based on those comments.

    We will post all comments we receive, without change, to http://www.regulations.gov, including any personal information you provide. We will also post a report summarizing each substantive verbal contact with FAA personnel concerning this proposed AD.

    Discussion

    The European Aviation Safety Agency (EASA), which is the Technical Agent for the Member States of the European Community, has issued EASA AD 2016-0098, dated May 23, 2016 (referred to hereinafter as “the MCAI”), to correct an unsafe condition for the specified products. The MCAI states:

    Following a report of an un-commanded in-flight shut-down (IFSD), Turbomeca carried out an engineering investigation. This investigation concluded that the cause of the event was a low returning spring rate of the needle of the hydro-mechanical metering unit (HMU), which enabled needle oscillation during rapid engine deceleration.

    This condition, if not corrected, could lead to further cases of IFSD, possibly resulting in an emergency landing on single engine.

    To address this potential unsafe condition, Turbomeca developed modification (Mod) TU 158, which increases needle return spring rate to prevent oscillation during rapid deceleration, thus preventing the risk of un-commanded IFSD. Turbomeca also published Mandatory Service Bulletin (MSB) 292 73 3158 for embodiment of this modification in service.

    You may obtain further information by examining the MCAI in the AD docket on the Internet at http://www.regulations.gov by searching for and locating Docket No. FAA-2016-7850.

    Related Service Information Under 1 CFR Part 51

    Turbomeca S.A. has issued MSB No. 292 73 3158, Version A, dated April 7, 2016. The MSB describes procedures for removing the pre-mod TU 158 HMU and replacing with an HMU modified with mod TU 158. This service information is reasonably available because the interested parties have access to it through their normal course of business or by the means identified in the ADDRESSES section.

    FAA's Determination and Requirements of This Proposed AD

    This product has been approved by the aviation authority of France, and is approved for operation in the United States. Pursuant to our bilateral agreement with the European Community, EASA has notified us of the unsafe condition described in the MCAI and service information referenced above. We are proposing this AD because we evaluated all information provided by EASA and determined the unsafe condition exists and is likely to exist or develop on other products of the same type design. This AD would require removing the pre-mod TU 158 HMU and replacing with a part eligible for installation.

    Costs of Compliance

    We estimate that this proposed AD would affect 124 engines installed on helicopters of U.S. registry. We also estimate that it would take about 2.0 hours per engine to comply with this proposed AD. The average labor rate is $85 per hour. Based on these figures, we estimate the cost of this proposed AD on U.S. operators to be $21,080.
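    The fleet-wide figure above follows from the stated assumptions (124 engines, 2.0 labor hours per engine at $85 per hour, with no parts cost stated). The short Python sketch below is illustrative only and is not part of the proposed AD; the variable names are ours, chosen for exposition.

```python
# Illustrative check of the cost-of-compliance estimate stated above.
# Variable names are for exposition only; the figures come from the NPRM text.
affected_engines = 124      # engines on helicopters of U.S. registry
hours_per_engine = 2.0      # labor hours per engine
labor_rate = 85             # dollars per hour

fleet_total = affected_engines * hours_per_engine * labor_rate   # 124 * 2.0 * 85 = 21,080
print(f"U.S. fleet: ${fleet_total:,.0f}")   # U.S. fleet: $21,080
```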

    Authority for This Rulemaking

    Title 49 of the United States Code specifies the FAA's authority to issue rules on aviation safety. Subtitle I, section 106, describes the authority of the FAA Administrator. “Subtitle VII: Aviation Programs,” describes in more detail the scope of the Agency's authority.

    We are issuing this rulemaking under the authority described in “Subtitle VII, Part A, Subpart III, Section 44701: General requirements.” Under that section, Congress charges the FAA with promoting safe flight of civil aircraft in air commerce by prescribing regulations for practices, methods, and procedures the Administrator finds necessary for safety in air commerce. This regulation is within the scope of that authority because it addresses an unsafe condition that is likely to exist or develop on products identified in this rulemaking action.

    Regulatory Findings

    We determined that this proposed AD would not have federalism implications under Executive Order 13132. This proposed AD would not have a substantial direct effect on the States, on the relationship between the national Government and the States, or on the distribution of power and responsibilities among the various levels of government.

    For the reasons discussed above, I certify this proposed regulation:

    (1) Is not a “significant regulatory action” under Executive Order 12866,

    (2) Is not a “significant rule” under the DOT Regulatory Policies and Procedures (44 FR 11034, February 26, 1979),

    (3) Will not affect intrastate aviation in Alaska to the extent that it justifies making a regulatory distinction, and

    (4) Will not have a significant economic impact, positive or negative, on a substantial number of small entities under the criteria of the Regulatory Flexibility Act.

    List of Subjects in 14 CFR Part 39

    Air transportation, Aircraft, Aviation safety, Incorporation by reference, Safety.

    The Proposed Amendment

    Accordingly, under the authority delegated to me by the Administrator, the FAA proposes to amend 14 CFR part 39 as follows:

    PART 39—AIRWORTHINESS DIRECTIVES
    1. The authority citation for part 39 continues to read as follows:
    Authority:

    49 U.S.C. 106(g), 40113, 44701.

    § 39.13 [Amended]
    2. The FAA amends § 39.13 by adding the following new airworthiness directive (AD):
    Turbomeca S.A.: Docket No. FAA-2016-7850; Directorate Identifier 2016-NE-16-AD.
    (a) Comments Due Date

    We must receive comments by December 19, 2016.

    (b) Affected ADs

    None.

    (c) Applicability

    This AD applies to all Turbomeca S.A. Arriel 2B turboshaft engines with a pre-modification (mod) TU 158 hydro-mechanical metering unit (HMU) installed.

    (d) Reason

    This AD was prompted by a report of an uncommanded in-flight shutdown (IFSD) on a single-engine helicopter caused by a low returning spring rate of the needle of the HMU. We are issuing this AD to prevent failure of the HMU, failure of the engine, IFSD, and loss of the helicopter.

    (e) Actions and Compliance

    Comply with this AD within the compliance times specified, unless already done.

    (1) For an engine in pre-mod TU 158 configuration, within 200 engine hours, or within 5 months, whichever occurs first after the effective date of this AD, remove the pre-mod TU 158 HMU from service and replace with a part eligible for installation.

    (2) Reserved.

    (f) Installation Prohibition

    After the effective date of the AD, do not install any pre-mod TU 158 HMU into any engine.

    (g) Definition

    For the purpose of this AD, an HMU eligible for installation is one that incorporates mod TU 158 in accordance with Turbomeca MSB No. 292 73 3158, Version A, dated April 7, 2016, or another FAA-approved part.

    (h) Alternative Methods of Compliance (AMOCs)

    The Manager, Engine Certification Office, may approve AMOCs for this AD. Use the procedures found in 14 CFR 39.19 to make your request. You may email your request to: [email protected].

    (i) Related Information

    (1) For more information about this AD, contact Kenneth Steeves, Aerospace Engineer, Engine Certification Office, FAA, Engine & Propeller Directorate, 1200 District Avenue, Burlington, MA 01803; phone: 781-238-7765; fax: 781-238-7199; email: [email protected].

    (2) Refer to MCAI European Aviation Safety Agency (EASA), AD 2016-0098, dated May 23, 2016, for more information. You may examine the MCAI in the AD docket on the Internet at http://www.regulations.gov by searching for and locating it in Docket No. FAA-2016-7850.

    (3) Turbomeca Mandatory Service Bulletin MSB No. 292 73 3158, Version A, dated April 7, 2016, can be obtained from Turbomeca S.A., using the contact information in paragraph (i)(4) of this proposed AD.

    (4) For service information identified in this proposed AD, contact Turbomeca S.A., 40220 Tarnos, France; phone: (33) 05 59 74 40 00; fax: (33) 05 59 74 45 15.

    (5) You may view this service information at the FAA, Engine & Propeller Directorate, 1200 District Avenue, Burlington, MA. For information on the availability of this material at the FAA, call 781-238-7125.

    Issued in Burlington, Massachusetts, on October 25, 2016.
    Colleen M. D'Alessandro,
    Manager, Engine & Propeller Directorate, Aircraft Certification Service.
    [FR Doc. 2016-26335 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
    DEPARTMENT OF TRANSPORTATION
    Federal Aviation Administration
    14 CFR Part 71
    [Docket No. FAA-2016-8163; Airspace Docket No. 16-ANM-2]
    Proposed Establishment of Class E Airspace, Thermopolis, WY
    AGENCY:

    Federal Aviation Administration (FAA), DOT.

    ACTION:

    Notice of Proposed Rulemaking (NPRM).

    SUMMARY:

    This action proposes to establish Class E airspace extending upward from 700 feet above the surface at Hot Springs County Airport, Thermopolis, WY, to support the development of Instrument Flight Rules (IFR) operations under standard instrument approach and departure procedures at the airport, for the safety and management of aircraft within the National Airspace System.

    DATES:

    Comments must be received on or before December 19, 2016.

    ADDRESSES:

    Send comments on this proposal to the U.S. Department of Transportation, Docket Operations, 1200 New Jersey Avenue SE., West Building Ground Floor, Room W12-140, Washington, DC 20590; telephone: 1-800-647-5527, or (202) 366-9826. You must identify FAA Docket No. FAA-2016-8163; Airspace Docket No. 16-ANM-2, at the beginning of your comments. You may also submit comments through the Internet at http://www.regulations.gov.

    FAA Order 7400.11A, Airspace Designations and Reporting Points, and subsequent amendments can be viewed online at http://www.faa.gov/air_traffic/publications/. For further information, you can contact the Airspace Policy Group, Federal Aviation Administration, 800 Independence Avenue SW., Washington, DC 20591; telephone: 202-267-8783. The Order is also available for inspection at the National Archives and Records Administration (NARA). For information on the availability of FAA Order 7400.11A at NARA, call 202-741-6030, or go to http://www.archives.gov/federal_register/code_of_federal-regulations/ibr_locations.html.

    FAA Order 7400.11, Airspace Designations and Reporting Points, is published yearly and effective on September 15.

    FOR FURTHER INFORMATION CONTACT:

    Tom Clark, Federal Aviation Administration, Operations Support Group, Western Service Center, 1601 Lind Avenue SW., Renton, WA 98057; telephone (425) 203-4511.

    SUPPLEMENTARY INFORMATION:

    Authority for This Rulemaking

    The FAA's authority to issue rules regarding aviation safety is found in Title 49 of the United States Code. Subtitle I, Section 106 describes the authority of the FAA Administrator. Subtitle VII, Aviation Programs, describes in more detail the scope of the agency's authority. This rulemaking is promulgated under the authority described in Subtitle VII, Part A, Subpart I, Section 40103. Under that section, the FAA is charged with prescribing regulations to assign the use of airspace necessary to ensure the safety of aircraft and the efficient use of airspace. This regulation is within the scope of that authority as it would establish Class E airspace at Hot Springs County Airport, Thermopolis, WY.

    Comments Invited

    Interested parties are invited to participate in this proposed rulemaking by submitting such written data, views, or arguments, as they may desire. Comments that provide the factual basis supporting the views and suggestions presented are particularly helpful in developing reasoned regulatory decisions on the proposal. Comments are specifically invited on the overall regulatory, aeronautical, economic, environmental, and energy-related aspects of the proposal. Communications should identify both docket numbers and be submitted in triplicate to the address listed above. Persons wishing the FAA to acknowledge receipt of their comments on this notice must submit with those comments a self-addressed, stamped postcard on which the following statement is made: “Comments to Docket No. FAA-2016-8163/Airspace Docket No. 16-ANM-2”. The postcard will be date/time stamped and returned to the commenter.

    All communications received before the specified closing date for comments will be considered before taking action on the proposed rule. The proposal contained in this notice may be changed in light of the comments received. A report summarizing each substantive public contact with FAA personnel concerned with this rulemaking will be filed in the docket.

    Availability of NPRMs

    An electronic copy of this document may be downloaded through the Internet at http://www.regulations.gov. Recently published rulemaking documents can also be accessed through the FAA's Web page at http://www.faa.gov/air_traffic/publications/airspace_amendments/.

    You may review the public docket containing the proposal, any comments received, and any final disposition in person in the Dockets Office (see the ADDRESSES section for the address and phone number) between 9:00 a.m. and 5:00 p.m., Monday through Friday, except federal holidays. An informal docket may also be examined during normal business hours at the Northwest Mountain Regional Office of the Federal Aviation Administration, Air Traffic Organization, Western Service Center, Operations Support Group, 1601 Lind Avenue SW., Renton, WA 98057.

    Availability and Summary of Documents Proposed for Incorporation by Reference

    This document proposes to amend FAA Order 7400.11A, Airspace Designations and Reporting Points, dated August 3, 2016, and effective September 15, 2016. FAA Order 7400.11A is publicly available as listed in the ADDRESSES section of this document. FAA Order 7400.11A lists Class A, B, C, D, and E airspace areas, air traffic service routes, and reporting points.

    The Proposal

    The FAA is proposing an amendment to Title 14 Code of Federal Regulations (14 CFR) Part 71 by establishing Class E airspace extending upward from 700 feet above the surface at Hot Springs County Airport, Thermopolis, WY. Class E airspace would be established within a 4.8-mile radius of Hot Springs County Airport, with segments extending from the 4.8-mile radius to 5.5 miles northeast of the airport, and to 7 miles southwest of the airport. This airspace is necessary to support the development of IFR operations under standard instrument approach and departure procedures at the airport.

    Class E airspace designations are published in paragraph 6005 of FAA Order 7400.11A, dated August 3, 2016, and effective September 15, 2016, which is incorporated by reference in 14 CFR 71.1. The Class E airspace designations listed in this document will be published subsequently in the Order.

    Regulatory Notices and Analyses

    The FAA has determined that this regulation only involves an established body of technical regulations for which frequent and routine amendments are necessary to keep them operationally current, is non-controversial and unlikely to result in adverse or negative comments. It, therefore: (1) Is not a “significant regulatory action” under Executive Order 12866; (2) is not a “significant rule” under DOT Regulatory Policies and Procedures (44 FR 11034; February 26, 1979); and (3) does not warrant preparation of a regulatory evaluation as the anticipated impact is so minimal. Since this is a routine matter that will only affect air traffic procedures and air navigation, it is certified that this rule, when promulgated, would not have a significant economic impact on a substantial number of small entities under the criteria of the Regulatory Flexibility Act.

    Environmental Review

    This proposal will be subject to an environmental analysis in accordance with FAA Order 1050.1F, “Environmental Impacts: Policies and Procedures” prior to any FAA final regulatory action.

    List of Subjects in 14 CFR Part 71

    Airspace, Incorporation by reference, Navigation (air).

    The Proposed Amendment

    Accordingly, pursuant to the authority delegated to me, the Federal Aviation Administration proposes to amend 14 CFR part 71 as follows:

    PART 71—DESIGNATION OF CLASS A, B, C, D, AND E AIRSPACE AREAS; AIR TRAFFIC SERVICE ROUTES; AND REPORTING POINTS
    1. The authority citation for 14 CFR Part 71 continues to read as follows:
    Authority:

    49 U.S.C. 106(f), 106(g), 40103, 40113, 40120; E.O. 10854, 24 FR 9565, 3 CFR, 1959-1963 Comp., p. 389.

    § 71.1 [Amended]
    2. The incorporation by reference in 14 CFR 71.1 of FAA Order 7400.11A, Airspace Designations and Reporting Points, dated August 3, 2016, and effective September 15, 2016, is amended as follows:
    Paragraph 6005 Class E Airspace Areas Extending Upward From 700 Feet or More Above the Surface of the Earth.
    ANM WY E5 Thermopolis, WY [New]
    Hot Springs County Airport, WY
    (Lat. 43°42′49″ N., long. 108°23′23″ W.)

    That airspace extending upward from 700 feet above the surface within a 4.8-mile radius of Hot Springs County Airport, and within 4.8 miles each side of the airport 230° bearing extending from the 4.8-mile radius to 7 miles southwest of the airport, and within 1.8 miles each side of the airport 055° bearing extending from the 4.8-mile radius to 5.5 miles northeast of the airport.

    Issued in Seattle, Washington, on October 24, 2016.
    Tracey Johnson,
    Manager, Operations Support Group, Western Service Center.
    [FR Doc. 2016-26440 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
    DEPARTMENT OF TRANSPORTATION
    Federal Aviation Administration
    14 CFR Part 71
    [Docket No. FAA-2016-9102; Airspace Docket No. 16-AEA-6]
    Proposed Amendment of Class E Airspace, Monongahela, PA
    AGENCY:

    Federal Aviation Administration (FAA), DOT.

    ACTION:

    Notice of proposed rulemaking (NPRM).

    SUMMARY:

    This action proposes to amend Class E airspace at Monongahela, PA, as the Allegheny VHF Omni-directional Range (VOR) has been decommissioned, requiring airspace reconfiguration at Rostraver Airport. Controlled airspace is necessary for the safety and management of instrument flight rules (IFR) operations at the airport.

    DATES:

    Comments must be received on or before December 19, 2016.

    ADDRESSES:

    Send comments on this proposal to: U.S. Department of Transportation, Docket Operations, 1200 New Jersey Avenue SE., West Building Ground Floor, Room W12-140, Washington, DC 20590; Telephone: 1-800-647-5527, or (202) 366-9826. You must identify the Docket No. FAA-2016-9102; Airspace Docket No. 16-AEA-6, at the beginning of your comments. You may also submit and review received comments through the Internet at http://www.regulations.gov. You may review the public docket containing the proposal, any comments received, and any final disposition in person in the Dockets Office between 9:00 a.m. and 5:00 p.m., Monday through Friday, except Federal holidays.

    FAA Order 7400.11A, Airspace Designations and Reporting Points, and subsequent amendments can be viewed on line at http://www.faa.gov/air_traffic/publications/. For further information, you can contact the Airspace Policy Group, Federal Aviation Administration, 800 Independence Avenue SW., Washington, DC 20591; telephone: 202-267-8783. The Order is also available for inspection at the National Archives and Records Administration (NARA). For information on the availability of FAA Order 7400.11A at NARA, call 202-741-6030, or go to http://www.archives.gov/federal_register/code_of_federal-regulations/ibr_locations.html.

    FAA Order 7400.11, Airspace Designations and Reporting Points, is published yearly and effective on September 15.

    FOR FURTHER INFORMATION CONTACT:

    John Fornito, Operations Support Group, Eastern Service Center, Federal Aviation Administration, P.O. Box 20636, Atlanta, Georgia 30320; telephone (404) 305-6364.

    SUPPLEMENTARY INFORMATION:

    Authority for This Rulemaking

    The FAA's authority to issue rules regarding aviation safety is found in Title 49 of the United States Code. Subtitle I, Section 106 describes the authority of the FAA Administrator. Subtitle VII, Aviation Programs, describes in more detail the scope of the agency's authority. This rulemaking is promulgated under the authority described in Subtitle VII, Part A, Subpart I, Section 40103. Under that section, the FAA is charged with prescribing regulations to assign the use of airspace necessary to ensure the safety of aircraft and the efficient use of airspace. This regulation is within the scope of that authority as it would amend Class E airspace at Rostraver Airport, Monongahela, PA.

    Comments Invited

    Interested persons are invited to comment on this proposed rule by submitting such written data, views, or arguments, as they may desire. Comments that provide the factual basis supporting the views and suggestions presented are particularly helpful in developing reasoned regulatory decisions on the proposal. Comments are specifically invited on the overall regulatory, aeronautical, economic, environmental, and energy-related aspects of the proposal.

    Communications should identify both docket numbers and be submitted in triplicate to the address listed above. You may also submit comments through the Internet at http://www.regulations.gov.

    Persons wishing the FAA to acknowledge receipt of their comments on this action must submit with those comments a self-addressed stamped postcard on which the following statement is made: “Comments to Docket No. FAA-2016-9102; Airspace Docket No. 16-AEA-6.” The postcard will be date/time stamped and returned to the commenter.

    All communications received before the specified closing date for comments will be considered before taking action on the proposed rule. The proposal contained in this notice may be changed in light of the comments received. A report summarizing each substantive public contact with FAA personnel concerned with this rulemaking will be filed in the docket.

    Availability of NPRMs

    An electronic copy of this document may be downloaded through the internet at http://www.regulations.gov. Recently published rulemaking documents can also be accessed through the FAA's Web page at http://www.faa.gov/air_traffic/publications/airspace_amendments/.

    You may review the public docket containing the proposal, any comments received, and any final disposition in person in the Dockets Office (see the ADDRESSES section for address and phone number) between 9:00 a.m. and 5:00 p.m., Monday through Friday, except Federal Holidays. An informal docket may also be examined between 8:00 a.m. and 4:30 p.m., Monday through Friday, except Federal Holidays at the office of the Eastern Service Center, Federal Aviation Administration, room 350, 1701 Columbia Avenue, College Park, Georgia 30337.

    Availability and Summary of Documents for Incorporation by Reference

    This document proposes to amend FAA Order 7400.11A, Airspace Designations and Reporting Points, dated August 3, 2016, and effective September 15, 2016. FAA Order 7400.11A is publicly available as listed in the ADDRESSES section of this document. FAA Order 7400.11A lists Class A, B, C, D, and E airspace areas, air traffic service routes, and reporting points.

    The Proposal

    The FAA is considering an amendment to Title 14, Code of Federal Regulations (14 CFR) Part 71 to amend Class E airspace extending upward from 700 feet or more above the surface within a 6.5-mile radius of Rostraver Airport, Monongahela, PA, due to the decommissioning of the Allegheny VOR, and to ensure the safety and management of the modified IFR operations at the airport.

    Class E airspace designations are published in Paragraph 6005 of FAA Order 7400.11A, dated August 3, 2016, and effective September 15, 2016, which is incorporated by reference in 14 CFR 71.1. The Class E airspace designation listed in this document will be published subsequently in the Order.

    Regulatory Notices and Analyses

    The FAA has determined that this proposed regulation only involves an established body of technical regulations for which frequent and routine amendments are necessary to keep them operationally current. It, therefore: (1) is not a “significant regulatory action” under Executive Order 12866; (2) is not a “significant rule” under DOT Regulatory Policies and Procedures (44 FR 11034; February 26, 1979); and (3) does not warrant preparation of a Regulatory Evaluation as the anticipated impact is so minimal. Since this is a routine matter that will only affect air traffic procedures and air navigation, it is certified that this proposed rule, when promulgated, will not have a significant economic impact on a substantial number of small entities under the criteria of the Regulatory Flexibility Act.

    Environmental Review

    This proposal would be subject to an environmental analysis in accordance with FAA Order 1050.1F, “Environmental Impacts: Policies and Procedures” prior to any FAA final regulatory action.

    Lists of Subjects in 14 CFR Part 71

    Airspace, Incorporation by reference, Navigation (Air).

    The Proposed Amendment

    In consideration of the foregoing, the Federal Aviation Administration proposes to amend 14 CFR part 71 as follows:

    PART 71—DESIGNATION OF CLASS A, B, C, D, AND E AIRSPACE AREAS; AIR TRAFFIC SERVICE ROUTES; AND REPORTING POINTS
    1. The authority citation for Part 71 continues to read as follows:
    Authority:

    49 U.S.C. 106(f), 106(g), 40103, 40113, 40120; E.O. 10854, 24 FR 9565, 3 CFR, 1959-1963 Comp., p. 389.

    § 71.1 [Amended]
    2. The incorporation by reference in 14 CFR 71.1 of FAA Order 7400.11A, Airspace Designations and Reporting Points, dated August 3, 2016, effective September 15, 2016, is amended as follows:
    Paragraph 6005 Class E Airspace Areas Extending Upward From 700 Feet or More Above the Surface of the Earth.
    AEA PA E5 Monongahela, PA [Amended]
    Rostraver Airport, Monongahela, PA
    (Lat. 40°12′35″ N., long. 79°49′53″ W.)

    That airspace extending upward from 700 feet above the surface within a 6.5-mile radius of Rostraver Airport.

    Issued in College Park, Georgia, on October 21, 2016.
    Ryan W. Almasy,
    Manager, Operations Support Group, Eastern Service Center, Air Traffic Organization.
    [FR Doc. 2016-26436 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
    DEPARTMENT OF HOMELAND SECURITY
    Coast Guard
    33 CFR Part 117
    [Docket No. USCG-2016-0963]
    RIN 1625-AA09
    Drawbridge Operation Regulations; Tchefuncta River, Madisonville, LA
    AGENCY:

    Coast Guard, DHS.

    ACTION:

    Notice of proposed rulemaking.

    SUMMARY:

    The Coast Guard proposes to modify the operating schedule that governs the State Route 22 Bridge (Madisonville (SR22) swing span bridge) across the Tchefuncta River, mile 2.5, at Madisonville, St. Tammany Parish, Louisiana. The Louisiana Department of Transportation and Development requested changes to the present drawbridge operating regulations governing the SR 22 swing span bridge, to enhance the flow of vehicle traffic across the bridge.

    DATES:

    Comments and related material must reach the Coast Guard on or before January 18, 2017.

    ADDRESSES:

    You may submit comments identified by docket number USCG-2016-0963 using the Federal eRulemaking Portal at http://www.regulations.gov. See the “Public Participation and Request for Comments” portion of the SUPPLEMENTARY INFORMATION section below for instructions on submitting comments.

    FOR FURTHER INFORMATION CONTACT:

    If you have questions on this proposed rule, call or email David Frank, Bridge Administrator, at 504-671-2128, email [email protected].

    SUPPLEMENTARY INFORMATION:
    I. Table of Abbreviations
    CFR Code of Federal Regulations
    DHS Department of Homeland Security
    FR Federal Register
    NPRM Notice of proposed rulemaking
    SNPRM Supplemental notice of proposed rulemaking
    Pub. L. Public Law
    § Section
    U.S.C. United States Code
    LDOTD Louisiana Department of Transportation and Development
    SR State Route
    MHW Mean High Water
    II. Background, Purpose and Legal Basis

    Local governmental officials from St. Tammany Parish and the City of Madisonville, in conjunction with the Louisiana Department of Transportation and Development (LDOTD), requested that the operating regulation of the SR 22 Bridge, a swing span bridge, be changed to better accommodate the increased vehicular traffic crossing the bridge, especially during peak weekday rush hours. Currently, this bridge is governed under 33 CFR 117.500. The current regulation was created to allow for improved vehicular traffic flow during peak rush hours due to the increased population of the western portions of St. Tammany Parish.

    Based on a recent study of the current vehicle traffic crossing the bridge, public officials and LDOTD requested that the operating regulation be changed to better meet current bridge use.

    The traffic study conducted by the LDOTD determined that the existing vehicular traffic at the intersection of SR 22 and SR 21/SR 1077 is over capacity at peak hours and causes unacceptable levels of delay to roadway traffic. This situation is compounded by the opening of the bridge during these peak hours. A combination of modifications to the operating schedule of the bridge and modifications to the traffic controls at this intersection will improve traffic flow and reduce traffic delays. As the largest commercial facility upstream of the bridge is no longer in service, most of the vessels that request openings are recreational powerboats and sailboats that routinely transit this waterway and should be able to adjust their schedules to coincide with the proposed drawbridge operating schedule. The SR 22 swing bridge has a vertical clearance of 6.2 feet above Mean High Water (MHW) in the closed-to-navigation position and unlimited clearance in the open-to-navigation position.

    Concurrent with the publication of the Notice of Proposed Rulemaking (NPRM), a Test Deviation [USCG-2016-0963] has been issued to allow the LDOTD to test the proposed schedule and to obtain data and public comments. The test period will be in effect during the entire NPRM comment period. The Coast Guard will review the drawbridge logs and the traffic counts provided by LDOTD, and will evaluate public comments from this NPRM and the above-referenced test deviation, to determine whether the requested change to the permanent special drawbridge operating regulation is warranted.

    III. Discussion of Proposed Rule

    This rule proposes to amend 33 CFR 117.500. The proposed change would extend the time between openings from 30 minutes to one hour between 6 a.m. and 7 p.m., and would not require the bridge to open for the passage of vessels at 8 a.m., 5 p.m., and 6 p.m. during weekday rush hours. This additional time would allow commuters and school buses to cross the bridge freely and prevent vehicular traffic from backing up for over a mile on SR 22. The bridge would open at any time in the case of an emergency.

    Approximately 7,500 vehicles cross the bridge daily between the hours of 6 a.m. and 7 p.m. Bridge logs for the month of July indicate that the bridge opened to pass vessels 118 times on weekdays and 202 times on weekend days. Openings for the month of August dropped to 68 on weekdays and 85 on weekend days.

    Traffic studies have indicated a significant increase in highway traffic delays caused by bridge openings. The vessel traffic requesting openings consists mainly of recreational vessels that presently pass through the bridge on scheduled openings and can adjust their schedules to accommodate the needs of land transportation. There are no alternate routes available for vessels that wish to transit the bridge site; however, vessels with a vertical clearance requirement of less than 6.2 feet above MHW may transit the bridge site at any time.

    IV. Regulatory Analyses

    We developed this proposed rule after considering numerous statutes and Executive Orders related to rulemaking. Below we summarize our analyses based on these statutes and Executive Orders and we discuss First Amendment rights of protestors.

    A. Regulatory Planning and Review

    Executive Orders 12866 and 13563 direct agencies to assess the costs and benefits of available regulatory alternatives and, if regulation is necessary, to select regulatory approaches that maximize net benefits. Executive Order 13563 emphasizes the importance of quantifying both costs and benefits, of reducing costs, of harmonizing rules, and of promoting flexibility. This NPRM has not been designated a “significant regulatory action” under Executive Order 12866. Accordingly, the NPRM has not been reviewed by the Office of Management and Budget.

    This regulatory action determination is based on the reduction of commercial vessel traffic on this waterway and on the fact that the recreational powerboats and sailboats that routinely transit this waterway can still transit the bridge under the proposed schedule. In addition, vessels with a vertical clearance requirement of less than 6.2 feet above MHW may transit the bridge site at any time. This regulatory action takes into account the reasonable needs of vessel and vehicular traffic.

    B. Impact on Small Entities

    The Regulatory Flexibility Act of 1980 (RFA), 5 U.S.C. 601-612, as amended, requires federal agencies to consider the potential impact of regulations on small entities during rulemaking. The term “small entities” comprises small businesses, not-for-profit organizations that are independently owned and operated and are not dominant in their fields, and governmental jurisdictions with populations of less than 50,000. The Coast Guard certifies under 5 U.S.C. 605(b) that this proposed rule would not have a significant economic impact on a substantial number of small entities. While some owners or operators of vessels intending to transit the bridge may be small entities, for the reasons stated in section IV.A above, this proposed rule would not have a significant economic impact on any vessel owner or operator.

    If you think that your business, organization, or governmental jurisdiction qualifies as a small entity and that this rule would have a significant economic impact on it, please submit a comment (see ADDRESSES) explaining why you think it qualifies and how and to what degree this rule would economically affect it.

    Under section 213(a) of the Small Business Regulatory Enforcement Fairness Act of 1996 (Pub. L. 104-121), we want to assist small entities in understanding this proposed rule. If the rule would affect your small business, organization, or governmental jurisdiction and you have questions concerning its provisions or options for compliance, please contact the person listed in the FOR FURTHER INFORMATION CONTACT, above. The Coast Guard will not retaliate against small entities that question or complain about this proposed rule or any policy or action of the Coast Guard.

    C. Collection of Information

    This proposed rule would call for no new collection of information under the Paperwork Reduction Act of 1995 (44 U.S.C. 3501-3520).

    D. Federalism and Indian Tribal Government

    A rule has implications for federalism under Executive Order 13132, Federalism, if it has a substantial direct effect on the States, on the relationship between the national government and the States, or on the distribution of power and responsibilities among the various levels of government. We have analyzed this proposed rule under that Order and have determined that it is consistent with the fundamental federalism principles and preemption requirements described in Executive Order 13132.

    Also, this proposed rule does not have tribal implications under Executive Order 13175, Consultation and Coordination with Indian Tribal Governments, because it would not have a substantial direct effect on one or more Indian tribes, on the relationship between the Federal Government and Indian tribes, or on the distribution of power and responsibilities between the Federal Government and Indian tribes. If you believe this proposed rule has implications for federalism or Indian tribes, please contact the person listed in the FOR FURTHER INFORMATION CONTACT section above.

    E. Unfunded Mandates Reform Act

    The Unfunded Mandates Reform Act of 1995 (2 U.S.C. 1531-1538) requires Federal agencies to assess the effects of their discretionary regulatory actions. In particular, the Act addresses actions that may result in the expenditure by a State, local, or tribal government, in the aggregate, or by the private sector of $100,000,000 (adjusted for inflation) or more in any one year. Though this proposed rule will not result in such an expenditure, we do discuss the effects of this proposed rule elsewhere in this preamble.

    F. Environment

    We have analyzed this proposed rule under Department of Homeland Security Management Directive 023-01 and Commandant Instruction M16475.1D, which guide the Coast Guard in complying with the National Environmental Policy Act of 1969 (NEPA) (42 U.S.C. 4321-4370f), and have made a preliminary determination that this action is one of a category of actions which do not individually or cumulatively have a significant effect on the human environment. This proposed rule simply promulgates the operating regulations or procedures for drawbridges. Normally such actions are categorically excluded from further review under figure 2-1, paragraph (32)(e), of the Instruction.

    Under figure 2-1, paragraph (32)(e), of the Instruction, an environmental analysis checklist and a categorical exclusion determination are not required for this rule. We seek any comments or information that may lead to the discovery of a significant environmental impact from this proposed rule.

    G. Protest Activities

    The Coast Guard respects the First Amendment rights of protesters. Protesters are asked to contact the person listed in the FOR FURTHER INFORMATION CONTACT section to coordinate protest activities so that their message can be received without jeopardizing the safety or security of people, places, or vessels.

    V. Public Participation and Request for Comments

    We view public participation as essential to effective rulemaking, and will consider all comments and material received during the comment period. Your comment can help shape the outcome of this rulemaking. If you submit a comment, please include the docket number for this rulemaking, indicate the specific section of this document to which each comment applies, and provide a reason for each suggestion or recommendation.

    We encourage you to submit comments through the Federal eRulemaking Portal at http://www.regulations.gov. If your material cannot be submitted using http://www.regulations.gov, contact the person in the FOR FURTHER INFORMATION CONTACT section of this document for alternate instructions.

    We accept anonymous comments. All comments received will be posted without change to http://www.regulations.gov and will include any personal information you have provided. For more about privacy and the docket, you may review a Privacy Act notice regarding the Federal Docket Management System in the March 24, 2005, issue of the Federal Register (70 FR 15086).

    Documents mentioned in this notice, and all public comments, are in our online docket at http://www.regulations.gov and can be viewed by following that Web site's instructions. Additionally, if you go to the online docket and sign up for email alerts, you will be notified when comments are posted or a final rule is published.

    List of Subjects in 33 CFR Part 117

    Bridges.

    For the reasons discussed in the preamble, the Coast Guard proposes to amend 33 CFR part 117 as follows:

    PART 117—DRAWBRIDGE OPERATION REGULATIONS

    1. The authority citation for part 117 continues to read as follows:

    Authority:

    33 U.S.C. 499; 33 CFR 1.05-1; Department of Homeland Security Delegation No. 0170.1.

    2. Revise § 117.500 to read as follows:
    § 117.500 Tchefuncta River

    The draw of the SR 22 Bridge, mile 2.5, at Madisonville, shall open on signal from 7 p.m. to 6 a.m. From 6 a.m. to 7 p.m., the draw need only open on the hour, except that the draw need not open for the passage of vessels at 8 a.m., 5 p.m. and 6 p.m. Monday through Friday except federal holidays. The bridge will open at any time in the case of an emergency.

    Dated: October 31, 2016. David R. Callahan, Rear Admiral, U.S. Coast Guard, Commander, Eighth Coast Guard District.
    [FR Doc. 2016-26654 Filed 11-3-16; 8:45 am] BILLING CODE 9110-04-P
    ENVIRONMENTAL PROTECTION AGENCY

    40 CFR Parts 52 and 81

    [EPA-R06-OAR-2016-0293; FRL-9954-35-Region 6]

    Approval and Promulgation of Implementation Plans and Designation of Areas for Air Quality Planning Purposes; Louisiana; Redesignation of the Baton Rouge 2008 8-Hour Ozone Nonattainment Area to Attainment

    AGENCY:

    Environmental Protection Agency (EPA).

    ACTION:

    Proposed rule.

    SUMMARY:

    On May 2, 2016, the State of Louisiana submitted a request for the Environmental Protection Agency (EPA) to redesignate the five-parish Baton Rouge Nonattainment Area (BRNA or Area) for the 2008 8-hour ozone National Ambient Air Quality Standards (NAAQS or standard) to attainment and to approve a State Implementation Plan (SIP) revision containing a maintenance plan for the area. EPA is proposing to determine that the BRNA is continuing to attain the 2008 ozone NAAQS; to approve into the SIP the State's plan for maintaining attainment of the standard in the Area, including the motor vehicle emission budgets (MVEBs) for nitrogen oxides (NOX) and volatile organic compounds (VOC) for the years 2022 and 2027; and to redesignate the BRNA to attainment for the standard.

    DATES:

    Comments must be received on or before December 5, 2016.

    ADDRESSES:

    Submit your comments, identified by Docket No. EPA-R06-OAR-2016-0293, at http://www.regulations.gov or via email to [email protected]. Follow the online instructions for submitting comments. Once submitted, comments cannot be edited or removed from Regulations.gov. The EPA may publish any comment received to its public docket. Do not submit electronically any information you consider to be Confidential Business Information (CBI) or other information whose disclosure is restricted by statute. Multimedia submissions (audio, video, etc.) must be accompanied by a written comment. The written comment is considered the official comment and should include discussion of all points you wish to make. The EPA will generally not consider comments or comment contents located outside of the primary submission (i.e. on the web, cloud, or other file sharing system). For additional submission methods, please contact Wendy Jacques, (214) 665-7395, [email protected]. For the full EPA public comment policy, information about CBI or multimedia submissions, and general guidance on making effective comments, please visit http://www2.epa.gov/dockets/commenting-epa-dockets.

    Docket: The index to the docket for this action is available electronically at www.regulations.gov and in hard copy at the EPA Region 6, 1445 Ross Avenue, Suite 700, Dallas, Texas. While all documents in the docket are listed in the index, some information may be publicly available only at the hard copy location (e.g., copyrighted material), and some may not be publicly available at either location (e.g., CBI).

    FOR FURTHER INFORMATION CONTACT:

    Wendy Jacques, (214) 665-7395, [email protected]. To inspect the hard copy materials, please schedule an appointment with Ms. Jacques or Mr. Bill Deese at 214-665-7253.

    SUPPLEMENTARY INFORMATION:

    Throughout this document wherever “we”, “us”, or “our” is used, we mean the EPA.

    I. What are EPA's proposed actions?

    EPA is proposing to take the following three separate but related actions, one of which involves multiple elements: (1) To determine that the BRNA continues to attain the 2008 ozone NAAQS; 1 (2) to approve into the SIP Louisiana's plan for maintaining the 2008 ozone NAAQS (maintenance plan), including the associated MVEBs for the BRNA; and (3) to redesignate the BRNA to attainment for the 2008 ozone NAAQS. EPA is also notifying the public of the status of EPA's adequacy determination for the MVEBs for the BRNA. The BRNA comprises the five parishes that make up the historical metropolitan statistical area: Ascension, East Baton Rouge, Iberville, Livingston, and West Baton Rouge. Today's proposed actions are summarized below and described in greater detail throughout this notice of proposed rulemaking.

    1 On May 4, 2016, we determined that the BRNA had attained the ozone NAAQS, by the applicable attainment date of July 20, 2015, based on 2012-2014 monitoring data. See 81 FR 26697.

    EPA is proposing to approve Louisiana's maintenance plan for the BRNA as meeting the requirements of section 175A [such approval being one of the Clean Air Act (CAA or Act) criteria for redesignation to attainment status]. The maintenance plan is designed to keep the BRNA in attainment of the 2008 ozone NAAQS through 2027. The maintenance plan includes 2022 and 2027 MVEBs for NOX and VOC for the BRNA for transportation conformity purposes. EPA is proposing to approve these MVEBs and incorporate them into the Louisiana SIP.

    EPA also proposes to determine that the BRNA has met the requirements for redesignation under section 107(d)(3)(E) of the CAA. Accordingly, in this action, EPA is proposing to approve a request to change the legal designation of the BRNA, as found at 40 CFR part 81, from nonattainment to attainment for the 2008 ozone NAAQS.

    EPA is also notifying the public of the status of EPA's adequacy process for the 2022 and 2027 NOX and VOC MVEBs for the BRNA. The Adequacy comment period began on May 6, 2016, with EPA's posting of the availability of Louisiana's submissions on EPA's Adequacy Web site (http://www3.epa.gov/otaq/stateresources/transconf/currsips.htm). The Adequacy comment period for these MVEBs closed on June 6, 2016. No comments, adverse or otherwise, were received during the Adequacy comment period. Please see section VII of this proposed rulemaking for further explanation of this process and for more details on the MVEBs.

    In summary, today's notice of proposed rulemaking is in response to Louisiana's May 2, 2016, redesignation request and associated SIP submission that address the specific issues summarized above and the necessary elements described in section 107(d)(3)(E) of the CAA for redesignation of the BRNA to attainment for the 2008 ozone NAAQS.

    II. What is the background for EPA's proposed actions?

    On March 12, 2008, EPA promulgated a revised 8-hour ozone NAAQS of 0.075 parts per million (ppm). See 73 FR 16436 (March 27, 2008). Under EPA's regulations at 40 CFR part 50, the 2008 ozone NAAQS is attained when the 3-year average of the annual fourth highest daily maximum 8-hour average ambient air quality ozone concentrations is less than or equal to 0.075 ppm. See 40 CFR 50.15. Ambient air quality monitoring data for the 3-year period must meet a data completeness requirement. The ambient air quality monitoring data completeness requirement is met when the average percent of days with valid ambient monitoring data is equal to or greater than 90 percent, and no single year has less than 75 percent data completeness as determined in Appendix P of part 50.
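    As a purely illustrative aid, and not part of EPA's regulations, the data completeness test described above can be expressed as a short check: the annual percentages of days with valid data must average at least 90 percent, and no single year may fall below 75 percent. The percentages used in the sketch below are hypothetical.

        # Minimal sketch of the completeness test described above; the percentages
        # are hypothetical, and Appendix P to 40 CFR part 50 governs the actual test.
        def meets_completeness(yearly_percent_valid):
            average = sum(yearly_percent_valid) / len(yearly_percent_valid)
            return average >= 90.0 and min(yearly_percent_valid) >= 75.0

        print(meets_completeness([96.0, 88.0, 91.0]))  # True: average 91.7, no year below 75
        print(meets_completeness([98.0, 99.0, 70.0]))  # False: one year below 75 percent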

    Upon promulgation of a new or revised NAAQS, the CAA requires EPA to designate as nonattainment any area that is violating the NAAQS, based on the three most recent years of complete, quality assured, and certified ambient air quality data at the conclusion of the designation process. The BRNA was designated nonattainment for the 2008 ozone NAAQS on May 21, 2012 (effective July 20, 2012) using 2008-2010 ambient air quality data. See 77 FR 30088 (May 21, 2012). At the time of designation, the BRNA was classified as a marginal nonattainment area for the 2008 ozone NAAQS. In the final implementation rule for the 2008 ozone NAAQS (SIP Implementation Rule),2 EPA established ozone nonattainment area attainment dates based on Table 1 of section 181(a) of the CAA. This rule established an attainment date three years after the July 20, 2012, effective date of designation for areas classified as marginal for the 2008 ozone nonattainment designations.3 Therefore, the BRNA's attainment date was July 20, 2015.

    2 This rule, entitled Implementation of the 2008 National Ambient Air Quality Standards for Ozone: State Implementation Plan Requirements and published at 80 FR 12264 (March 6, 2015), addresses a range of nonattainment area SIP requirements for the 2008 ozone NAAQS, including requirements pertaining to attainment demonstrations, reasonable further progress, RACT, reasonably available control measures, major NSR, emission inventories, and the timing of SIP submissions and of compliance with emission control measures in the SIP. This rule also addresses the revocation of the 1997 ozone NAAQS and the anti-backsliding requirements that apply when the 1997 ozone NAAQS are revoked.

    3 The SIP Implementation Rule modified 40 CFR 51.1103 to establish attainment dates that run from the effective date of designation, i.e., July 20, 2012. This action was in response to the D.C. Circuit's decision in NRDC v. EPA (D.C. Cir. No. 12-1321) (Dec. 23, 2014). The Court's decision held “that the EPA's decision to run the attainment periods from the end of the calendar year in which areas were designated was unreasonable.” 80 FR 12264, at 12268.

    III. What are the criteria for redesignation?

    The CAA provides the requirements for redesignating a nonattainment area to attainment. Specifically, section 107(d)(3)(E) of the CAA allows for redesignation providing that: (1) The Administrator determines that the area has attained the applicable NAAQS; (2) the Administrator has fully approved the applicable implementation plan for the area under section 110(k); (3) the Administrator determines that the improvement in air quality is due to permanent and enforceable reductions in emissions resulting from implementation of the applicable SIP and applicable Federal air pollutant control regulations and other permanent and enforceable reductions; (4) the Administrator has fully approved a maintenance plan for the area as meeting the requirements of section 175A; and, (5) the state containing such area has met all requirements applicable to the area for purposes of redesignation under section 110 and part D of the CAA.

    On April 16, 1992, EPA provided guidance on redesignation in the General Preamble for the Implementation of title I of the CAA Amendments of 1990 (57 FR 13498), and supplemented this guidance on April 28, 1992 (57 FR 18070). EPA has provided further guidance on processing redesignation requests in the following documents:

    1. “Ozone and Carbon Monoxide Design Value Calculations,” Memorandum from Bill Laxton, Director, Technical Support Division, June 18, 1990;

    2. “Maintenance Plans for Redesignation of Ozone and Carbon Monoxide Nonattainment Areas,” Memorandum from G.T. Helms, Chief, Ozone/Carbon Monoxide Programs Branch, April 30, 1992;

    3. “Contingency Measures for Ozone and Carbon Monoxide (CO) Redesignations,” Memorandum from G.T. Helms, Chief, Ozone/Carbon Monoxide Programs Branch, June 1, 1992;

    4. “Procedures for Processing Requests to Redesignate Areas to Attainment,” Memorandum from John Calcagni, Director, Air Quality Management Division, September 4, 1992 (hereafter referred to as the “Calcagni Memorandum”);

    5. “State Implementation Plan (SIP) Actions Submitted in Response to Clean Air Act (CAA) Deadlines,” Memorandum from John Calcagni, Director, Air Quality Management Division, October 28, 1992;

    6. “Technical Support Documents (TSDs) for Redesignation of Ozone and Carbon Monoxide (CO) Nonattainment Areas,” Memorandum from G.T. Helms, Chief, Ozone/Carbon Monoxide Programs Branch, August 17, 1993;

    7. “State Implementation Plan (SIP) Requirements for Areas Submitting Requests for Redesignation to Attainment of the Ozone and Carbon Monoxide (CO) National Ambient Air Quality Standards (NAAQS) On or After November 15, 1992,” Memorandum from Michael H. Shapiro, Acting Assistant Administrator for Air and Radiation, September 17, 1993;

    8. “Use of Actual Emissions in Maintenance Demonstrations for Ozone and CO Nonattainment Areas,” Memorandum from D. Kent Berry, Acting Director, Air Quality Management Division, November 30, 1993;

    9. “Part D New Source Review (Part D NSR) Requirements for Areas Requesting Redesignation to Attainment,” Memorandum from Mary D. Nichols, Assistant Administrator for Air and Radiation, October 14, 1994; and

    10. “Reasonable Further Progress, Attainment Demonstration, and Related Requirements for Ozone Nonattainment Areas Meeting the Ozone National Ambient Air Quality Standard,” Memorandum from John S. Seitz, Director, Office of Air Quality Planning and Standards, May 10, 1995.

    IV. Why is EPA proposing these actions?

    On May 2, 2016, the State of Louisiana, through the Louisiana Department of Environmental Quality (LDEQ), requested that EPA redesignate the BRNA to attainment for the 2008 ozone NAAQS. EPA's evaluation indicates that the entire BRNA has attained the 2008 ozone NAAQS, and that the BRNA meets the requirements for redesignation as set forth in section 107(d)(3)(E), including the maintenance plan requirements under section 175A of the CAA. As a result, EPA is proposing to take the three related actions summarized in section I of this notice.

    V. What is EPA's analysis of the request?

    Our analysis of the State's request with respect to the five redesignation criteria provided under CAA section 107(d)(3)(E) is discussed in the following paragraphs of this section.

    Criteria (1)—The BRNA has attained the 2008 ozone NAAQS.

    For redesignating a nonattainment area to attainment, the CAA requires EPA to determine that the area has attained the applicable NAAQS (CAA section 107(d)(3)(E)(i)). For ozone, an area may be considered to be attaining the 2008 ozone NAAQS if it meets the 2008 ozone NAAQS, as determined in accordance with 40 CFR 50.15 and Appendix P of part 50, based on three complete, consecutive calendar years of quality-assured air quality monitoring data. To attain the 2008 ozone NAAQS, the 3-year average of the annual fourth-highest daily maximum 8-hour average ozone concentrations measured at each monitor within an area must not exceed 0.075 ppm. Based on the data handling and reporting convention described in 40 CFR part 50, Appendix P, the 2008 ozone NAAQS is attained if the design value is 0.075 ppm or below. The data must be collected and quality-assured in accordance with 40 CFR part 58 and recorded in the EPA Air Quality System (AQS). The monitors generally should have remained at the same location for the duration of the monitoring period required for demonstrating attainment.

    EPA is proposing to determine that the BRNA is continuing to attain the 2008 ozone NAAQS. EPA reviewed ozone monitoring data from monitoring stations in the BRNA for the 2008 8-hour ozone NAAQS for 2011-2015, and the design values for each monitor in the Area are less than 0.075 ppm. These data have been quality-assured, are recorded in AQS, and indicate that the Area is attaining the 2008 ozone NAAQS. The fourth-highest 8-hour ozone values at each monitor for 2011, 2012, 2013, 2014, 2015, and the 3-year averages of these values (i.e., design values), are summarized in Table 1, below.

    Table 1—2011-2015 Design Value Concentrations for the BRNA

    Site                  4th highest 8-hour ozone value (ppm)       3-year design values (ppm)
                          2011    2012    2013    2014    2015       2011-2013   2012-2014   2013-2015
    Plaquemine            0.079   0.074   0.061   0.061   0.069      0.071       0.065       0.063
    Carville              0.084   0.073   0.068   0.068   0.075      0.075       0.069       0.070
    Dutchtown             0.080   0.071   0.062   0.069   0.074      0.071       0.067       0.068
    LSU                   0.083   0.075   0.067   0.075   0.073      0.075       0.072       0.071
    Port Allen            0.074   0.070   0.060   0.066   0.066      0.068       0.065       0.064
    Pride                 0.075   0.070   0.062   0.068   0.062      0.069       0.066       0.064
    French Settlement     0.077   0.071   0.069   0.073   0.070      0.072       0.071       0.070
    Capitol               0.080   0.072   0.066   0.070   0.069      0.072       0.069       0.068

    The 3-year design value for 2011-2013 for the BRNA is 0.075 ppm,4 which meets the 2008 ozone NAAQS. Further, quality-assured data show that the 2012-2014 design value for the BRNA decreased to 0.072 ppm and the 2013-2015 design value decreased to 0.071 ppm. In today's action, EPA is proposing to determine that the BRNA is continuing to attain the 2008 ozone NAAQS. EPA will not take final action to approve the redesignation if the 3-year design value exceeds the NAAQS prior to EPA finalizing the redesignation. As discussed in more detail below, the State of Louisiana has committed to continue monitoring in this Area in accordance with 40 CFR part 58.

    4 The monitor with the highest 3-year design value is considered the design value for the BRNA.
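    For illustration only, the arithmetic behind Table 1 can be reproduced with the 4th-highest values expressed in parts per billion (0.075 ppm = 75 ppb). The truncation step in the sketch below is an assumption based on the data handling conventions referenced above; 40 CFR part 50, Appendix P governs the official calculation.

        # Minimal sketch of the design value arithmetic, using the 2011-2013
        # 4th-highest values from Table 1 in ppb; truncation to whole ppb is an
        # assumption here, and 40 CFR part 50, Appendix P is controlling.
        FOURTH_HIGHS_PPB_2011_2013 = {
            "Plaquemine": [79, 74, 61], "Carville": [84, 73, 68],
            "Dutchtown": [80, 71, 62], "LSU": [83, 75, 67],
            "Port Allen": [74, 70, 60], "Pride": [75, 70, 62],
            "French Settlement": [77, 71, 69], "Capitol": [80, 72, 66],
        }

        def design_value_ppb(fourth_highs):
            return sum(fourth_highs) // 3   # 3-year average, truncated to whole ppb

        monitor_dvs = {site: design_value_ppb(v) for site, v in FOURTH_HIGHS_PPB_2011_2013.items()}
        area_dv = max(monitor_dvs.values())  # the Area value is the highest monitor design value
        print(monitor_dvs["Carville"], area_dv)  # 75, 75 -> 0.075 ppm, meeting the 2008 NAAQS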

    Criteria (2)—Louisiana has a fully approved SIP under section 110(k) for the BRNA; and Criteria (5)—Louisiana has met all applicable requirements under section 110 and part D of title I of the CAA.

    For redesignating a nonattainment area to attainment, the CAA requires EPA to determine that the state has met all applicable requirements under section 110 and part D of title I of the CAA (CAA section 107(d)(3)(E)(v)) and that the state has a fully approved SIP under section 110(k) for the area (CAA section 107(d)(3)(E)(ii)). EPA proposes to find that Louisiana has met all applicable SIP requirements for the BRNA under section 110 of the CAA (general SIP requirements) for purposes of redesignation. Additionally, EPA proposes to find that the Louisiana SIP satisfies the criterion that it meets applicable SIP requirements for purposes of redesignation under part D of title I of the CAA in accordance with section 107(d)(3)(E)(v). Further, EPA proposes to determine that the SIP is fully approved with respect to all requirements applicable for purposes of redesignation in accordance with section 107(d)(3)(E)(ii). In making these determinations, EPA ascertained which requirements are applicable to the Area and, if applicable, that they are fully approved under section 110(k). SIPs must be fully approved only with respect to requirements that were applicable prior to submittal of the complete redesignation request. See Sierra Club v. EPA, 375 F.3d 537 (7th Cir. 2004). See also 68 FR 25424, 25427 (May 12, 2003) (redesignation of St. Louis, Missouri); September 4, 1992 Calcagni memorandum; September 17, 1993 Michael Shapiro memorandum, and 60 FR 12459, 12465-66 (March 7, 1995) (redesignation of Detroit-Ann Arbor, MI).

    a. The BRNA Has Met All Applicable Requirements Under Section 110 and Part D of the CAA

    General SIP requirements. General SIP elements and requirements are delineated in section 110(a)(2) of title I, part A of the CAA. These requirements include, but are not limited to, the following: Submittal of a SIP that has been adopted by the state after reasonable public notice and hearing; provisions for establishment and operation of appropriate procedures needed to monitor ambient air quality; implementation of a source permit program; provisions for the implementation of part C requirements (Prevention of Significant Deterioration (PSD)) and provisions for the implementation of part D requirements (Nonattainment NSR permit programs); provisions for air pollution modeling; and provisions for public and local agency participation in planning and emission control rule development.

    Section 110(a)(2)(D) requires that SIPs contain certain measures to prevent sources in a state from significantly contributing to air quality problems in another state. To implement this provision, EPA has required certain states to establish programs to address the interstate transport of air pollutants. The section 110(a)(2)(D) requirements for a state are not linked with a particular nonattainment area's designation and classification in that state. EPA believes that the requirements linked with a particular nonattainment area's designation and classifications are the relevant measures to evaluate in reviewing a redesignation request. The transport SIP submittal requirements, where applicable, continue to apply to a state regardless of the designation of any one particular area in the state. Thus, EPA does not believe that the CAA's interstate transport requirements should be construed to be applicable requirements for purposes of redesignation. See 75 FR 2091, January 14, 2010.

    In addition, EPA believes other section 110 elements that are neither connected with nonattainment plan submissions nor linked with an area's attainment status are not applicable requirements for purposes of redesignation. The area will still be subject to these requirements after the area is redesignated. The section 110 and part D requirements that are linked with a particular area's designation and classification are the relevant measures to evaluate in reviewing a redesignation request. This approach is consistent with EPA's existing policy on applicability (i.e., for redesignations) of conformity and oxygenated fuels requirements, as well as with section 184 ozone transport requirements. See Reading, Pennsylvania, proposed and final rulemakings (61 FR 53174, October 10, 1996), (62 FR 24826, May 7, 1997); Cleveland-Akron-Lorain, Ohio, final rulemaking (61 FR 20458, May 7, 1996); and Tampa, Florida, final rulemaking (60 FR 62748, December 7, 1995). See also the discussion on this issue in the Cincinnati, Ohio, redesignation (65 FR 37890, June 19, 2000), and in the Pittsburgh, Pennsylvania, redesignation (66 FR 50399, October 19, 2001).

    Title I, Part D, applicable SIP requirements. Section 172(c) of the CAA sets forth the basic requirements of attainment plans for nonattainment areas that are required to submit them pursuant to section 172(b). Subpart 2 of part D, which includes section 182 of the CAA, establishes specific requirements for ozone nonattainment areas depending on the area's nonattainment classification. As provided in Subpart 2, the specific requirements of section 182(a) apply in lieu of the demonstration of attainment (and contingency measures) required by section 172(c). 42 U.S.C. 7511a(a). A thorough discussion of the requirements contained in sections 172(c) and 182 can be found in the General Preamble for Implementation of Title I (57 FR 13498, April 16, 1992).

    Section 182(a) Requirements. Section 182(a)(1) requires states to submit a comprehensive, accurate, and current inventory of actual emissions from sources of VOC and NOX emitted within the boundaries of the ozone nonattainment area. Louisiana provided an emissions inventory for the BRNA to EPA in this SIP submission. On July 5, 2016, EPA published a direct final rule to approve this emissions inventory into the SIP. See 81 FR 43490.

    Under section 182(a)(2)(A), states with ozone nonattainment areas that were designated prior to the enactment of the 1990 CAA amendments were required to submit, within six months of classification, all rules and corrections to existing VOC reasonably available control technology (RACT) requirements that were required under section 172(b)(3) of the CAA (and related guidance) prior to the 1990 CAA amendments. The BRNA is subject to the section 182(a)(2) RACT “fix up” requirement, and the required submission has been approved (59 FR 23166, May 5, 1994).

    Section 182(a)(2)(B) requires each state with a marginal ozone nonattainment area that implemented, or was required to implement, an inspection and maintenance (I/M) program prior to the 1990 CAA amendments to submit a SIP revision providing for an I/M program no less stringent than that required prior to the 1990 CAA amendments or already in the SIP at the time of the amendments, whichever is more stringent. The BRNA is subject to section 182(a)(2)(B) and has an approved I/M program (71 FR 66113, November 13, 2006).

    Regarding the permitting and offset requirements of section 182(a)(2)(C) and section 182(a)(4), Louisiana has an approved part D NSR program in place (62 FR 52948, October 10, 1997). However, EPA has determined that areas being redesignated need not comply with the requirement that a NSR program be approved prior to redesignation, provided that the area demonstrates maintenance of the NAAQS without part D NSR, because PSD requirements will apply after redesignation. A more detailed rationale for this view is described in a memorandum from Mary Nichols, Assistant Administrator for Air and Radiation, dated October 14, 1994, entitled “Part D New Source Review Requirements for Areas Requesting Redesignation to Attainment.” Louisiana's PSD program will automatically become applicable in the BRNA upon redesignation to attainment. See Louisiana Regulations Title 33, Part III, Chapter 5, section 504, which is part of the SIP.

    Section 182(a)(3) requires states to submit periodic inventories and emissions statements. Section 182(a)(3)(A) requires states to submit a periodic inventory every three years. As discussed below in the section of this notice titled Criteria (4)(e), Verification of Continued Attainment, the State will continue to update its emissions inventory at least once every three years. Under section 182(a)(3)(B), each state with an ozone nonattainment area must submit a SIP revision requiring emissions statements to be submitted to the state by sources within that nonattainment area. Louisiana provided a SIP revision to EPA on March 3, 1993, addressing the section 182(a)(3)(B) emissions statements requirement, and on January 6, 1995, EPA published a final rule to approve this SIP revision. See 60 FR 2014.

    Section 176 Conformity Requirements. Section 176(c) of the CAA requires states to establish criteria and procedures to ensure that federally supported or funded projects conform to the air quality planning goals in the applicable SIP. The requirement to determine conformity applies to transportation plans, programs, and projects that are developed, funded, or approved under title 23 of the United States Code and the Federal Transit Act (transportation conformity) as well as to all other federally supported or funded projects (general conformity). State transportation conformity SIP revisions must be consistent with Federal conformity regulations relating to consultation, enforcement, and enforceability that EPA promulgated pursuant to its authority under the CAA.

    EPA interprets the conformity SIP requirements 5 as not applying for purposes of evaluating a redesignation request under section 107(d) because state conformity rules are still required after redesignation and Federal conformity rules apply where state rules have not been approved. See Wall v. EPA, 265 F.3d 426 (6th Cir. 2001) (upholding this interpretation); see also 60 FR 62748 (December 7, 1995) (redesignation of Tampa, Florida). Nonetheless, Louisiana has an approved conformity SIP. See 71 FR 63247 (October 30, 2006). EPA proposes that the BRNA has satisfied all applicable requirements for purposes of redesignation under section 110 and part D of title I of the CAA.

    5 CAA section 176(c)(4)(E) requires states to submit revisions to their SIPs to reflect certain Federal criteria and procedures for determining transportation conformity. Transportation conformity SIPs are different from the MVEBs that are established in control strategy SIPs and maintenance plans.

    b. The BRNA has a fully approved applicable SIP under section 110(k) of the CAA.

    EPA has fully approved the applicable Louisiana SIP for the BRNA under section 110(k) of the CAA for all requirements applicable for purposes of redesignation. EPA may rely on prior SIP approvals in approving a redesignation request (see Calcagni Memorandum at p. 3; Southwestern Pennsylvania Growth Alliance v. Browner, 144 F.3d 984, 989-90 (6th Cir. 1998); Wall, 265 F.3d 426) plus any additional measures it may approve in conjunction with a redesignation action (see 68 FR 25426, May 12, 2003, and citations therein). Louisiana has adopted and submitted, and EPA has fully approved at various times, provisions addressing the various SIP elements applicable for the ozone NAAQS. See e.g. 76 FR 74000, November 15, 2011.

    As indicated above, EPA believes that the section 110 elements that are neither connected with nonattainment plan submissions nor linked to an area's nonattainment status are not applicable requirements for purposes of redesignation. EPA has approved all part D requirements applicable for purposes of this redesignation.

    Criteria (3)—The air quality improvement in the BRNA is due to permanent and enforceable reductions in emissions resulting from implementation of the SIP and applicable Federal air pollution control regulations and other permanent and enforceable reductions.

    For redesignating a nonattainment area to attainment, the CAA requires EPA to determine that the air quality improvement in the area is due to permanent and enforceable reductions in emissions resulting from implementation of the SIP, applicable Federal air pollution control regulations, and other permanent and enforceable reductions (CAA section 107(d)(3)(E)(iii)). EPA has preliminarily determined that Louisiana has demonstrated that the observed air quality improvement in the BRNA is due to permanent and enforceable reductions in emissions resulting from Federal measures and from state measures adopted into the SIP. EPA does not have any information to suggest that the decrease in ozone concentrations in the BRNA is due to unusually favorable meteorological conditions.

    Federal measures enacted in recent years have resulted in permanent emission reductions. Most of these emission reductions are enforceable through regulations. The Federal measures that have been implemented include the following:

    Tier 2 vehicle and fuel standards. Implementation began in phases in 2004 and requires all passenger vehicles in any manufacturer's fleet to meet an average standard of 0.07 grams of NOX per mile. Beginning in January 2006, the sulfur content of gasoline was required to average 30 ppm, which assists in lowering NOX emissions (65 FR 6698, February 10, 2000).6

    6 Louisiana also identified the Tier 3 Motor Vehicle Emissions and Fuel Standards as a federal measure. EPA issued this rule on April 28, 2014; it applies to light-duty passenger cars and trucks. EPA promulgated this rule to reduce air pollution from new passenger cars and trucks beginning in 2017. The Tier 3 standards will lower the sulfur content of gasoline and tighten vehicle emission standards.

    Large non-road diesel engines rule. This rule was promulgated in 2004 and was phased in from 2008 through 2014 (69 FR 38958, June 29, 2004). This rule reduces the sulfur content of nonroad diesel fuel and also reduces NOX, VOC, particulate matter, and carbon monoxide emissions. These emission reductions are federally enforceable. This rule applies to diesel engines used in industries such as construction, agriculture, and mining. It is estimated that compliance with this rule will cut NOX emissions from non-road diesel engines by up to 90 percent nationwide.

    Heavy-duty gasoline and diesel highway vehicle standards. EPA issued this rule in January 2001 (66 FR 5002). This rule includes standards limiting the sulfur content of diesel fuel, which went into effect in 2004. A second phase of the rule took effect in 2007, which further reduced the highway diesel fuel sulfur content to 15 ppm, leading to additional reductions in combustion NOX and VOC emissions. EPA expects that this rule will achieve a 95 percent reduction in NOX emissions from diesel trucks and buses and will reduce NOX emissions by 2.6 million tons by 2030 when the heavy-duty vehicle fleet is completely replaced with newer heavy-duty vehicles that comply with these emission standards.7

    7 66 FR 5002, 5012 (January 18, 2001).

    Nonroad spark-ignition engines and recreational engines standards. The nonroad spark-ignition and recreational engine standards, effective in January 2003, regulate NOX, hydrocarbons, and carbon monoxide from groups of previously unregulated nonroad engines (67 FR 68242, November 8, 2002). These engine standards apply to large spark-ignition engines (e.g., forklifts and airport ground service equipment), recreational vehicles (e.g., off-highway motorcycles and all-terrain-vehicles), and recreational marine diesel engines sold in the United States and imported after the effective date of these standards. When all of the nonroad spark-ignition and recreational engine standards are fully implemented, an overall 72 percent reduction in hydrocarbons, 80 percent reduction in NOX, and 56 percent reduction in carbon monoxide emissions are expected by 2020. These controls reduce ambient concentrations of ozone, carbon monoxide, and fine particulate matter.

    National program for greenhouse gas (GHG) emissions and fuel economy standards. The federal GHG and fuel economy standards apply to light-duty cars and trucks in model years 2012-2016 (phase 1) (75 FR 25324, May 7, 2010) and 2017-2025 (phase 2) (proposed at 80 FR 40138, July 13, 2015). The final standards are projected to result in an average industry fleet-wide level of 163 grams/mile of carbon dioxide, which is equivalent to 54.5 miles per gallon if achieved exclusively through fuel economy improvements. The fuel economy standards result in less fuel being consumed and, therefore, lower NOX emissions.
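    For illustration only, the fleet-wide equivalence cited above follows from dividing the carbon dioxide emitted by burning a gallon of gasoline by the per-mile target. The conversion factor of roughly 8,887 grams of CO2 per gallon used below is an assumption made here for the arithmetic and is not taken from the State's submittal.

        # Rough arithmetic behind the 163 g/mile to 54.5 mpg equivalence quoted above;
        # 8,887 g CO2 per gallon of gasoline is an assumed conversion factor.
        GRAMS_CO2_PER_GALLON = 8887
        fleet_target_g_per_mile = 163
        print(round(GRAMS_CO2_PER_GALLON / fleet_target_g_per_mile, 1))  # about 54.5 mpg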

    Point Sources. In the submittal, Louisiana noted its adoption of a NOX control rule that was approved by EPA (76 FR 38977, July 5, 2011). Additionally, we note that RACT controls were implemented in the area for the 1997 ozone NAAQS (76 FR 74000, November 30, 2011, and 76 FR 75467, December 2, 2011).

    Criteria (4)—The BRNA has a fully approved maintenance plan pursuant to section 175A of the CAA.

    For redesignating a nonattainment area to attainment, the CAA requires EPA to determine that the area has a fully approved maintenance plan pursuant to section 175A of the CAA (CAA section 107(d)(3)(E)(iv)). In conjunction with its request to redesignate the BRNA to attainment for the 2008 ozone NAAQS, LDEQ submitted a SIP revision to provide for the maintenance of the 2008 ozone NAAQS for at least 10 years after the effective date of redesignation to attainment. EPA believes that this maintenance plan meets the requirements for approval under section 175A of the CAA.

    a. What is required in a maintenance plan?

    Section 175A of the CAA sets forth the elements of a maintenance plan for areas seeking redesignation from nonattainment to attainment. Under section 175A, the plan must demonstrate continued attainment of the applicable NAAQS for at least 10 years after the Administrator approves a redesignation to attainment. Eight years after the redesignation, the state must submit a revised maintenance plan demonstrating that attainment will continue to be maintained for the 10 years following the initial 10-year period. To address the possibility of future NAAQS violations, the maintenance plan must contain contingency measures as necessary to assure prompt correction of any future violations of the 2008 ozone NAAQS. The Calcagni Memorandum provides further guidance on the content of a maintenance plan, explaining that a maintenance plan should address five requirements: The attainment emissions inventory, maintenance demonstration, monitoring, verification of continued attainment, and a contingency plan.8 As is discussed more fully below, EPA is proposing to determine that Louisiana's maintenance plan includes all the necessary components and is thus proposing to approve it as a revision to the Louisiana SIP.

    8 Procedures for Processing Requests to Redesignate Areas to Attainment, Memorandum from John Calcagni, Director, Air Quality Management Division, September 4, 1992.

    b. Attainment Emissions Inventory

    EPA is proposing to determine that the BRNA has attained the 2008 ozone NAAQS based on quality-assured monitoring data for the 3-year period from 2011-2013, and is continuing to attain the standard based on 2012-2014 and 2013-2015 data. Louisiana selected 2011 as the base year (i.e., attainment emissions inventory year) for developing a comprehensive emissions inventory for NOx and VOC, for which projected emissions could be developed for 2022 and 2027. The attainment inventory identifies a level of emissions in the Area that is sufficient to attain the 2008 ozone NAAQS. Louisiana began development of the attainment inventory by first generating a baseline emissions inventory for the State's portion of the BRNA. The projected emission inventories have been estimated using projected rates of growth in population, traffic, economic activity, and other parameters. In addition to comparing the final year of the plan (2027) to the base year (2011), Louisiana compared an interim year to the baseline to demonstrate that this year is also expected to show continued maintenance of the 2008 ozone standard.

    The emissions inventory is composed of four major types of sources: nonroad, onroad, nonpoint, and point. Complete descriptions of how the inventories were developed are provided in Appendices F and K of the May 2, 2016, submittal, which can be found in the docket for this action. The 2011 NOx and VOC emissions for the BRNA, as well as the emissions for other years, were developed consistent with EPA guidance and are summarized in Table 2 in the following subsection discussing the maintenance demonstration.

    c. Maintenance Demonstration

    The maintenance plan associated with the redesignation request includes a maintenance demonstration that:

    (i) Shows compliance with and maintenance of the 2008 ozone NAAQS by providing information to support the demonstration that current and future emissions of NOx and VOC remain at or below 2011 emissions levels.

    (ii) Uses 2011 as the attainment year and includes future emissions inventory projections for 2022 and 2027.

    (iii) Identifies an “out year” at least 10 years after the time necessary for EPA to review and approve the maintenance plan. Per 40 CFR part 93, NOx and VOC MVEBs were established for 2022 and 2027 (see section VII below).

    (iv) Provides actual (2011) and projected emissions inventories, in tons per day (tpd), for the BRNA, as shown in Table 2, below.

    On July 5, 2016, we approved the BRNA 2011 Base Year Emissions Inventory (EI) for the 2008 8-hour ozone NAAQS. See 81 FR 43490. LDEQ developed projected EIs for the years 2022 and 2027 using the 2011 EI (Table 2). The projected emissions for 2022 and 2027 indicate that ozone precursor emissions in the BRNA will remain below those in the attainment year inventory for the duration of the maintenance plan. While LDEQ projected an increase in NOx and VOC emissions from the nonpoint source sector, it projected that the increases from this sector would be offset by reductions in the nonroad mobile and onroad mobile source sectors. LDEQ will compare emission inventory data submitted to the National Emission Inventory with the emission growth data in the maintenance plan to ensure that emission reductions, from all sources collectively, continue the downward trend.
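    As a rough cross-check of the NOX figures shown in Table 2 below, the sector values can be summed directly; small differences from the printed totals and the printed 2011-2027 change can arise because the sector estimates are rounded to the nearest 0.1 tpd. The sketch is illustrative only, and the State's submittal remains the authoritative source.

        # Rough cross-check of the NOX columns of Table 2 (tons per day); values are
        # the rounded sector estimates, so sums may differ slightly from the table.
        SECTORS_NOX = {"Nonpoint": (17.1, 17.9), "Nonroad": (27.3, 15.2),
                       "Onroad": (38.4, 11.0), "Point": (74.2, 74.2)}  # (2011, 2027)

        total_2011 = sum(v[0] for v in SECTORS_NOX.values())   # 157.0 tpd
        total_2027 = sum(v[1] for v in SECTORS_NOX.values())   # about 118 tpd
        deltas = {s: round(v[1] - v[0], 1) for s, v in SECTORS_NOX.items()}
        print(round(total_2027 - total_2011, 1), deltas)
        # The onroad and nonroad decreases more than offset the nonpoint increase,
        # keeping projected NOX emissions below the 2011 attainment-year level.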

    Table 2—Summary of 2011 and Future NOX and VOC Emissions (tpd) for the BRNA

    Sector      2011 NOX   2011 VOC   2022 NOX   2022 VOC   2027 NOX   2027 VOC   Δ 2011-2027 NOX   Δ 2011-2027 VOC
    Nonpoint    17.1       82.6       17.9       90.5       17.9       92.7        0.8               10.1
    Nonroad     27.3        8.7       12.6        6.5       15.2        6.1       −12.1              −2.6
    Onroad      38.4       19.2       14.4       13.0       11.0       11.4       −27.4              −7.8
    Point       74.2       33.6       74.2       33.6       74.2       33.6        0.0                0.0
    Total      157.0      144.0      119.0      143.5      118.2      143.6       −38.8              −0.4

    d. Monitoring Network

    There currently are 8 monitors measuring ozone in the BRNA. The State of Louisiana, through LDEQ, has committed to continue operation of the monitors in the BRNA throughout the maintenance period in compliance with 40 CFR part 58.

    e. Verification of Continued Attainment

    The State of Louisiana, through LDEQ, has the legal authority to enforce and implement the maintenance plan for the BRNA. This includes the authority to adopt, implement, and enforce any subsequent emissions control contingency measures determined to be necessary to correct future ozone attainment problems.

    LDEQ will track the progress of the maintenance plan through continued ambient ozone monitoring in accordance with the requirements of 40 CFR part 58, and by performing future reviews of actual emissions from all sources in the area using the latest emissions factors, models, and methodologies. LDEQ will work with EPA to ensure that the air monitoring network continues to be effective and will quality assure the data according to Federal requirements as one way to verify continued attainment.

    Additionally, under the Air Emissions Reporting Requirements (AERR), LDEQ is required to develop a comprehensive, annual, statewide emissions inventory every three years that is due twelve to eighteen months after the completion of the inventory year. As noted above, LDEQ will compare emission inventory data submitted to the National Emission Inventory with the emission growth data submitted in the maintenance plan to ensure emission reductions (from all sources, collectively) continue the downward trend.

    f. Contingency Measures in the Maintenance Plan

    Section 175A of the CAA requires that a maintenance plan include such contingency measures as EPA deems necessary to assure that the state will promptly correct a violation of the NAAQS that occurs after redesignation. The maintenance plan should identify the contingency measures to be adopted, a schedule and procedure for adoption and implementation, and a time limit for action by the state. A state should also identify specific indicators to be used to determine when the contingency measures need to be implemented.

    The contingency plan included in the submittal includes a triggering mechanism to determine when contingency measures are needed and a process for developing and implementing appropriate control measures. The trigger of the contingency plan will be a violation of the 2008 ozone NAAQS (i.e., when the three-year average of the 4th highest values at a monitor in the Area exceeds 0.075 ppm).

    Once the trigger is activated, LDEQ has committed to adopt additional measures, if LDEQ determines that the violation is caused by sources within the State, and to implement those measures as expeditiously as practicable, but no later than 24 months following the trigger. The contingency measures identified for possible implementation include, but are not limited to, the following:

    • Extending the applicability of the state's NOX control rule in LAC 33:III.2202 to include the months of April and October each year (currently Chapter 22 applies from May 1 to September 30). This would assist in reducing incidences of high ozone days in the BRNA. In addition, the state will consider other measures such as lowering the NOX emission factors of LAC 33:III.2205.D and/or requiring more stringent monitoring of elevated flares, as well as measures targeting the following:

    • Diesel retrofit/replacement initiatives;

    • Programs or incentives to decrease motor vehicle use;

    • Implementation of fuel programs, including incentives for alternative fuels;

    • Employer-based transportation management plans;

    • Anti-backsliding ordinances; and

    • Programs to limit or restrict vehicle use in areas of high emissions concentration during periods of peak use.

    Given the substantial amount of industrial emissions in the BRNA, and the fact that the Area's ozone problem is mostly driven by NOX emissions, these potential contingency measures would be appropriate for adequately correcting an attainment problem.

    EPA proposes to conclude that the maintenance plan adequately addresses the five basic components of a maintenance plan: the attainment emissions inventory, maintenance demonstration, monitoring, verification of continued attainment, and a contingency plan. Therefore, EPA proposes that the maintenance plan SIP revision submitted by Louisiana for the BRNA meets the requirements of section 175A of the CAA and is approvable.

    VI. What is EPA's analysis of Louisiana's proposed NOX and VOC MVEBs for the Baton Rouge Area?

    Under section 176(c) of the CAA, new transportation plans, programs, and projects, such as the construction of new highways, must “conform” to (i.e., be consistent with) the part of the state's air quality plan that addresses pollution from cars and trucks. Conformity to the SIP means that transportation activities will not cause new air quality violations, worsen existing violations, or delay timely attainment of the NAAQS or any interim milestones. If a transportation plan does not conform, most new projects that would expand the capacity of roadways cannot go forward. Regulations at 40 CFR part 93 set forth EPA policy, criteria, and procedures for demonstrating and assuring conformity of such transportation activities to a SIP. The regional emissions analysis is one, but not the only, requirement for implementing transportation conformity. Transportation conformity is a requirement for nonattainment and maintenance areas. Maintenance areas are areas that were previously nonattainment for a particular NAAQS but have since been redesignated to attainment with an approved maintenance plan for that NAAQS.

    Under the CAA, states are required to submit, at various times, control strategy SIPs and maintenance plans for nonattainment areas. These control strategy SIPs, including maintenance plans, create MVEBs for criteria pollutants and/or their precursors to address pollution from cars and trucks. Per 40 CFR part 93, a MVEB must be established for the last year of the maintenance plan. A state may adopt MVEBs for other years as well. The MVEB is the portion of the total allowable emissions in the maintenance demonstration that is allocated to highway and transit vehicle use and emissions. See 40 CFR 93.101. The MVEB serves as a ceiling on emissions from an area's planned transportation system. The MVEB concept is further explained in the preamble to the November 24, 1993, Transportation Conformity Rule (58 FR 62188). The preamble also describes how to establish the MVEB in the SIP and how to revise the MVEB.

    As part of the interagency consultation process on setting MVEBs, LDEQ held discussions to determine what years to set MVEBs for the BRNA maintenance plan. According to the transportation conformity rule, a maintenance plan must establish MVEBs for the last year of the maintenance plan (in this case, 2027). See 40 CFR 93.118. Louisiana also provided MVEBs for 2022. Table 3 below provides the NOX and VOC MVEBs in tpd for 2022 and 2027, as reflected in Section 9, Tables 9.1 and 9.2 of the State's submittal.

    Table 3—Baton Rouge MVEBs [tpd]

    Year    NOX      VOC
    2022    14.37    13.19
    2027    10.95    11.55

    Through this rulemaking, EPA is proposing to approve the MVEBs for NOX and VOC for 2022 and 2027 for the Baton Rouge Area because EPA believes that the Area maintains the 2008 ozone NAAQS with the emissions at the levels of the budgets. Once the MVEBs for the BRNA are approved, they must be used for future conformity determinations.
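    For illustration only, the role of the budgets can be sketched as a simple comparison: a regional emissions analysis is consistent with the budgets only if projected on-road emissions in the analysis year are at or below the applicable MVEBs from Table 3. The projected on-road figures used below are hypothetical, and 40 CFR part 93 sets the actual conformity criteria and procedures.

        # Illustrative budget comparison using the MVEBs from Table 3 (tpd); the
        # projected on-road emissions are hypothetical, and 40 CFR part 93 governs.
        MVEBS_TPD = {2022: {"NOX": 14.37, "VOC": 13.19},
                     2027: {"NOX": 10.95, "VOC": 11.55}}

        def within_budgets(analysis_year, projected_onroad_tpd):
            budgets = MVEBS_TPD[analysis_year]
            return all(projected_onroad_tpd[p] <= budgets[p] for p in budgets)

        print(within_budgets(2027, {"NOX": 10.2, "VOC": 11.0}))  # True
        print(within_budgets(2027, {"NOX": 11.4, "VOC": 11.0}))  # False: NOX exceeds the budget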

    VII. What is the status of EPA's adequacy determination for the proposed NOX and VOC MVEBs for the BRNA?

    EPA found the BRNA MVEBs adequate for transportation conformity purposes, effective July 14, 2016. See 81 FR 42350 (June 29, 2016). The MVEBs must be used by state and Federal agencies in determining whether proposed transportation projects conform to the SIP as required by section 176(c) of the CAA.

    EPA's substantive criteria for determining adequacy of a MVEB are set out in 40 CFR 93.118(e)(4). The process for determining adequacy consists of three basic steps: public notification of a SIP submission, a public comment period, and EPA's adequacy determination. This process for determining the adequacy of submitted MVEBs for transportation conformity purposes was initially outlined in EPA's May 14, 1999, guidance, “Conformity Guidance on Implementation of March 2, 1999, Conformity Court Decision.” EPA adopted regulations to codify the adequacy process in the Transportation Conformity Rule Amendments for the “New 8-Hour Ozone and PM2.5 National Ambient Air Quality Standards and Miscellaneous Revisions for Existing Areas; Transportation Conformity Rule Amendments—Response to Court Decision and Additional Rule Change,” on July 1, 2004 (69 FR 40004). Additional information on the adequacy process for transportation conformity purposes is available in the proposed rule entitled, “Transportation Conformity Rule Amendments: Response to Court Decision and Additional Rule Changes,” 68 FR 38974, 38984 (June 30, 2003).

    VIII. What is the effect of EPA's proposed actions?

    EPA's proposed actions establish the basis upon which EPA may take final action on the issues being proposed for approval today. Approval of Louisiana's redesignation request would change the legal designation of the BRNA, as found at 40 CFR part 81, from nonattainment to attainment for the 2008 ozone NAAQS. Approval of Louisiana's associated SIP revision would also incorporate into the SIP a plan for maintaining the 2008 ozone NAAQS in the BRNA through 2027. This maintenance plan includes contingency measures to remedy any future violations of the 2008 ozone NAAQS and procedures for evaluation of potential violations. The maintenance plan also establishes NOx and VOC MVEBs for 2022 and 2027 for the Baton Rouge Area. The MVEBs are listed in Table 3 in section VI. Additionally, EPA is notifying the public of the status of EPA's adequacy determination for the newly-established NOx and VOC MVEBs for 2022 and 2027 for the Baton Rouge Area.

    IX. Proposed Actions

    EPA is proposing three separate but related actions regarding the redesignation and maintenance of the 2008 ozone NAAQS for the BRNA. EPA is proposing to determine that the BRNA is attaining the 2008 ozone NAAQS. EPA is also proposing to approve the maintenance plan for the BRNA, including the NOX and VOC MVEBs for 2022 and 2027, into the Louisiana SIP (under CAA section 175A). The maintenance plan demonstrates that the Area will continue to maintain the 2008 ozone NAAQS through 2027 and that the budgets meet all of the adequacy criteria contained in 40 CFR 93.118(e)(4) and (5). Further, as part of today's action, EPA is describing the status of its adequacy determination for the NOX and VOC MVEBs for 2022 and 2027 in accordance with 40 CFR 93.118(f)(2). Within 24 months from the effective date of EPA's adequacy determination for the MVEBs or the publication date for the final rule for this action, whichever is earlier, the transportation partners will need to demonstrate conformity to the new NOX and VOC MVEBs pursuant to 40 CFR 93.104(e)(3).
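
    For illustration only, the “whichever is earlier” trigger for the 24-month conformity deadline can be sketched as a short date computation. The adequacy effective date below is the July 14, 2016 date discussed in section VII; the final rule publication date used in the example is a hypothetical placeholder.

        from datetime import date

        def add_months(d, months):
            # Same day-of-month `months` later, clamped to the end of shorter months.
            y, m = divmod(d.month - 1 + months, 12)
            year, month = d.year + y, m + 1
            leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
            last = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
            return date(year, month, min(d.day, last))

        def conformity_deadline(adequacy_effective, final_rule_published):
            # 24 months from the earlier of the two trigger dates (40 CFR 93.104(e)(3)).
            return add_months(min(adequacy_effective, final_rule_published), 24)

        # Adequacy effective date from this notice; the publication date is hypothetical.
        print(conformity_deadline(date(2016, 7, 14), date(2017, 1, 15)))  # 2018-07-14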

    Additionally, EPA is proposing to determine that the BRNA has met the criteria under CAA section 107(d)(3)(E) for redesignation from nonattainment to attainment for the 2008 ozone NAAQS. On this basis, EPA is proposing to approve Louisiana's redesignation request for the BRNA. If finalized, approval of the redesignation request would change the official designation of the BRNA, as found at 40 CFR part 81, from nonattainment to attainment for the 2008 ozone NAAQS.

    X. Statutory and Executive Order Reviews

    Under the CAA, redesignation of an area to attainment and the accompanying approval of a maintenance plan under section 107(d)(3)(E) are actions that affect the status of a geographical area and do not impose any additional regulatory requirements on sources beyond those imposed by state law. A redesignation to attainment does not in and of itself create any new requirements, but rather results in the applicability of requirements contained in the CAA for areas that have been redesignated to attainment. Moreover, the Administrator is required to approve a SIP submission that complies with the provisions of the Act and applicable Federal regulations. See 42 U.S.C. 7410(k); 40 CFR 52.02(a). Thus, in reviewing SIP submissions, EPA's role is to approve state choices, provided that they meet the criteria of the CAA. Accordingly, these proposed actions merely propose to approve state law as meeting Federal requirements and do not impose additional requirements beyond those imposed by state law. For this reason, these proposed actions:

    • Are not a significant regulatory action subject to review by the Office of Management and Budget under Executive Orders 12866 (58 FR 51735, October 4, 1993) and 13563 (76 FR 3821, January 21, 2011);

    • do not impose an information collection burden under the provisions of the Paperwork Reduction Act (44 U.S.C. 3501 et seq.);

    • are certified as not having a significant economic impact on a substantial number of small entities under the Regulatory Flexibility Act (5 U.S.C. 601 et seq.);

    • do not contain any unfunded mandate or significantly or uniquely affect small governments, as described in the Unfunded Mandates Reform Act of 1995 (Public Law 104-4);

    • do not have Federalism implications as specified in Executive Order 13132 (64 FR 43255, August 10, 1999);

    • are not economically significant regulatory actions based on health or safety risks subject to Executive Order 13045 (62 FR 19885, April 23, 1997);

    • are not significant regulatory actions subject to Executive Order 13211 (66 FR 28355, May 22, 2001);

    • are not subject to requirements of section 12(d) of the National Technology Transfer and Advancement Act of 1995 (15 U.S.C. 272 note) because application of those requirements would be inconsistent with the CAA; and

    • do not provide EPA with the discretionary authority to address, as appropriate, disproportionate human health or environmental effects, using practicable and legally permissible methods, under Executive Order 12898 (59 FR 7629, February 16, 1994).

    In addition, the SIP is not approved to apply on any Indian reservation land or in any other area where EPA or an Indian tribe has demonstrated that a tribe has jurisdiction. In those areas of Indian country, the proposed rule does not have tribal implications and will not impose substantial direct costs on tribal governments or preempt tribal law as specified by Executive Order 13175 (65 FR 67249, November 9, 2000).

    List of Subjects

    40 CFR Part 52

    Environmental protection, Air pollution control, Incorporation by reference, Intergovernmental relations, Nitrogen dioxide, Ozone, Reporting and recordkeeping requirements, Volatile organic compounds.

    40 CFR Part 81

    Environmental protection, Air pollution control.

    Authority:

    42 U.S.C. 7401 et seq.

    Dated: October 27, 2016. Samuel Coleman, Acting Regional Administrator, Region 6.
    [FR Doc. 2016-26584 Filed 11-3-16; 8:45 am] BILLING CODE 6560-50-P
    DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare & Medicaid Services 42 CFR Part 494 [CMS-3334-P] RIN 0938-AS94 Medicare and Medicaid Programs; Fire Safety Requirements for Certain Dialysis Facilities AGENCY:

    Centers for Medicare & Medicaid Services (CMS), HHS.

    ACTION:

    Proposed rule.

    SUMMARY:

    This proposed rule would update fire safety standards for Medicare and Medicaid participating end-stage renal disease (ESRD) facilities. It would adopt the 2012 edition of the Life Safety Code, eliminate references in our regulations to all earlier editions of the Life Safety Code, and adopt the 2012 edition of the Health Care Facilities Code, with some exceptions.

    DATES:

    To be assured consideration, comments must be received at one of the addresses provided below, no later than 5 p.m. on January 3, 2017.

    ADDRESSES:

    In commenting, please refer to file code CMS-3334-P. Because of staff and resource limitations, we cannot accept comments by facsimile (FAX) transmission.

    You may submit comments in one of four ways (please choose only one of the ways listed):

    1. Electronically. You may submit electronic comments on this regulation to http://www.regulations.gov. Follow the “Submit a comment” instructions.

    2. By regular mail. You may mail written comments to the following address ONLY: Centers for Medicare & Medicaid Services, Department of Health and Human Services, Attention: CMS-3334-P, P.O. Box 8010, Baltimore, MD 21244-8010.

    Please allow sufficient time for mailed comments to be received before the close of the comment period.

    3. By express or overnight mail. You may send written comments to the following address ONLY: Centers for Medicare & Medicaid Services, Department of Health and Human Services, Attention: CMS-3334-P, Mail Stop C4-26-05, 7500 Security Boulevard, Baltimore, MD 21244-1850.

    4. By hand or courier. Alternatively, you may deliver (by hand or courier) your written comments ONLY to the following addresses prior to the close of the comment period:

    a. For delivery in Washington, DC—Centers for Medicare & Medicaid Services, Department of Health and Human Services, Room 445-G, Hubert H. Humphrey Building, 200 Independence Avenue SW., Washington, DC 20201

    (Because access to the interior of the Hubert H. Humphrey Building is not readily available to persons without Federal government identification, commenters are encouraged to leave their comments in the CMS drop slots located in the main lobby of the building. A stamp-in clock is available for persons wishing to retain a proof of filing by stamping in and retaining an extra copy of the comments being filed.)

    b. For delivery in Baltimore, MD—Centers for Medicare & Medicaid Services, Department of Health and Human Services, 7500 Security Boulevard, Baltimore, MD 21244-1850.

    If you intend to deliver your comments to the Baltimore address, call telephone number (410) 786-9994 in advance to schedule your arrival with one of our staff members.

    Comments erroneously mailed to the addresses indicated as appropriate for hand or courier delivery may be delayed and received after the comment period.

    For information on viewing public comments, see the beginning of the SUPPLEMENTARY INFORMATION section.

    FOR FURTHER INFORMATION CONTACT:

    Kristin Shifflett, (410) 786-4133.

    SUPPLEMENTARY INFORMATION:

    Inspection of Public Comments: All comments received before the close of the comment period are available for viewing by the public, including any personally identifiable or confidential business information that is included in a comment. We post all comments received before the close of the comment period on the following Web site as soon as possible after they have been received: http://www.regulations.gov. Follow the search instructions on that Web site to view public comments.

    Comments received timely will also be available for public inspection as they are received, generally beginning approximately 3 weeks after publication of a document, at the headquarters of the Centers for Medicare & Medicaid Services, 7500 Security Boulevard, Baltimore, Maryland 21244, Monday through Friday of each week from 8:30 a.m. to 4 p.m. To schedule an appointment to view public comments, phone 1-800-743-3951.

    I. Background A. Overview

    The Life Safety Code (LSC) is a compilation of fire safety requirements for new and existing buildings, and is updated and published every 3 years by the National Fire Protection Association (NFPA), a private, nonprofit organization dedicated to reducing loss of life due to fire. The Medicare and Medicaid regulations have historically incorporated these requirements by reference, along with Secretarial waiver authority. The statutory basis for incorporating NFPA's LSC into the regulations we apply to Medicare and, as applicable, Medicaid providers and suppliers is the Secretary of the Department of Health and Human Services' (the Secretary) authority to stipulate health, safety and other regulations for each type of Medicare and (if applicable) Medicaid-participating facility. Specifically, section 1881(b)(1)(A) of the Social Security Act (the Act) provides for payments for “providers of services and renal dialysis facilities which meet such requirements as the Secretary shall by regulation prescribe for institutional dialysis services and supplies . . . . ” Under this statutory authority, the Secretary has set out “Conditions for Coverage,” including LSC compliance requirements, at 42 CFR part 494, subpart B. Our current LSC provisions are set out at § 494.60(e).

    In implementing the LSC provisions, we have given ourselves the discretion to waive specific provisions of the LSC for facilities if application of our rules would result in unreasonable hardship for the facility, and if the health and safety of its patients would not be compromised by such waiver. For dialysis facilities, that authority is set out at § 494.60(e)(4). In addition, the Secretary may accept a State's fire and safety code instead of the LSC if the Centers for Medicare & Medicaid Services (CMS) determines that the protections of the State's fire and safety code are equivalent to, or more stringent than, the protections offered by the LSC; dialysis facility provisions to that effect are set out at § 494.60(e)(3). These flexibilities mitigate the potential unnecessary burdens of applying the requirements of the LSC to all affected health care facilities.

    On May 12, 2012, we published a final rule in the Federal Register, entitled “Medicare and Medicaid Program; Regulatory Provisions to Promote Program Efficiency, Transparency, and Burden Reduction” (77 FR 29002). In that final rule, we limited the application of LSC requirements to dialysis facilities located adjacent to industrial high hazard areas and to those that did not provide one or more exits to the outside at grade level from the patient treatment area level. However, we inadvertently neglected to include updated provisions for dialysis facilities in our proposed update to the Life Safety Code provisions for CMS providers and suppliers, “Medicare and Medicaid Programs; Fire Safety Requirements for Certain Health Care Facilities; Proposed Rule” (79 FR 21552, April 16, 2014). Therefore, we are proposing these provisions now, with some modifications to address the unique needs of dialysis facilities. The proposed update would apply only to dialysis facilities that do not provide one or more exits to the outside at grade level from the treatment area level (for instance, in upper floors of a mid-rise or high-rise building). We would not require other dialysis facilities to comply with NFPA 99® 2012 edition of the Health Care Facilities Code (NFPA 99) and NFPA 101® 2012 edition of the Life Safety Code (NFPA 101) because we believe that patients in dialysis facilities are generally capable of unhooking themselves from dialysis machines and self-evacuating without additional assistance in the event of an emergency. We believe that in all facilities with at-grade exits, patients would be able to evacuate the building in a timely fashion. Consequently, we believe that state and local requirements are sufficient to protect these patients and staff in the event of an emergency. In accordance with NFPA 101 sections 20.1.3.7 and 21.1.3.7, we would prohibit Medicare-approved dialysis facilities from being located adjacent to industrial high hazard facilities. “Adjacent to” is defined as sharing a wall, ceiling, or floor with such a facility.

    Defining “Exit to the Outside at Grade Level From the Patient Treatment Area Level”

    The phrase “exit to the outside at grade level from the patient treatment area level” applies to dialysis facilities that are on the ground or grade level of a building where patients do not have to traverse up or down stairways within the building to evacuate to the outside. Accessibility ramps in the exit area that provide an ease of access between the patient treatment level and the outside ground level are not considered stairways.

    A dialysis facility which provides one or more exits to the outside at grade level from patient treatment level and which has a patient exit path to the outside (which may include an accessibility ramp that is compliant with NFPA and the Americans with Disabilities Act (ADA)) would be exempt from compliance with the applicable provisions of NFPA 99 and NFPA 101.

    II. Provisions of the Proposed Regulations

    In this rule, we are proposing to update our requirements for dialysis facilities that do not provide one or more exits to the outside at grade level from the patient treatment area level, by incorporating by reference the 2012 edition of NFPA 101 and NFPA 99. These facilities are already required to meet the 2000 edition of the LSC; other provider types affected by the LSC are now required to meet the 2012 edition of the NFPA 101 and the NFPA 99 (LSC final rule published May 4, 2016 at 81 FR 26872). The 2012 edition of the LSC includes new provisions that we believe are vital to the health and safety of all patients and staff. Our intention is to ensure that patients and staff continue to experience the highest degree of fire safety possible.

    The NFPA 101 2012 edition of the LSC provides minimum requirements, with due regard to function, for the design, operation and maintenance of buildings and structures for safety to life from fire. Its provisions also aid life safety in similar emergencies.

    The NFPA 99 2012 edition of the Health Care Facilities Code provides minimum requirements for health care facilities for the installation, inspection, testing, maintenance, performance, and safe practices for facilities, material, equipment, and appliances.

    B. 2012 Edition of the Life Safety Code

    The 2012 edition of the LSC includes new provisions that we believe are vital to the health and safety of all patients and staff. Our intention is to ensure that patients and staff continue to experience the highest degree of fire safety possible. We do review each edition of the NFPA 101 and NFPA 99 every 3 years to see if there are any significant provisions that we need to adopt. CMS will continue to review revisions to ensure we meet proper standards for patient safety. We have reviewed the 2015 edition of the NFPA 101 and NFPA 99 and do not believe that there are any significant provisions that need to be addressed at this time. Newer buildings are typically built to comply with the newer versions of the LSC because state and local jurisdictions often adopt and enforce newer versions of the LSC as they become available.

    CMS must emphasize that the LSC is not an accessibility code, and compliance with the LSC does not ensure compliance with the requirements of the ADA. State and local government programs and services, including health care facilities, are required to comply with Title II of the ADA. Private entities that operate public accommodations such as nursing homes, hospitals, and social service center establishments are required to comply with Title III of the ADA. Entities that receive federal financial assistance from the Department of Health and Human Services, including Medicare and Medicaid, are also required to comply with section 504 of the Rehabilitation Act of 1973. The same accessibility standards apply regardless of whether health care facilities are covered under Title II or Title III of the ADA or section 504 of the Rehabilitation Act of 1973.1 For more information about the ADA's requirements, see the Department of Justice's Web site at http://www.ada.gov or call 1-800-514-0301 (voice) or 1-800-514-0383 (TTY).

    1 Facilities newly constructed or altered after March 15, 2012 must comply with the 2010 Standards for Accessible Design (2010 Standards). Facilities newly constructed or altered between September 15, 2010 and March 15, 2012 had the option of complying with either the 1991 Standards for Accessible Design (1991 Standards) or the 2010 Standards. Facilities newly constructed between January 26, 1993 and September 15, 2010, or altered between January 26, 1992 and September 15, 2010 were required to comply with the 1991 Standards under Title III and either the 1991 Standards or the Uniform Federal Accessibility Standards under Title II.
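
    For illustration only, the date regimes summarized in footnote 1 can be read as a small lookup. The sketch below is a simplified, hypothetical aid (it omits the pre-1992/1993 construction and alteration start dates), not regulatory guidance.

        from datetime import date

        def applicable_design_standards(work_date, ada_title):
            # Simplified reading of footnote 1; `ada_title` is "II" or "III".
            if work_date > date(2012, 3, 15):
                return ["2010 Standards"]
            if date(2010, 9, 15) <= work_date <= date(2012, 3, 15):
                return ["1991 Standards", "2010 Standards"]  # either option
            if ada_title == "II":
                return ["1991 Standards", "Uniform Federal Accessibility Standards"]
            return ["1991 Standards"]

        print(applicable_design_standards(date(2013, 6, 1), "III"))  # ['2010 Standards']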

    C. Incorporation by Reference

    This proposed rule would incorporate by reference the NFPA 101® 2012 edition of the LSC, issued August 11, 2011, and all Tentative Interim Amendments issued prior to April 16, 2014; and the NFPA 99® 2012 edition of the Health Care Facilities Code, issued August 11, 2011, and Tentative Interim Amendments issued prior to April 16, 2014 in § 494.60(g).

    These materials have been previously incorporated by reference for other provider types by a final rule titled “Medicare and Medicaid Programs; Fire Safety Requirements for Certain Health Care Facilities” published on May 4, 2016 (81 FR 26872).

    The materials that are incorporated by reference are available for inspection by interested parties at the CMS Information Resource Center, 7500 Security Boulevard, Baltimore, MD 21244, or from the National Fire Protection Association, 1 Batterymarch Park, Quincy, MA 02269. If any changes to this edition of the Code are incorporated by reference, CMS will publish a document in the Federal Register to announce those changes.

    D. Ambulatory Health Care Occupancies

    According to our memorandum, “Survey & Certification: 13-47-LSC/ESRD,” issued July 12, 2013, dialysis facilities that are subject to the LSC provisions must meet the requirements of the Ambulatory Health Care Occupancy chapters 20 and 21 of the LSC. Dialysis facilities that are not subject to our LSC regulations must continue to meet State and local fire codes. (See https://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/SurveyCertificationGenInfo/Downloads/Survey-and-Cert-Letter-13-47.pdf.)

    The following are key provisions in the 2012 edition of the LSC from Chapter 20, “New Ambulatory Health Care Occupancies” and Chapter 21, “Existing Ambulatory Health Care Occupancies.” We have provided the LSC citation and a description of the requirement.

    The 2012 edition of the LSC defines an “Ambulatory Health Care Occupancy” as a facility capable of treating 4 or more patients simultaneously on an outpatient basis. We believe that dialysis facilities that do not provide one or more exits to the outside at grade level from the patient treatment area should also be required to meet the provisions applicable to Ambulatory Health Care Occupancy Chapters, regardless of the number of patients served, as a matter of health and safety of patients receiving services in these facilities. In the burden reduction final rule, published in the Federal Register on May 12, 2012 entitled, “Medicare and Medicaid Program; Regulatory Provisions to Promote Program Efficiency, Transparency, and Burden Reduction” (77 FR 29002), we removed the provision's applicability to dialysis facilities with at-grade exits directly from the treatment area because, in our view, there was, and continues to be, an extremely low risk of fire in dialysis facilities. Medicare-approved dialysis facilities that provide exits to the outside at grade level would continue to be required to follow State and local fire codes, which we believe provide for sufficient patient protection in the event of an emergency. If a facility's exits were located above or below grade, patients would require more time to evacuate. Consequently, we believe that the LSC would still be required due to the additional risk entailed in longer exit times.

    Sections 20.3.2.1 and 21.3.2.1—Doors

    This provision requires that all doors to hazardous areas be self-closing or close automatically.

    Sections 20.3.2.6 and 21.3.2.6—Alcohol Based Hand Rubs

    This provision explicitly allows aerosol dispensers, in addition to gel hand rub dispensers. The aerosol dispensers are subject to limitations on size, quantity, and location, just as gel dispensers are limited. Automatic dispensers are also now permitted in ambulatory care facilities, provided, among other things, that—(1) they do not release contents unless they are activated; (2) the activation occurs only when an object is within 4 inches of the sensing device; (3) any object placed in the activation zone and left in place must not cause more than one activation; (4) the dispenser must not dispense more than the amount required for hand hygiene consistent with the label instructions; (5) the dispenser is designed, constructed and operated in a way to minimize accidental or malicious dispensing; and (6) all dispensers are tested in accordance with the manufacturer's care and use instructions each time a new refill is installed. The provision further defines prior language regarding “above or adjacent to an ignition source” as being “within 1 inch” of the ignition source.
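
    For illustration only, the six dispenser conditions can be read as a checklist. The sketch below is hypothetical; the field names are illustrative inventions, not NFPA terminology.

        def dispenser_conditions_met(d):
            # Each entry mirrors one of the numbered conditions described above.
            return all([
                not d["releases_when_not_activated"],           # (1)
                d["sensing_range_inches"] <= 4,                 # (2)
                d["activations_per_object_left_in_zone"] <= 1,  # (3)
                d["dose_ml"] <= d["label_dose_ml"],             # (4)
                d["minimizes_accidental_dispensing"],           # (5)
                d["tested_on_each_refill"],                     # (6)
            ])

        example = {  # hypothetical dispenser, for illustration only
            "releases_when_not_activated": False, "sensing_range_inches": 4,
            "activations_per_object_left_in_zone": 1, "dose_ml": 1.2,
            "label_dose_ml": 1.5, "minimizes_accidental_dispensing": True,
            "tested_on_each_refill": True,
        }
        print(dispenser_conditions_met(example))  # True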

    Sections 20.3.5 and 21.3.5—Extinguishment Requirements

    This provision relates to sprinkler system requirements. When a sprinkler system is out of service for more than 10 hours in a 24-hour period, the facility must either evacuate the building, or the portion of the building affected by the outage, or establish an approved fire watch, until the system has been returned to service.
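
    For illustration only, the 10-hour threshold reduces to a simple decision, sketched here with hypothetical wording.

        def sprinkler_outage_action(hours_out_in_24h_window):
            # Decision described for sections 20.3.5 and 21.3.5 above.
            if hours_out_in_24h_window > 10:
                return ("Evacuate the building or the affected portion, or establish "
                        "an approved fire watch, until the system is back in service.")
            return "No evacuation or fire watch is required by this provision."

        print(sprinkler_outage_action(12))
        print(sprinkler_outage_action(6))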

    E. 2012 Edition of the Health Care Facilities Code

    The 2012 edition of the NFPA 99, “Health Care Facilities Code,” addresses requirements for both health care occupancies and ambulatory care occupancies, and serves as a resource for those who are responsible for protecting health care facilities from fire and associated hazards. The purpose of this Code is to provide minimum requirements for the installation, inspection, testing, maintenance, performance, and safe practices for health care facility materials, equipment and appliances. This Code is a compilation of documents that have been developed over a 40-year period by NFPA, and is intended to be used by those persons involved in the design, construction, inspection, and operation of health care facilities, and in the design, manufacture, and testing of appliances and equipment used in patient care areas of health care facilities. It provides information on subjects, for example, medical gas and vacuum systems, electrical systems, electrical equipment, and gas equipment. The NFPA 99 applies specific requirements in accordance with the results of a risk-based assessment methodology. A risk-based approach allows for the application of requirements based upon the types of treatment and services being provided to patients or residents rather than the type of facility in which they are being performed. In order to ensure the minimum level of protection afforded by NFPA 99 is applicable to all patient and resident care areas within a health care facility, we are proposing adoption of the 2012 edition of NFPA 99, with the exception of chapters 7—“Information Technology and Communications Systems for Health Care Facilities”; 8—“Plumbing”; 12—“Emergency Management”; and 13—“Security Management”. The first three chapters of the NFPA 99 address the administration of the NFPA 99, the referenced publications, and definitions. Short descriptions of some of the more important provisions of NFPA 99 follow:

    Chapter 4—Fundamentals

    Chapter 4 provides guidance on how to apply NFPA 99 requirements to health care facilities based upon “categories” determined when using a risk-based methodology.

    There are four categories utilized in the risk assessment methodology, depending on the types of treatment and services being provided to patients or residents. Section 4.1.1 of NFPA 99 describes Category 1 as, “Facility systems in which failure of such equipment or system is likely to cause major injury or death of patients or caregivers. . . .” Section A.4.1.1 provides examples of what a major injury could include, such as amputation or a burn to the eye. Section 4.1.2 describes Category 2 as, “Facility systems in which failure of such equipment is likely to cause minor injury to patients or caregivers. . . .” Section A.4.1.2 describes a minor injury as one that is not serious or involving risk of life. Section 4.1.3 describes Category 3 as, “Facility systems in which failure of such equipment is not likely to cause injury to patients or caregivers, but can cause patient discomfort. . . .” Section 4.1.4 describes Category 4 as, “Facility systems in which failure of such equipment would have no impact on patient care. . . .”
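
    For illustration only, the category definitions amount to a mapping from the likely consequence of a system failure to a category number; the labels below paraphrase the NFPA 99 text quoted above.

        # Illustrative mapping, paraphrasing sections 4.1.1 through 4.1.4.
        CONSEQUENCE_TO_CATEGORY = {
            "major injury or death": 1,
            "minor injury": 2,
            "patient discomfort only": 3,
            "no impact on patient care": 4,
        }

        def risk_category(consequence):
            return CONSEQUENCE_TO_CATEGORY[consequence]

        print(risk_category("minor injury"))  # 2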

    Section 4.2 requires that each facility that is a health care or ambulatory occupancy define its risk assessment methodology, implement the methodology, and document the results. We are not proposing to require the use of any particular risk assessment procedure. However, if future situations indicate the need to define a particular risk assessment procedure, we would pursue that through a separate notice and comment rulemaking.

    Chapter 5—Gas and Vacuum Systems

    The hazards addressed in Chapter 5 include the ability of oxygen and nitrous oxide to exacerbate fires, safety concerns from the storage and use of pressurized gas, and the reliance upon medical gas and vacuum systems for patient care. Chapter 5 does not mandate the installation of any systems; rather, if they are installed or are required to be installed, the systems will be required to comply with NFPA 99. Chapter 5 covers the performance, maintenance, installation, and testing of the following:

    • Nonflammable medical gas systems with operating pressure below a gauge pressure of 300 psi;

    • Vacuum systems in health care facilities;

    • Waste anesthetic gas disposal systems (WAGD); and

    • Manufactured assemblies that are intended for connection to the medical gas, vacuum, or WAGD systems.

    Chapter 6—Electrical Systems

    The hazards addressed in Chapter 6 are related to the electrical power distribution systems in health care facilities, and address issues such as electrical shock, power continuity, fire, electrocution, and explosions that might be caused by faults in the electrical system. Chapter 6 also covers the performance, maintenance, and testing of the electrical systems in health care facilities.

    Chapter 9—Heating, Ventilation, and Air Conditioning (HVAC)

    Chapter 9 requires HVAC systems serving spaces or providing health care functions to be in accordance with the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 170—Ventilation of Health Care Facilities (2008 edition) (http://www.ashrae.org).

    Chapter 9 does not apply to existing HVAC systems, but applies to the construction of new health care facilities, and the altered, renovated, or modernized portions of existing systems or individual components. Chapter 9 ensures minimum levels of heating, ventilation and air conditioning performance in patient and resident care areas. Some of the issues discussed in Chapter 9 are as follows:

    • HVAC system energy conservation;

    • Commissioning;

    • Piping;

    • Ductwork;

    • Acoustics;

    • Requirements for the ventilation of medical gas storage and trans-filling areas;

    • Waste anesthetic gases;

    • Plumes from medical procedures;

    • Emergency power system rooms; and

    • Ventilation during construction.

    Chapter 10—Electrical Equipment

    Chapter 10 covers the performance, maintenance, and testing of electrical equipment in health care facilities. Much of this chapter applies to requirements for portable electrical equipment in health care facilities, but there are also requirements for fixed-equipment and information on administrative issues.

    Chapter 11—Gas Equipment

    The hazards addressed in Chapter 11 relate to general fire, explosions, and mechanical issues associated with gas equipment, including compressed gas cylinders.

    Chapter 14—Hyperbaric Facilities

    Chapter 14 addresses the hazards associated with hyperbaric facilities in health care facilities, including electrical, explosive, implosive, as well as fire hazards. Chapter 14 sets forth minimum safeguards for the protection of patients and personnel administering hyperbaric therapy and procedures. Chapter 14 contains requirements for hyperbaric chamber manufacturers, hyperbaric facility designers, and personnel operating hyperbaric facilities. It also contains requirements related to construction of the hyperbaric chamber itself and the equipment used for supporting the hyperbaric chamber, as well as administration and maintenance. Many requirements in this chapter are applicable only to new construction and new facilities.

    Chapter 15—Features of Fire Protection

    Chapter 15 covers the performance, maintenance, and testing of fire protection equipment in health care facilities. Issues addressed in this chapter range from the use of flammable liquids in an operating room to special sprinkler protection. These fire protection requirements are independent of the risk-based approach, as they are applicable to all patient care areas in both new and existing facilities.

    Chapter 15 has several sections taken directly from the NFPA 101, including requirements for the following:

    • Construction and compartmentalization of health care facilities.

    • Laboratories.

    • Utilities.

    • Heating, ventilation and air conditioning systems.

    • Elevators.

    • Escalators.

    • Conveyors.

    • Rubbish Chutes.

    • Incinerators.

    • Laundry Chutes.

    • Fire detection, alarm and communication systems.

    • Automatic sprinklers and other extinguishing equipment.

    • Compact storage including mobile storage and maintenance.

    • Testing of water based fire protection systems.

    These sections have requirements for inspection, testing and maintenance which apply to all facilities, as well as specific requirements for existing systems and equipment that also apply to all facilities.

    These requirements would apply beginning 60 days after publication of the final rule in the Federal Register. We are soliciting comments on the proposed adoption of the 2012 NFPA 101 and the 2012 NFPA 99 for dialysis facilities that do not provide one or more exits to the outside at grade level from the treatment area level.

    III. Collection of Information Requirements

    This document does not impose information collection requirements, that is, reporting, recordkeeping or third-party disclosure requirements. Consequently, there is no need for review by the Office of Management and Budget under the authority of the Paperwork Reduction Act of 1995 (44 U.S.C. 3501 et seq.).

    IV. Response to Comments

    Because of the large number of public comments we normally receive on Federal Register documents, we are not able to acknowledge or respond to them individually. We will consider all comments we receive by the date and time specified in the DATES section of this preamble, and, when we proceed with a subsequent document, we will respond to the comments in the preamble to that document.

    V. Regulatory Impact Statement

    We have examined the impact of this rule as required by Executive Order 12866 on Regulatory Planning and Review (September 30, 1993), Executive Order 13563 on Improving Regulation and Regulatory Review (January 18, 2011), the Regulatory Flexibility Act (RFA) (September 19, 1980, Pub. L. 96-354), section 1102(b) of the Social Security Act, section 202 of the Unfunded Mandates Reform Act of 1995 (March 22, 1995; Pub. L. 104-4), Executive Order 13132 on Federalism (August 4, 1999), and the Congressional Review Act (5 U.S.C. 804(2)).

    Executive Orders 12866 and 13563 direct agencies to assess all costs and benefits of available regulatory alternatives and, if regulation is necessary, to select regulatory approaches that maximize net benefits (including potential economic, environmental, public health and safety effects, distributive impacts, and equity). A regulatory impact analysis (RIA) must be prepared for major rules with economically significant effects ($100 million or more in any 1 year). This rule does not reach the economic threshold and thus is not considered a major rule.

    The RFA requires agencies to analyze options for regulatory relief of small entities. For purposes of the RFA, small entities include small businesses, nonprofit organizations, and small governmental jurisdictions. Most hospitals and most other providers and suppliers are small entities, either by nonprofit status or by having revenues of less than $7.5 million to $38.5 million in any 1 year. Individuals and States are not included in the definition of a small entity. We are not preparing an analysis for the RFA because we have determined, and the Secretary certifies, that this proposed rule would not have a significant economic impact on a substantial number of small entities.

    In addition, section 1102(b) of the Social Security Act (the Act) requires us to prepare a regulatory impact analysis if a rule may have a significant impact on the operations of a substantial number of small rural hospitals. This analysis must conform to the provisions of section 603 of the RFA. For purposes of section 1102(b) of the Act, we define a small rural hospital as a hospital that is located outside of a Metropolitan Statistical Area for Medicare payment regulations and has fewer than 100 beds. We are not preparing an analysis for section 1102(b) of the Act because we have determined, and the Secretary certifies, that this proposed rule would not have a significant impact on the operations of a substantial number of small rural hospitals.

    Section 202 of the Unfunded Mandates Reform Act of 1995 also requires that agencies assess anticipated costs and benefits before issuing any rule whose mandates require spending in any 1 year of $100 million in 1995 dollars, updated annually for inflation. In 2016, that threshold is approximately $146 million. This rule will have no consequential effect on State, local, or tribal governments or on the private sector.
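
    For illustration only, the inflation update is an index ratio applied to the 1995 base amount. The index values below are assumed placeholders (not official figures) chosen solely to reproduce the approximately $146 million 2016 threshold stated above.

        BASE_THRESHOLD_1995 = 100_000_000
        PRICE_INDEX = {1995: 100.0, 2016: 146.0}  # assumed values, for illustration only

        def umra_threshold(year):
            return BASE_THRESHOLD_1995 * PRICE_INDEX[year] / PRICE_INDEX[1995]

        print(f"${umra_threshold(2016):,.0f}")  # $146,000,000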

    Executive Order 13132 establishes certain requirements that an agency must meet when it promulgates a proposed rule (and subsequent final rule) that imposes substantial direct requirement costs on State and local governments, preempts State law, or otherwise has Federalism implications. Since this regulation does not impose any costs on State or local governments, the requirements of Executive Order 13132 are not applicable.

    We do not know how many, if any, dialysis facilities would be affected by this adoption of the 2012 editions of the NFPA 101 and NFPA 99. However, we anticipate that the impact of this rule would be less than $1,000 for each facility, and only for facilities that are not already meeting the requirements of the 2012 editions of the NFPA 101 and NFPA 99. Twenty States have already adopted the 2012 editions, so facilities in those States are already following the 2012 requirements. In accordance with the provisions of Executive Order 12866, this regulation was reviewed by the Office of Management and Budget.

    List of Subjects in 42 CFR Part 494

    Health facilities, Incorporation by reference, Kidney diseases, Medicare, Reporting and recordkeeping requirements.

    For the reasons set forth in the preamble, the Centers for Medicare & Medicaid Services proposes to amend 42 CFR chapter IV as set forth below:

    PART 494—CONDITIONS FOR COVERAGE FOR END-STAGE RENAL DISEASE FACILITIES 1. The authority citation for part 494 continues to read as follows: Authority:

    Secs. 1102 and 1871 of the Social Security Act (42 U.S.C. 1302 and 1395hh).

    2. Amend § 494.60 by revising paragraphs (e)(1) and (4) and adding paragraphs (e)(5), (f), and (g) to read as follows:
    § 494.60 Condition: Physical environment.

    (e) * * *

    (1) Except as provided in paragraph (e)(2) of this section, dialysis facilities that do not provide one or more exits to the outside at grade level from the patient treatment area level must comply with provisions of the 2012 edition of the Life Safety Code of the National Fire Protection Association (NFPA 101 and Tentative Interim Amendments TIA 12-1, TIA 12-2, TIA 12-3, and TIA 12-4) applicable to Ambulatory Health Care Occupancies (which is incorporated by reference in paragraph (g) of this section), regardless of the number of patients served.

    (4) In consideration of a recommendation by the State survey agency or at the discretion of the Secretary, the Secretary may waive, for periods deemed appropriate, specific provisions of the Life Safety Code, which would result in unreasonable hardship upon an ESRD facility, but only if the waiver will not adversely affect the health and safety of the patients.

    (5) No dialysis facility may operate in a building that is adjacent to an industrial high hazard area, as described in sections 20.1.3.7 and 21.1.3.7 of the 2012 edition of the Life Safety Code of the National Fire Protection Association (NFPA 101), incorporated by reference in paragraph (g) of this section.

    (f) Standard: Building safety. (1) Dialysis facilities that do not provide one or more exits to the outside at grade level from the patient treatment area level must meet the applicable provisions of the 2012 edition of the Health Care Facilities Code of the National Fire Protection Association (NFPA 99 and Tentative Interim Amendments TIA 12-2, TIA 12-3, TIA 12-4, TIA 12-5, and TIA 12-6), regardless of the number of patients served.

    (2) A copy of the Code is available for inspection at the CMS Information Resource Center, 7500 Security Boulevard, Baltimore, MD.

    (3) Chapters 7, 8, 12, and 13 of the NFPA 99 2012 Health Care Facilities Code do not apply to a dialysis facility.

    (4) If application of the NFPA 99 would result in unreasonable hardship for the dialysis facility, CMS may waive specific provisions of the Health Care Facilities Code for such facility, but only if the waiver does not adversely affect the health and safety of patients.

    (g) Incorporation by reference. The standards incorporated by reference in this section are approved for incorporation by reference by the Director of the Office of the Federal Register in accordance with 5 U.S.C. 552(a) and 1 CFR part 51. You may inspect a copy at the CMS Information Resource Center, 7500 Security Boulevard, Baltimore, MD or at the National Archives and Records Administration (NARA). For information on the availability of this material at NARA, call 202-741-6030, or go to: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr_locations.html. If any changes in this edition of the Code are incorporated by reference, CMS will publish a document in the Federal Register to announce the changes.

    (1) National Fire Protection Association, 1 Batterymarch Park, Quincy, MA 02169, www.nfpa.org, 1-617-770-3000.

    (i) NFPA 99, Standard for Health Care Facilities Code of the National Fire Protection Association 99, 2012 edition, issued August 11, 2011.

    (ii) TIA 12-2 to NFPA 99, issued August 11, 2011.

    (iii) TIA 12-3 to NFPA 99, issued August 9, 2012.

    (iv) TIA 12-4 to NFPA 99, issued March 7, 2013.

    (v) TIA 12-5 to NFPA 99, issued August 1, 2013.

    (vi) TIA 12-6 to NFPA 99, issued March 3, 2014.

    (vii) NFPA 101, Life Safety Code, 2012 edition, issued August 11, 2011.

    (viii) TIA 12-1 to NFPA 101, issued August 11, 2011.

    (ix) TIA 12-2 to NFPA 101, issued October 30, 2012.

    (x) TIA 12-3 to NFPA 101, issued October 22, 2013.

    (xi) TIA 12-4 to NFPA 101, issued October 22, 2013.

    (2) [Reserved]

    Dated: September 7, 2016. Andrew M. Slavitt, Acting Administrator, Centers for Medicare & Medicaid Services. Dated: October 17, 2016. Sylvia M. Burwell, Secretary, Department of Health and Human Services.
    [FR Doc. 2016-26583 Filed 11-3-16; 8:45 am] BILLING CODE 4120-01-P
    DEPARTMENT OF THE INTERIOR Bureau of Land Management 43 CFR Part 8360 [LLCAC09400 L19200000.NU0000 XXXL1109RM LRORBX619900] Proposed Supplementary Rules for Fort Ord National Monument, California AGENCY:

    Bureau of Land Management, Interior.

    ACTION:

    Notice of proposed supplementary rules.

    SUMMARY:

    The California State Director of the Bureau of Land Management (BLM) is proposing to establish new supplementary rules related to dog management and public safety on public lands at Fort Ord National Monument (FONM), California.

    Furthermore, these proposed rules would clarify some of the existing restrictions that have been in place on the FONM since 1996. These proposed rules are consistent with the national monument proclamation of 2012 (i.e., Proclamation 8803), and the BLM's 2007 Resource Management Plan.

    DATES:

    Interested parties may submit written comments regarding the proposed supplementary rules until January 3, 2017.

    ADDRESSES:

    You may submit comments by mail, hand-delivery, or electronic mail. Mail: FONM Manager, BLM, Central Coast Field Office, 940 2nd Avenue, Marina, CA 93933. Electronic mail: [email protected].

    FOR FURTHER INFORMATION CONTACT:

    Eric Morgan, FONM Manager, Bureau of Land Management, Central Coast Field Office, 940 2nd Avenue, Marina, CA 93933, at (831) 582-2200, or [email protected]. Persons who use a telecommunications device for the deaf may call the Federal Relay Service at 1-800-877-8339 to contact the above individual during normal business hours. The Service is available 24 hours a day, seven days a week, to leave a message or question with the above individual. You will receive a reply during normal business hours.

    SUPPLEMENTARY INFORMATION: I. Public Comment Procedures

    You may mail or email comments to the Central Coast Field Office, at the addresses listed above (See ADDRESSES). Written comments on the proposed supplementary rules should be specific and confined to issues pertinent to the proposed rules, and should explain the reason for any recommended change. Where possible, comments should reference the specific section or paragraph of the proposal that the commenter is addressing. The BLM is not obligated to consider or include, in the Administrative Record for the final supplementary rules, comments delivered to an address other than those listed above (See ADDRESSES) or comments that the BLM receives after the close of the comment period (See DATES), unless they are postmarked or electronically dated before the deadline.

    Comments, including names, street addresses, and other contact information for respondents, will be available for public review at 940 2nd Avenue, Marina, CA 93933, during regular business hours (7:30 a.m. to 4 p.m., Monday through Friday, excluding Federal holidays). Before including your address, phone number, email address, or other personal identifying information in your comment, you should be aware that your comment—including your personal identifying information—may be made publicly available at any time. While you can ask us in your comment to withhold your personal identifying information from public review, we cannot guarantee that we will be able to do so.

    II. Background

    The BLM California State Director is proposing to establish new supplementary rules related to dog management and other public safety issues for public lands on the FONM in Monterey County, California. Furthermore, the State Director is supplementing some of the existing land restrictions that have been in place on the monument since December 5, 1996 (61 FR 64530), that are consistent with the national monument proclamation of 2012 (i.e., Proclamation 8803), and the BLM's 2007 Resource Management Plan. The proposed supplementary rules are necessary to support the mission of the BLM by protecting the natural resources and enhancing the health and safety of those using and enjoying the public lands.

    These proposed rules would implement restrictions prescribed within the FONM Dog Management Plan that was approved on July 5, 2016. The plan was analyzed under environmental assessment DOI-BLM-CA-C090-2016-0021-EA (Fort Ord National Monument Dog Management Plan), and associated Decision Record and Finding of No Significant Impact. The plan considered various dog management prescriptions across the monument within four different planning units. One of the planning units, the Inland Range Planning Unit, contains extremely hazardous military munitions and public use opportunities are greatly limited.

    III. Discussion of Proposed Supplementary Rules

    When the former Fort Ord military installation closed in 1994, the Secretary of the Army transferred administration of approximately 7,205 acres to the BLM via a letter of transfer to the Secretary of the Interior on October 18, 1996. Those lands are now part of the 14,651-acre FONM that was designated by President Obama under Proclamation 8803. The Army currently manages approximately 7,446 acres of the FONM and will transfer those lands to the BLM for administration following a munitions cleanup being performed under the Comprehensive Environmental Response, Compensation, and Liability Act.

    The BLM issued a notice of emergency closure and established restrictions on use of public lands on the former Fort Ord on December 5, 1996 (61 FR 64530). Since that time, the BLM has applied those restrictions as they pertain to public use, but those restrictions did not address management of dogs on these public lands. On September 7, 2007, the BLM State Director approved a Record of Decision for the Southern Diablo Mountain Range and Central Coast of California Resource Management Plan (RMP) that directed the BLM's Central Coast Field Office to develop a dog management plan for FONM due to conflicts between visitors, attacks on livestock, and impacts to wildlife. On April 8, 2015, the BLM notified the public of its intent to develop a dog management plan and, using the 1996 emergency closure, initiated an interim dog leash restriction on public lands at FONM due to increasing conflicts between visitors, attacks on livestock, hazards from munitions, and impacts to wildlife. The BLM held three public scoping workshops (July 28 and 29, 2015, and August 5, 2015) to solicit public input on the development of the draft dog management plan. The proposed supplementary rules are the logical conclusion of the dog management planning process.

    On May 17, 2016, the BLM released the Draft FONM Dog Management Plan and associated environmental assessment (DOI-BLM-CA-C090-2016-0021-EA) for a 30-day comment period. The proposed supplementary rules were included with the draft plan and were analyzed within the environmental assessment. One comment was received on the proposed supplementary rules, and it resulted in a minor editorial change regarding the definition of “yield,” as described in the proposed rule text.

    On July 5, 2016 the BLM approved the Final FONM Dog Management Plan and associated environmental assessment (DOI-BLM-CA-C090-2016-0021-EA). The proposed supplementary rules (when approved) will supplement some of the December 1996 restrictions and April 2015 restrictions under 43 CFR 8364.1 and 43 CFR 8341.2 and enact new rules that are specified in the Final FONM Dog Management Plan. The proposed supplementary rules also would implement existing Monterey County ordinances germane to dog use under 43 CFR 8365.1-6, 43 U.S.C. 1733(a), 16 U.S.C. 670h(c)(5), and 43 U.S.C. 315a that were disclosed and analyzed within the approved plan.

    The proposed supplementary rules are broken into three categories. Proposed supplementary rules numbered 1 through 9 are new and would implement new direction from the approved dog management plan. Proposed supplementary rules 10 through 15 are not new, but would implement previous restrictions that were established in 1996 (see 61 FR 64530) and that are consistent with the national monument proclamation of 2012 (i.e. Proclamation 8803), and the BLM 2007 Resource Management Plan. Finally, proposed supplementary rules 16 and 17 are existing Monterey County ordinances that the BLM proposes to adopt as supplementary rules in order to facilitate cooperation between BLM rangers and local law enforcement officials.

    IV. Procedural Matters Regulatory Planning and Review (Executive Orders 12866 and 13563)

    The proposed supplementary rules are not a significant regulatory action and are not subject to review by the Office of Management and Budget under Executive Orders 12866 and 13563. They would not have an effect of $100 million or more on the economy. The proposed supplementary rules would not adversely affect in a material way the economy, productivity, competition, jobs, the environment, public health and safety, or State, local, or tribal governments or communities. The proposed supplementary rules would not create a serious inconsistency or otherwise interfere with an action taken or planned by another agency. The proposed supplementary rules would not alter the budgetary effects of entitlements, grants, user fees or loan programs, or the rights or obligations of their recipients, nor do they raise novel legal or policy issues. They would merely impose rules of conduct and impose other limitations on certain recreational and commercial activities on certain public lands to protect natural resources and human health and safety.

    Clarity of the Supplementary Rules

    Executive Order 12866 requires each agency to write regulations that are simple and easy to understand. The BLM invites your comments on how to make these proposed supplementary rules easier to understand, including answers to questions such as the following:

    (1) Are the requirements in the supplementary rules clearly stated?

    (2) Do the supplementary rules contain technical language or jargon that interferes with their clarity?

    (3) Does the format of the supplementary rules (grouping and order of sections, use of headings, paragraphing, etc.) aid or reduce clarity?

    (4) Would the supplementary rules be easier to understand if they were divided into more (but shorter) sections?

    (5) Is the description of the supplementary rules in the SUPPLEMENTARY INFORMATION section of this preamble helpful in understanding the supplementary rules? How could this description be more helpful in making the supplementary rules easier to understand?

    Please send any comments you have on the clarity of the rule to the addresses specified in the ADDRESSES section.

    National Environmental Policy Act

    The BLM has prepared an environmental assessment (EA) that analyzed different dog management alternatives on FONM under Section 102(2)(C) of the National Environmental Policy Act of 1969 (NEPA), 42 U.S.C. 4332(2)(C). On July 5, 2016, the BLM approved the Final FONM Dog Management Plan and associated environmental assessment (DOI-BLM-CA-C090-2016-0021-EA). The proposed supplementary rules are also consistent with the Record of Decision for the Southern Diablo Mountain Range and Central Coast of California RMP approved in 2007.

    Regulatory Flexibility Act

    Congress enacted the Regulatory Flexibility Act (RFA) of 1980, as amended, 5 U.S.C. 601-612, to ensure that government regulations do not unnecessarily or disproportionately burden small entities. The RFA requires a regulatory flexibility analysis if a rule would have a significant economic impact, either detrimental or beneficial, on a substantial number of small entities. The proposed supplementary rules would merely impose reasonable restrictions on certain recreational or commercial activities on public lands in order to protect natural resources and the environment, and provide for human health and safety. Therefore, the BLM has determined under the RFA that the proposed supplementary rules would not have a significant economic impact on a substantial number of small entities.

    Small Business Regulatory Enforcement Fairness Act

    The proposed supplementary rules are not a “major rule” as defined under 5 U.S.C. 804(2). The proposed supplementary rules would merely revise the rules of conduct for public use of limited areas of public lands and would not affect commercial or business activities of any kind.

    Unfunded Mandates Reform Act

    The proposed supplementary rules would not impose an unfunded mandate of more than $100 million per year on State, local, or tribal governments, in the aggregate, or on the private sector, nor would they have a significant or unique effect on small governments. The proposed supplementary rules would have no effect on governmental or tribal entities and would impose no requirements on any of these entities. The proposed supplementary rules would merely revise the rules of conduct for public use of limited areas of public lands and would not affect tribal, commercial, or business activities of any kind. Therefore, the BLM is not required to prepare a statement containing the information required by the Unfunded Mandates Reform Act at 2 U.S.C. 1531.

    Executive Order 12630, Governmental Actions and Interference With Constitutionally Protected Property Rights (Takings)

    The proposed supplementary rules do not represent a government action capable of interfering with constitutionally protected property rights. Therefore, the BLM has determined that the proposed supplementary rules would not cause a taking of private property or require further discussion of takings implications under this Executive order.

    Executive Order 13132, Federalism

    The proposed supplementary rules would not have a substantial direct effect on the States, on the relationship between the National Government and the States, or on the distribution of power and responsibilities among the various levels of government. Therefore, in accordance with Executive Order 13132, the BLM has determined that the proposed supplementary rules would not have sufficient federalism implications to warrant preparation of a federalism assessment.

    Executive Order 12988, Civil Justice Reform

    Under Executive Order 12988, the BLM has determined that the proposed supplementary rules would not unduly burden the judicial system, and that they meet the requirements of sections 3(a) and 3(b)(2) of Executive Order 12988.

    Executive Order 13175, Consultation and Coordination With Indian Tribal Governments

    In accordance with Executive Order 13175, the BLM has found that the proposed supplementary rules do not include policies that would have tribal implications. The proposed supplementary rules would merely revise the rules of conduct for public use of limited areas of public lands.

    Executive Order 13352, Facilitation of Cooperative Conservation

    In accordance with Executive Order 13352, the BLM has determined that these proposed consolidated supplementary rules would not impede facilitating cooperative conservation and would take appropriate account of and consider the interests of persons with ownership or other legally recognized interests in land or other natural resources. The rules would properly accommodate local participation in the Federal decision-making process and would provide that the programs, projects, and activities are consistent with protecting public health and safety.

    Information Quality Act

    In developing these proposed supplementary rules, the BLM did not conduct or use a study, experiment, or survey requiring peer review under the Information Quality Act (Pub. L. 106-554). In accordance with the Information Quality Act, the DOI has issued guidance regarding the quality of information that it relies on for regulatory decisions. This guidance is available on the DOI's Web site at http://www.doi.gov/ocio/information_management/iq.cfm.

    Executive Order 13211, Actions Concerning Regulations That Significantly Affect Energy Supply, Distribution, or Use

    Under Executive Order 13211, the BLM has determined that the proposed supplementary rules would not comprise a significant energy action, and that they would not have an adverse effect on energy supplies, production, or consumption.

    Paperwork Reduction Act

    The proposed supplementary rules do not directly provide for any information collection that the Office of Management and Budget must approve under the Paperwork Reduction Act of 1995, 44 U.S.C. 3501-3521. Moreover, any information collection that may result from Federal criminal investigations or prosecutions conducted under the proposed supplementary rules is exempt from the provisions of 44 U.S.C. 3518(c)(1).

    Author

    The principal author of these proposed supplementary rules is Eric Morgan, Monument Manager, Central Coast Field Office, 940 2nd Avenue, Marina, CA 93933.

    Proposed Supplementary Rules

    For the reasons stated in the preamble and under the authorities for supplementary rules found under 43 CFR 8365.1-6, 43 U.S.C. 1733(a), 16 U.S.C. 670h(c)(5), and 43 U.S.C. 315a, the BLM California State Director proposes to issue supplementary rules for public lands managed by the BLM within the boundaries of the FONM, to read as follows:

    Definitions

    Designated route means any road or trail that the BLM has signed and shown on trail maps where public use is authorized.

    Dog means any domestic dog that is not classified as a “service animal.”

    “Off-leash-opportunity-route” means a specific road or trail on FONM that has been designated by the BLM to allow some opportunities for dogs to be off leash under specific circumstances.

    Service animal means a dog that is individually trained to do work or perform tasks for people with disabilities as covered under the Americans with Disabilities Act.

    Street-legal vehicle means a vehicle, such as an automobile, motorcycle, or light truck, that is equipped and licensed for use on a public street and/or highway and that is subject to registration under the California Vehicle Code 4000(a)(1).

    Unattended dog means any dog that is unaccompanied by an owner and/or handler whether on tether or otherwise.

    Yield means slowing or stopping forward progress to a point where it is possible to safely pass another visitor without injuring, startling, or surprising that visitor. For bicycles, the passing speed shall be no greater than 10 mph on roads, and 5 mph on single-track trails.

    Prohibited Acts

    Unless otherwise authorized by the BLM, the following prohibitions apply to all BLM-managed public lands on the Fort Ord National Monument (FONM):

    Proposed Supplementary Rules From the Dog Management Plan

    1. You must not bring a dog into the Inland Range Planning Unit. Service animals accompanying a disabled person as accommodated by the Americans with Disabilities Act are excluded from this provision.

    2. You must physically restrain, or keep your dog(s) on a leash or cord not to exceed 6 feet in length, at all times while you are on a road or trail that has not been designated as an “off-leash-opportunity-route.”

    3. You and/or your dog must not walk or roam off a designated route, including any route designated as an “off-leash-opportunity route.”

    4. You must physically restrain, or keep your dog on a leash or cord not to exceed 6 feet in length, on a designated “off-leash-opportunity-route” when you are within 100 feet of another person and/or dog that is not with your party.

    5. You must not allow your dog to roam over 50 feet away from you while on a designated “off-leash-opportunity-route.”

    6. You must not allow your dog to enter any vernal pool or pond, or roam within 20 feet of any such area, unless you and your dog are on a route designated for public use.

    7. You must carry a leash for each dog you have with you.

    8. You are prohibited from leaving a dog unattended, even if on tether, within a crate, or within an unoccupied motor vehicle.

    9. Visitors must yield the path, on both roads and trails, to other visitors in the following manner: Bicycles must yield to pedestrians and equestrians; and pedestrians must yield to equestrians. For bicycles, the passing speed shall be no greater than 10 mph on roads, and 5 mph on single-track trails.

    Proposed Supplementary Rules That Clarify Existing Restrictions Established in 1996 and Direction From the 2007 Record of Decision

    10. Motorized vehicles and other motorized devices, including electric bicycles, are prohibited on all roads and trails except Creekside Terrace Road and Badger Hills Driveway. Motorized vehicle use on these two roadways is restricted to highway-licensed, street-legal vehicles.

    11. Use and/or occupancy of all lands within the FONM, including leaving personal property unattended, is prohibited between 1/2 hour after sunset and 1/2 hour before sunrise.

    12. All use (including pet use) is restricted to designated routes and trails. Open routes and trails are indicated on BLM maps and signed with route or trail markers. Any unsigned route which does not appear on the most current BLM map is closed to all uses.

    13. Campfires and other open flame fires are prohibited.

    14. Possession or discharge of fireworks, including “safe and sane” fireworks, is prohibited.

    15. Wood cutting and the collection of downed wood are prohibited.

    Proposed FONM Supplementary Rules That Are Currently Monterey County Ordinances

    16. It shall be unlawful for the owner or person having custody of any dog, either willfully or through failure to exercise due care or control, to allow said dog to defecate and to allow the feces thereafter to remain on FONM other than within trash receptacles provided for such purposes. This includes bagged feces—Reference Monterey County ordinance, 8.36.030.

    17. All dogs under four months of age shall be kept under physical restraint by the owner, keeper, or harborer when on FONM—Reference Monterey County ordinance, 8.20.020.

    18. Dogs on FONM shall wear a license tag with or without a chip implant at all times. The tag shall be attached at all times to a collar, harness, or other suitable device upon the dog for which the license tag was issued—Reference Monterey County ordinance, 8.08.040.

    Exemptions

    The following persons are exempt from these supplementary rules: Any Federal, State, or local officer or employee in the scope of their duties; members of any organized law enforcement, rescue, or fire-fighting force in performance of an official duty; and any person whose activities are authorized in writing by the BLM.

    Enforcement

    Any person who violates any of these supplementary rules may be tried before a United States Magistrate and fined in accordance with 18 U.S.C. 3571, imprisoned no more than 12 months under 43 U.S.C. 1733(a) and 43 CFR 8360.0-7, or both.

    In accordance with 43 CFR 8365.1-7, State or local officials may also impose penalties for violations of California law.

    Jerome E. Perez, State Director, California.
    [FR Doc. 2016-26457 Filed 11-3-16; 8:45 am] BILLING CODE 4310-40-P
    DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration 50 CFR Part 622 RIN 0648-BG18 Fisheries of the Caribbean, Gulf of Mexico, and South Atlantic; Reef Fish Fishery of the Gulf of Mexico; Amendment 43 AGENCY:

    National Marine Fisheries Service (NMFS), National Oceanic and Atmospheric Administration (NOAA), Commerce.

    ACTION:

    Notice of availability; request for comments.

    SUMMARY:

    The Gulf of Mexico (Gulf) Fishery Management Council (Gulf Council) has submitted Amendment 43 to the Fishery Management Plan for the Reef Fish Resources of the Gulf of Mexico (FMP) for review, approval, and implementation by NMFS. If approved by the Secretary of Commerce (Secretary), Amendment 43 would revise the hogfish fishery management unit (FMU) to be the West Florida stock and define the geographic range of this stock consistent with the South Atlantic Fishery Management Council's (South Atlantic Council) proposed boundary between the Florida Keys/East Florida and West Florida stocks, set the status determination criteria (SDC) and annual catch limits (ACLs) for the West Florida stock, increase the minimum size limit for the West Florida stock, and remove the powerhead exception for harvest of hogfish in the Gulf reef fish stressed area. The purpose of Amendment 43 is to establish management measures for the West Florida hogfish stock in the Gulf using the best scientific information available.

    DATES:

    Written comments must be received by January 3, 2017.

    ADDRESSES:

    You may submit comments on Amendment 43 identified by “NOAA-NMFS-2016-0126” by either of the following methods:

    Electronic Submission: Submit all electronic public comments via the Federal e-Rulemaking Portal. Go to www.regulations.gov/#!docketDetail;D=NOAA-NMFS-2016-0126, click the “Comment Now!” icon, complete the required fields, and enter or attach your comments.

    Mail: Submit all written comments to Peter Hood, NMFS Southeast Regional Office, 263 13th Avenue South, St. Petersburg, FL 33701.

    Instructions: Comments sent by any other method, to any other address or individual, or received after the end of the comment period, may not be considered by NMFS. All comments received are a part of the public record and will generally be posted for public viewing on www.regulations.gov without change. All personal identifying information (e.g., name, address, etc.), confidential business information, or otherwise sensitive information submitted voluntarily by the sender will be publicly accessible. NMFS will accept anonymous comments (enter “N/A” in the required fields if you wish to remain anonymous).

    Electronic copies of Amendment 43, which includes an environmental assessment, a fishery impact statement, a Regulatory Flexibility Act analysis, and a regulatory impact review, may be obtained from www.regulations.gov or the Southeast Regional Office Web site at http://sero.nmfs.noaa.gov/sustainable_fisheries/gulf_fisheries/reef_fish/2016/am43/index.html.

    FOR FURTHER INFORMATION CONTACT:

    Peter Hood, NMFS Southeast Regional Office, telephone: 727-824-5305, email: [email protected].

    SUPPLEMENTARY INFORMATION:

    The Magnuson-Stevens Fishery Conservation and Management Act (Magnuson-Stevens Act) requires each regional fishery management council to submit any FMP or amendment to NMFS for review and approval, partial approval, or disapproval. The Magnuson-Stevens Act also requires that NMFS, upon receiving an FMP or amendment, publish an announcement in the Federal Register notifying the public that the FMP or amendment is available for review and comment.

    The FMP being revised by Amendment 43 was prepared by the Gulf Council and, if approved, would be implemented by NMFS through regulations at 50 CFR part 622 under the authority of the Magnuson-Stevens Act.

    Background

    Currently, hogfish in the Gulf exclusive economic zone (EEZ) are managed as a single stock with a stock ACL and no allocation between the commercial and recreational sectors. Although hogfish occur throughout the Gulf, they are caught primarily off the Florida west coast. Generally, the fishing season for both sectors is open year-round, January 1 through December 31. However, accountability measures (AMs) for hogfish specify that if commercial and recreational landings exceed the stock ACL in a fishing year, then during the following fishing year if the stock ACL is reached or is projected to be reached, the commercial and recreational sectors will be closed for the remainder of the fishing year. The hogfish ACL and AMs were implemented in 2012 (76 FR 82044, December 29, 2011). The AMs were triggered when the hogfish ACL was exceeded in 2012, and the 2013 season was closed on December 2 because NMFS determined that the 2013 hogfish stock ACL had been harvested (78 FR 72583, December 3, 2013). The stock ACL was exceeded again in 2013. However, there was no closure in 2014, and the stock ACL was not exceeded in the 2014 or 2015 fishing years.

    In 2014, the Florida Fish and Wildlife Conservation Commission (FWC) completed the most recent stock assessment for hogfish through the Southeast Data, Assessment, and Review process (SEDAR 37). SEDAR 37 divided the hogfish stock into three stocks based upon genetic analysis as follows: The West Florida stock, the Florida Keys/East Florida stock, and the Georgia through North Carolina stock. The West Florida stock is completely within the jurisdiction of the Gulf Council, and the Georgia through North Carolina stock is completely within the jurisdiction of the South Atlantic Council. The Florida Keys/East Florida stock crosses the Councils' jurisdictional boundary, with a small portion of the stock extending into the Gulf Council's jurisdiction off the west coast of Florida. Based on SEDAR 37 and the Gulf and South Atlantic Councils' Scientific and Statistical Committee (SSC) recommendations, NMFS determined that the West Florida stock is not overfished or undergoing overfishing, the Florida Keys/East Florida stock is overfished and undergoing overfishing, and the status of the Georgia through North Carolina stock is unknown.

    Actions Contained in Amendment 43

    Amendment 43 includes actions to revise the FMU for hogfish to be the West Florida stock and define the geographic range of this stock consistent with the South Atlantic Council's proposed boundary between the Florida Keys/East Florida and West Florida stocks, and set the SDC and ACL for the West Florida stock. In addition, actions in Amendment 43 increase the minimum size limit for the West Florida stock, and remove the powerhead exception for harvest of hogfish in the Gulf reef fish stressed area.

    Fishery Management Unit

    The South Atlantic Council developed and submitted for review by the Secretary of Commerce a rebuilding plan for the Florida Keys/East Florida hogfish stock through Amendment 37 to the FMP for the Snapper-Grouper Fishery of the South Atlantic Region. Because SEDAR 37 indicated only a small portion of the Florida Keys/East Florida stock extends into the Gulf Council's jurisdiction off south Florida, the Gulf Council through Amendment 43 proposes to revise the hogfish FMU to be the West Florida stock and define the geographic range of this stock consistent with the South Atlantic Council's proposed boundary between the Florida Keys/East Florida and West Florida hogfish stocks near Cape Sable, Florida. This boundary would be a line extending west along 25°09′ N. lat. to the outer boundary of the EEZ. The Gulf Council would manage hogfish (the West Florida stock) in the Gulf EEZ except south of 25°09′ N. lat. off the west coast of Florida. The South Atlantic Council would manage hogfish (the Florida Keys/East Florida stock) in the Gulf EEZ south of 25°09′ N. lat. off the west coast of Florida, and in the South Atlantic EEZ to the state border of Florida and Georgia. The boundary line near Cape Sable is south of the line used in SEDAR 37, which defined the West Florida stock as north of the Monroe and Collier County, Florida, boundary line. Therefore, it is possible that some fish from the Florida Keys/East Florida stock will be harvested under the regulations by the Gulf Council. However, the majority of hogfish landings in Monroe County occur in the Florida Keys, and the proposed boundary is far enough north of the Florida Keys that fishing trips originating in the Florida Keys rarely travel north of the boundary, and far enough south of Naples and Marco Island, Florida, that fishing trips originating from these locations rarely travel south of the boundary. In addition, the boundary line proposed by the Gulf and South Atlantic Councils is currently used by the FWC as a regulatory boundary for certain state-managed species. Using a pre-existing management boundary will increase enforceability and help fishermen by simplifying regulations across adjacent management jurisdictions.

    In accordance with section 304(f) of the Magnuson-Stevens Act, the Gulf Council requested that the Secretary designate the South Atlantic Council as the responsible Council for management of the Florida Keys/East Florida hogfish stock in Gulf Federal waters south of 25°09′ N. lat. near Cape Sable on the west coast of Florida. If the Gulf Council's request is approved, the Gulf Council would continue to manage hogfish in Federal waters in the Gulf, except in Federal waters south of this boundary. Therefore, the South Atlantic Council, and not the Gulf Council, would establish the management measures for the entire range of the Florida Keys/East Florida hogfish stock, including in Federal waters south of 25°09′ N. lat. near Cape Sable in the Gulf. Commercial and recreational for-hire vessels fishing for hogfish in Gulf Federal waters, i.e., north and west of the jurisdictional boundary between the Gulf and South Atlantic Councils (approximately the Florida Keys), as defined at 50 CFR 600.105(c), would still be required to have the appropriate Federal Gulf reef fish permits, and vessels fishing for hogfish in South Atlantic Federal waters, i.e., south and east of the jurisdictional boundary, would still be required to have the appropriate Federal South Atlantic snapper-grouper permits. Those permit holders would still be required to follow the sale and reporting requirements associated with the respective permits.
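    For readers tracing the boundary described above, the jurisdictional split reduces to a latitude test for Gulf EEZ waters off the west coast of Florida. The sketch below is an illustrative, non-regulatory rendering of that test; the function name and the decimal-degree conversion of 25°09′ N are assumptions for this example, and the controlling descriptions remain the amendments themselves and 50 CFR 600.105(c).

```python
# Illustrative only: the latitude test implied by the proposed Gulf/South Atlantic
# hogfish stock boundary off the west coast of Florida (Amendments 43 and 37).
BOUNDARY_LAT_N = 25 + 9 / 60  # 25 degrees 09 minutes N, in decimal degrees (~25.15)

def hogfish_managing_council(lat_deg_n: float) -> str:
    """For a position in the Gulf EEZ off the west coast of Florida, return
    which Council's hogfish stock the position would fall under."""
    if lat_deg_n < BOUNDARY_LAT_N:
        # South of 25 deg 09 min N: Florida Keys/East Florida stock,
        # managed by the South Atlantic Council.
        return "South Atlantic Council (Florida Keys/East Florida stock)"
    # At or north of the boundary line: West Florida stock, Gulf Council.
    return "Gulf of Mexico Council (West Florida stock)"

# Example: a position roughly offshore of Naples (~26.1 N) versus one south of
# Cape Sable (~25.0 N).
print(hogfish_managing_council(26.1))  # Gulf of Mexico Council (West Florida stock)
print(hogfish_managing_council(25.0))  # South Atlantic Council (Florida Keys/East Florida stock)
```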

    NMFS specifically seeks public comment regarding the revised stock boundaries and the manner in which the Councils would have jurisdiction over these stocks if both Amendment 37 for the South Atlantic and Amendment 43 for the Gulf of Mexico are approved and implemented.

    Status Determination Criteria

    Currently, the only SDC implemented for Gulf hogfish is the overfishing threshold, or maximum fishing mortality rate (MFMT). The overfished threshold, or minimum stock size threshold (MSST), and maximum sustainable yield (MSY) actions were disapproved when the Gulf Council's Sustainable Fisheries Act Generic Amendment was approved by NMFS on November 17, 1999. Amendment 43 would maintain the current MFMT value at the fishing mortality rate corresponding to 30 percent of the stock's spawning potential ratio (SPR) (F30% SPR). Amendment 43 would also specify the MSST and MSY values. The MSY proxy would equal the equilibrium yield at F30% SPR, and the MSST value would be equal to 75 percent of the spawning stock biomass capable of producing an equilibrium yield of the MSY proxy.
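    Stated compactly in standard stock-assessment notation (a sketch of the relationships described above, not regulatory text; SSB denotes spawning stock biomass):

```latex
% F_{30% SPR} is the fishing mortality rate at which the spawning potential
% ratio equals 30 percent; SSB_{MSY proxy} is the spawning stock biomass
% capable of producing an equilibrium yield equal to the MSY proxy.
\begin{align*}
\mathrm{MFMT} &= F_{30\%\,\mathrm{SPR}} \\
\mathrm{MSY\ proxy} &= \text{equilibrium yield at } F_{30\%\,\mathrm{SPR}} \\
\mathrm{MSST} &= 0.75 \times \mathrm{SSB}_{\mathrm{MSY\ proxy}}
\end{align*}
```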

    Annual Catch Limit

    The current ACL and annual catch target (ACT) for Gulf hogfish were established based on 1999 through 2008 landings. The ACL and ACT were set using the Gulf Council's acceptable biological catch (ABC) control rule for stocks that have not been assessed but are stable over time. Amendment 43 would set the ACL for the West Florida hogfish stock at 219,000 lb (99,337 kg), round weight, for the 2017 and 2018 fishing years based on recommendations from the Gulf Council's SSC after its review of SEDAR 37. In 2019, and subsequent fishing years, the ACL would be set at the equilibrium ABC of 159,300 lb (72,257 kg), round weight. The Gulf Council decided to discontinue the designation of an ACT, because it is not used in the current AMs or for other management purposes.

    Minimum Size Limit

    Although the West Florida hogfish stock is not overfished or undergoing overfishing, the stock could be subject to seasonal closures should landings exceed the stock ACL and AMs are triggered. In 2012 and 2013, the stock ACL was exceeded, thus causing landings to be closely monitored in 2013 and 2014. This resulted in an in-season closure in 2013 but not in 2014. To reduce the likelihood of AMs being triggered, the Gulf Council determined that increasing the minimum size limit in Federal waters from 12 inches (30.5 cm), fork length (FL), to 14 inches (35.6 cm), FL, could reduce the directed harvest rate and, consequently, reduce the probability of exceeding the stock ACL and triggering AMs. This action has an additional benefit of allowing the hogfish to grow larger and have an additional spawning opportunity before being susceptible to harvest.

    Powerhead Exemption

    Currently, as described at 50 CFR 622.35(a), a regulatory exemption allows for the harvest of hogfish using powerheads in the reef fish stressed area. Amendment 43 would remove this exemption. The powerhead exemption provision is a regulatory holdover from when hogfish were listed as a species in the fishery but not in the reef fish FMU. Harvesting species in the FMU with powerheads in the stressed area was prohibited. By removing the powerhead exemption for hogfish, hogfish would be subject to the same regulations for Gulf reef fish in the stressed area as other species in the reef fish FMU. The coordinates for the reef fish stressed area are provided in 50 CFR part 622, Table 2 in Appendix B.

    Proposed Rule for Amendment 43

    A proposed rule that would implement Amendment 43 has been drafted. In accordance with the Magnuson-Stevens Act, NMFS is evaluating Amendment 43 to determine whether it is consistent with the FMP, the Magnuson-Stevens Act, and other applicable law. If the determination is affirmative, NMFS will publish the proposed rule in the Federal Register for public review and comment.

    Consideration of Public Comments

    The Gulf Council has submitted Amendment 43 for Secretarial review, approval, and implementation. Comments on Amendment 43 must be received by January 3, 2017. Comments received during the respective comment periods, whether specifically directed to Amendment 43 or the proposed rule, will be considered by NMFS in its decision to approve, partially approve, or disapprove Amendment 43. Comments received after the comment periods will not be considered by NMFS in this decision. All comments received by NMFS on Amendment 43 or the proposed rule during their respective comment periods will be addressed in the final rule.

    Authority:

    16 U.S.C. 1801 et seq.

    Dated: October 31, 2016. Emily H. Menashes, Acting Director, Office of Sustainable Fisheries, National Marine Fisheries Service.
    [FR Doc. 2016-26616 Filed 11-3-16; 8:45 am] BILLING CODE 3510-22-P
    81 214 Friday, November 4, 2016 Notices DEPARTMENT OF AGRICULTURE Farm Service Agency Information Collection Request; Inventory Property Management AGENCY:

    Farm Service Agency, USDA.

    ACTION:

    Notice; request for comments.

    SUMMARY:

    In accordance with the Paperwork Reduction Act of 1995, the Farm Service Agency (FSA) is requesting comments from all interested individuals and organizations on an extension with a revision of a currently approved information collection that supports Inventory Property Management. The collected information is used to evaluate applicant requests to purchase inventory property, determine eligibility to lease or purchase inventory property, and ensure the payment of the lease amount or purchase amount associated with the acquisition of inventory property. Revisions to the information collection include an increase in the total burden hours expected for inventory property requests.

    DATES:

    We will consider comments that we receive by January 3, 2017.

    ADDRESSES:

    We invite you to submit comments on this notice. In your comments, include the date, volume, and page number of this issue of the Federal Register, the OMB control number, and the title of the information collection. You may submit comments by any of the following methods:

    Federal eRulemaking Portal: Go to http://regulations.gov. Follow the online instructions for submitting comments.

    Mail: J. Lee Nault, Loan Specialist, USDA/FSA/FLP, STOP 0523, 1400 Independence Avenue SW., Washington, DC 20250-0503.

    You may also send comments to the Desk Officer for Agriculture, Office of Information and Regulatory Affairs, Office of Management and Budget, Washington, DC 20503. Copies of the information collection may be requested by contacting J. Lee Nault at the above address.

    FOR FURTHER INFORMATION CONTACT:

    J. Lee Nault, (202) 720-6834.

    SUPPLEMENTARY INFORMATION:

    Title: Farm Loan Programs—Inventory Property Management (7 CFR part 767).

    OMB Number: 0560-0234.

    Expiration Date: 03/31/2017.

    Type of Request: Extension with a revision.

    Abstract: FSA's Farm Loan Programs provide supervised credit in the form of loans to family farmers to purchase real estate and equipment and finance agricultural production. Inventory Property Management, as specified in 7 CFR part 767, provides the requirements for the management, lease, and sale of security property acquired by FSA. FSA may take title to real estate as part of dealing with a problem loan either by entering a winning bid in an attempt to protect its interest at a foreclosure sale, or by accepting a deed of conveyance in lieu of foreclosure. Information collections established in the regulation are necessary for FSA to determine an applicant's eligibility to lease or purchase inventory property and to ensure the applicant's ability to make payment on the lease or purchase amount.

    The number of respondents and burden hours have increased in this request. The increase is based on an approximate 13% increase in the number of inventory properties being held by FSA since the previous approval request. No additional forms, response actions, or time increases are added to the request.

    The formula used to calculate the total annual burden hours is the estimated average time per response (in hours) multiplied by the total annual responses; an illustrative check using the figures below follows the estimates.

    Estimate of Average Time to respond: 44 minutes per response. The average travel time, which is included in the total annual burden, is estimated to be 1 hour per respondent.

    Respondents: Individuals or households, businesses, or other for-profit farms.

    Estimated Annual Number of Respondents: 351.

    Estimated Number of Responses per Respondent: 1.03.

    Total Annual Responses: 363.

    Estimated Average Time per Response: 1.69 hours.

    Estimated Total Annual Burden Hours: 616.
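    A rough reconstruction of the published totals, assuming the reported 1.69-hour average per response folds in both the 44-minute response time and the 1-hour travel time per respondent (the figures are the agency's; the small difference from 616 hours is consistent with rounding in the published averages):

```python
# Illustrative check of the burden-hour arithmetic described in this notice.
respondents = 351
total_annual_responses = 363
minutes_per_response = 44            # estimate of average time to respond
travel_hours_per_respondent = 1.0    # average travel time, per respondent

response_hours = total_annual_responses * minutes_per_response / 60  # about 266.2
travel_hours = respondents * travel_hours_per_respondent             # 351.0
total_burden_hours = response_hours + travel_hours                   # about 617.2

avg_hours_per_response = total_burden_hours / total_annual_responses  # about 1.70

print(round(total_burden_hours), round(avg_hours_per_response, 2))
# The notice reports 616 total annual burden hours and a 1.69-hour average per
# response; the small differences are consistent with rounding.
```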

    We are requesting comments on all aspects of this information collection to help us to:

    (1) Evaluate whether the proposed collection of information is necessary for the proper performance of the functions of the agency, including whether the information will have practical utility;

    (2) Evaluate the accuracy of the agency's estimate of the burden of the collection of information including the validity of the methodology and assumptions used;

    (3) Evaluate the quality, utility, and clarity of the information to be collected; and

    (4) Minimize the burden of the information collection on those who respond through the use of appropriate automated, electronic, mechanical, or other technological collection techniques or other forms of information technology.

    All comments received in response to this notice, including names and addresses where provided, will be made a matter of public record. Comments will be summarized and included in the request for OMB approval of the information collection.

    Val Dolcini, Administrator, Farm Service Agency.
    [FR Doc. 2016-26660 Filed 11-3-16; 8:45 am] BILLING CODE P
    DEPARTMENT OF AGRICULTURE Forest Service Fremont and Winema Resource Advisory Committee AGENCY:

    Forest Service, USDA.

    ACTION:

    Notice of meeting.

    SUMMARY:

    The Fremont and Winema Resource Advisory Committee (RAC) will meet in Klamath Falls, Oregon. The committee is authorized under the Secure Rural Schools and Community Self-Determination Act (the Act) and operates in compliance with the Federal Advisory Committee Act. The purpose of the committee is to improve collaborative relationships and to provide advice and recommendations to the Forest Service concerning projects and funding consistent with Title II of the Act. RAC information can be found at the following Web site: http://facadatabase.gov/committee/committee.aspx?cid=2266&aid=171.

    DATES:

    The meeting will be held on November 17, 2016, from 9 a.m. to 5 p.m.

    All RAC meetings are subject to cancellation. For the status of the meeting prior to attendance, please contact the person listed under For Further Information Contact.

    ADDRESSES:

    The meeting will be held at the Klamath Ranger Station, 2819 Dahlia Street, Klamath Falls, Oregon.

    Written comments may be submitted as described under Supplementary Information. All comments, including names and addresses when provided, are placed in the record and are available for public inspection and copying. The public may inspect comments received at the Klamath Ranger Station, 2819 Dahlia Street, Klamath Falls, Oregon. Please call ahead at 541-883-6714 to facilitate entry into the building.

    FOR FURTHER INFORMATION CONTACT:

    David Brillenz, Designated Federal Official, by phone at 541-947-6328, or by email at [email protected].

    Individuals who use telecommunication devices for the deaf (TDD) may call the Federal Information Relay Service (FIRS) at 1-800-877-8339 between 8:00 a.m. and 8:00 p.m., Eastern Standard Time, Monday through Friday.

    SUPPLEMENTARY INFORMATION:

    The purpose of the meeting is to provide:

    (1) Recommendations to the Forest Service concerning projects in Klamath County, and

    (2) Funding consistent with Title II of the Act.

    The meeting is open to the public. The agenda will include time for people to make oral statements of three minutes or less. Individuals wishing to make an oral statement should request it in writing by November 7, 2016, to be scheduled on the agenda. Anyone who would like to bring related matters to the attention of the committee may file written statements with the committee staff before or after the meeting. Written comments and requests for time to make oral comments must be sent to Roland Giller, Partnership Coordinator, 38500 Highway 97 North, Chiloquin, Oregon 97624; or by email to [email protected], or via facsimile to 541-783-2134.

    Meeting Accommodations: If you are a person requiring reasonable accommodation, please make requests in advance for sign language interpreting, assistive listening devices, or other reasonable accommodation. For access to the facility or proceedings, please contact the person listed in the section titled FOR FURTHER INFORMATION CONTACT. All reasonable accommodation requests are managed on a case by case basis.

    Dated: October 25, 2016. Eric J. Watrud, Acting Fremont-Winema National Forest Supervisor.
    [FR Doc. 2016-26635 Filed 11-3-16; 8:45 am] BILLING CODE 3411-15-P
    DEPARTMENT OF AGRICULTURE Forest Service Fremont and Winema Resource Advisory Committee AGENCY:

    Forest Service, USDA.

    ACTION:

    Notice of meeting.

    SUMMARY:

    The Fremont and Winema Resource Advisory Committee (RAC) will meet in Lakeview, Oregon. The committee is authorized under the Secure Rural Schools and Community Self-Determination Act (the Act) and operates in compliance with the Federal Advisory Committee Act. The purpose of the committee is to improve collaborative relationships and to provide advice and recommendations to the Forest Service concerning projects and funding consistent with Title II of the Act. RAC information can be found at the following Web site: http://facadatabase.gov/committee/committee.aspx?cid=2266&aid=171.

    DATES:

    The meeting will be held on November 16, 2016, from 9 a.m. to 5 p.m.

    All RAC meetings are subject to cancellation. For status of the meeting prior to attendance, please contact the person listed under For Further Information Contact.

    ADDRESSES:

    The meeting will be held at the Lakeview Interagency Building, Main Conference Rooms, 1301 South G Street, Lakeview, Oregon.

    Written comments may be submitted as described under Supplementary Information. All comments, including names and addresses when provided, are placed in the record and are available for public inspection and copying. The public may inspect comments received at the Lakeview Interagency Building, 1301 South G Street, Lakeview, Oregon. Please call ahead at 541-947-6328 to facilitate entry into the building.

    FOR FURTHER INFORMATION CONTACT:

    David Brillenz, Designated Federal Official, by phone at 541-947-6328, or by email at [email protected].

    Individuals who use telecommunication devices for the deaf (TDD) may call the Federal Information Relay Service (FIRS) at 1-800-877-8339 between 8:00 a.m. and 8:00 p.m., Eastern Standard Time, Monday through Friday.

    SUPPLEMENTARY INFORMATION:

    The purpose of the meeting is to provide:

    (1) Recommendations to the Forest Service concerning projects in Lake County; and

    (2) Funding consistent with Title II of the Act.

    The meeting is open to the public. The agenda will include time for people to make oral statements of three minutes or less. Individuals wishing to make an oral statement should request it in writing by November 7, 2016, to be scheduled on the agenda. Anyone who would like to bring related matters to the attention of the committee may file written statements with the committee staff before or after the meeting. Written comments and requests for time to make oral comments must be sent to Roland Giller, Partnership Coordinator, 38500 Highway 97 North, Chiloquin, Oregon 97624; or by email to [email protected], or via facsimile to 541-783-2134.

    Meeting Accommodations: If you are a person requiring reasonable accommodation, please make requests in advance for sign language interpreting, assistive listening devices, or other reasonable accommodation. For access to the facility or proceedings, please contact the person listed in the section titled FOR FURTHER INFORMATION CONTACT. All reasonable accommodation requests are managed on a case by case basis.

    Dated: October 25, 2016. Eric J. Watrud, Acting Fremont-Winema National Forest Supervisor.
    [FR Doc. 2016-26634 Filed 11-3-16; 8:45 am] BILLING CODE 3411-15-P
    DEPARTMENT OF AGRICULTURE Forest Service Black Hills National Forest Advisory Board AGENCY:

    Forest Service, USDA.

    ACTION:

    Notice of meeting.

    SUMMARY:

    The Black Hills National Forest Advisory Board (Board) will meet in Rapid City, South Dakota. The Board is established consistent with the Federal Advisory Committee Act of 1972 (5 U.S.C. App. II), the Forest and Rangeland Renewable Resources Planning Act of 1974 (16 U.S.C. 1600 et seq.), the National Forest Management Act of 1976 (16 U.S.C. 1612), and the Federal Public Lands Recreation Enhancement Act (Pub. L. 108-447). Additional information concerning the Board, including the meeting summary/minutes, can be found by visiting the Board's Web site at: http://www.fs.usda.gov/main/blackhills/workingtogether/advisorycommittees.

    DATES:

    The meeting will be held on Wednesday, November 16, 2016, at 1:00 p.m.

    All meetings are subject to cancellation. For updated status of meeting prior to attendance, please contact the person listed under For Further Information Contact.

    ADDRESSES:

    The meeting will be held at the Mystic Ranger District, 8221 South Highway 16, Rapid City, South Dakota.

    Written comments may be submitted as described under Supplementary Information. All comments, including names and addresses, when provided, are placed in the record and available for public inspection and copying. The public may inspect comments received at the Black Hills National Forest Supervisor's Office. Please call ahead to facilitate entry into the building.

    FOR FURTHER INFORMATION CONTACT:

    Scott Jacobson, Board Coordinator, by phone at 605-440-1409, or by email at [email protected].

    Individuals who use telecommunication devices for the deaf (TDD) may call the Federal Information Relay Service (FIRS) at 1-800-877-8339 between 8:00 a.m. and 8:00 p.m., Eastern Standard Time, Monday through Friday.

    SUPPLEMENTARY INFORMATION:

    The purpose of the meeting is to provide:

    (1) Orientation Topic—Forest Plan Overview;

    (2) Structural Stages of the Forest;

    (3) Black Hills Resilient Landscapes (BHRL) Project update;

    (4) MPB—Epidemic to Endemic; and

    (5) 2016 Aerial Photo Results/Update.

    The meeting is open to the public. The agenda will include time for people to make oral statements of three minutes or less. Individuals wishing to make an oral statement should submit a request in writing by November 7, 2016, to be scheduled on the agenda. Anyone who would like to bring related matters to the attention of the Board may file written statements with the Board's staff before or after the meeting. Written comments and time requests for oral comments must be sent to Scott Jacobson, Black Hills National Forest Supervisor's Office, 1019 North Fifth Street, Custer, South Dakota 57730; by email to [email protected], or via facsimile to 605-673-9208.

    Meeting Accommodations: If you are a person requiring reasonable accommodation, please make requests in advance for sign language interpreting, assistive listening devices, or other reasonable accommodation for access to the facility or proceedings by contacting the person listed in the section titled For Further Information Contact. All reasonable accommodation requests are managed on a case by case basis.

    Dated: October 31, 2016. Mark Van Every, Forest Supervisor.
    [FR Doc. 2016-26671 Filed 11-3-16; 8:45 am] BILLING CODE 3411-15-P
    COMMISSION ON CIVIL RIGHTS Agenda and Notice of Public Meeting of the Maine Advisory Committee; Correction AGENCY:

    Commission on Civil Rights.

    ACTION:

    Notice; correction.

    SUMMARY:

    The Commission on Civil Rights published a notice in the Federal Register of September 16, 2016, concerning a meeting of the Maine Advisory Committee. The stated purpose of the meeting on Tuesday, November 15, 2016, was incorrect. The committee on this date will discuss and vote on its human trafficking report.

    FOR FURTHER INFORMATION CONTACT:

    Evelyn Bohor, (202) 376-7533.

    Correction

    In the Federal Register of September 16, 2016, in FR Doc. 2016-22334, on page 63739, correct the first paragraph to read:

    Notice is hereby given, pursuant to the provisions of the rules and regulations of the U.S. Commission on Civil Rights (Commission), and the Federal Advisory Committee Act (FACA), that planning meetings of the Maine Advisory Committee to the Commission will convene by conference call at 1:30 p.m. (ET) on: Tuesday, October 18, 2016; Tuesday, November 15, 2016; Tuesday, December 20, 2016; Tuesday, January 17, 2017 and Tuesday, February 21, 2017. The purpose of each meeting is to discuss project planning as the Committee moves to selecting a topic as its civil rights project. The Committee will also select additional officers, as necessary. The November 15 meeting will be to discuss and vote on the Committee's report to the Commission on human trafficking in Maine.

    Dated: November 1, 2016. David Mussatt, Supervisory Chief, Regional Programs Coordination Unit.
    [FR Doc. 2016-26687 Filed 11-3-16; 8:45 am] BILLING CODE P
    DEPARTMENT OF COMMERCE Economic Development Administration Notice of Petitions by Firms for Determination of Eligibility To Apply for Trade Adjustment Assistance AGENCY:

    Economic Development Administration, Department of Commerce.

    ACTION:

    Notice and Opportunity for Public Comment.

    Pursuant to Section 251 of the Trade Act of 1974, as amended (19 U.S.C. 2341 et seq.), the Economic Development Administration (EDA) has received petitions for certification of eligibility to apply for Trade Adjustment Assistance from the firms listed below. Accordingly, EDA has initiated investigations to determine whether increased imports into the United States of articles like or directly competitive with those produced by each of these firms contributed importantly to the total or partial separation of the firm's workers, or threat thereof, and to a decrease in sales or production of each petitioning firm.

    List of Petitions Received by EDA for Certification Eligibility To Apply for Trade Adjustment Assistance [10/22/2016 through 10/31/2016]

    Firm name: TMP Technologies, Inc. Firm address: 1200 Northland Avenue, Buffalo, NY 14215. Date accepted for investigation: 10/25/2016. Product(s): The firm manufactures custom foam applicators and rubber components.

    Firm name: The PWT Group, LLC, d/b/a Precision Wire Technologies. Firm address: 6320 Highview Drive, Fort Wayne, IN 46818. Date accepted for investigation: 10/26/2016. Product(s): The firm manufactures wire dies and precision drawn round and flat wire in stainless steel, steel alloys and a variety of non-ferrous alloys.

    Firm name: DCI, Inc. Firm address: 265 S. Main Street, Lisbon, NH 03585. Date accepted for investigation: 10/31/2016. Product(s): The firm is a manufacturer of hardwood dormitory style furniture.

    Firm name: Encore Ceramics, Inc. Firm address: P.O. Box 2124, Grants Pass, OR 97528. Date accepted for investigation: 10/31/2016. Product(s): The firm manufactures ceramic tiles.

    Firm name: JT Precision, Inc. Firm address: 8701 Haight Road, Baker, NY 14012. Date accepted for investigation: 10/31/2016. Product(s): The firm manufactures precision machined component parts.

    Any party having a substantial interest in these proceedings may request a public hearing on the matter. A written request for a hearing must be submitted to the Trade Adjustment Assistance for Firms Division, Room 71030, Economic Development Administration, U.S. Department of Commerce, Washington, DC 20230, no later than ten (10) calendar days following publication of this notice.

    Please follow the requirements set forth in EDA's regulations at 13 CFR 315.9 for procedures to request a public hearing. The Catalog of Federal Domestic Assistance official number and title for the program under which these petitions are submitted is 11.313, Trade Adjustment Assistance for Firms.

    Miriam Kearse, Lead Program Analyst.
    [FR Doc. 2016-26650 Filed 11-3-16; 8:45 am] BILLING CODE 3510-WH-P
    DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [Order No. 2012] Grant of Authority; Establishment of a Foreign-Trade Zone, Under the Alternative Site Framework, Vancouver, Washington

    Pursuant to its authority under the Foreign-Trade Zones Act of June 18, 1934, as amended (19 U.S.C. 81a-81u), the Foreign-Trade Zones Board (the Board) adopts the following Order:

    Whereas, the Foreign-Trade Zones Act provides for “. . . the establishment . . . of foreign-trade zones in ports of entry of the United States, to expedite and encourage foreign commerce, and for other purposes,” and authorizes the Foreign-Trade Zones Board to grant to qualified corporations the privilege of establishing foreign-trade zones in or adjacent to U.S. Customs and Border Protection ports of entry;

    Whereas, the Board adopted the alternative site framework (ASF) (15 CFR Sec. 400.2(c)) as an option for the establishment or reorganization of zones;

    Whereas, the Port of Vancouver USA (the Grantee) has made application to the Board (B-29-2016, docketed May 4, 2016), requesting the establishment of a foreign-trade zone under the ASF with a service area of Clark County, Washington, within and adjacent to the Portland, Oregon U.S. Customs and Border Protection port of entry, and proposed Site 1 would be categorized as a magnet site;

    Whereas, notice inviting public comment has been given in the Federal Register (81 FR 29251-29252, May 11, 2016) and the application has been processed pursuant to the FTZ Act and the Board's regulations; and,

    Whereas, the Board adopts the findings and recommendations of the examiner's report, and finds that the requirements of the FTZ Act and the Board's regulations are satisfied;

    Now, therefore, the Board hereby grants to the Grantee the privilege of establishing a foreign-trade zone, designated on the records of the Board as Foreign-Trade Zone No. 296, as described in the application, and subject to the FTZ Act and the Board's regulations, including Section 400.13, and to the Board's standard 2,000-acre activation limit.

    Signed at Washington, DC, October 18, 2016. Penny Pritzker, Secretary of Commerce, Chairman and Executive Officer, Foreign-Trade Zones Board. Andrew McGilvray, Executive Secretary.
    [FR Doc. 2016-26757 Filed 11-3-16; 8:45 am] BILLING CODE 3510-DS-P
    DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [B-72-2016] Foreign-Trade Zone (FTZ) 80—San Antonio, Texas; Notification of Proposed Production Activity; CGT U.S., Ltd.; (Polyvinyl Chloride (PVC) Coated Upholstery Fabric Cover Stock); New Braunfels, Texas

    CGT U.S., Ltd. (CGT), submitted a notification of proposed production activity to the FTZ Board for its facility in New Braunfels, Texas. The notification conforming to the requirements of the regulations of the FTZ Board (15 CFR 400.22) was received on October 18, 2016.

    A separate application for subzone designation at the CGT facility was submitted and will be processed under Section 400.38 of the Board's regulations. The facility is used for the production of PVC coated upholstery fabric cover stock. Pursuant to 15 CFR 400.14(b), FTZ activity would be limited to the specific foreign-status materials and components and specific finished product described in the submitted notification (as described below) and subsequently authorized by the FTZ Board.

    Production under FTZ procedures could exempt CGT from customs duty payments on the foreign-status components used in export production. On its domestic sales, CGT would be able to choose the duty rate during customs entry procedures that applies to the PVC coated upholstery fabric cover stock (duty free) for the foreign-status inputs noted below. Customs duties also could possibly be deferred or reduced on foreign-status production equipment.

    The components and materials sourced from abroad include: Compound stabilizer for plastics; antimony trioxide (low-tint); flat release paper; polyester knit fabric; polycotton knit fabric; polyurethane top finish dull; polyurethane top finish gloss; polyvinyl chloride dispersion resin; carbodiimide crosslinker; aqueous (water base) polyurethane top finish; polyurethane top finish; aqueous (water base) silicone modifier; aqueous (water base) silicone hand modifier; polyurethane; polyisocyanate crosslinker; defoamer; polyfunctional aziridine crosslinker; wetting agent top coat; and stabilizers (duty rates range from duty free to 10%).

    The request indicates that CGT will admit foreign-status polyester and polycotton knit fabrics (HTSUS 6006.31.00) in privileged foreign status (19 CFR 146.43), thereby precluding inverted tariff benefits on these inputs.

    Public comment is invited from interested parties. Submissions shall be addressed to the FTZ Board's Executive Secretary at the address below. The closing period for their receipt is December 14, 2016.

    A copy of the notification will be available for public inspection at the Office of the Executive Secretary, Foreign-Trade Zones Board, Room 21013, U.S. Department of Commerce, 1401 Constitution Avenue NW., Washington, DC 20230-0002, and in the “Reading Room” section of the FTZ Board's Web site, which is accessible via www.trade.gov/ftz.

    For further information, contact Diane Finver at [email protected] or (202) 482-1367.

    Dated: October 27, 2016. Andrew McGilvray, Executive Secretary.
    [FR Doc. 2016-26741 Filed 11-3-16; 8:45 am] BILLING CODE 3510-DS-P
    DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [S-152-2016] Foreign-Trade Zone 44—Morris County, New Jersey; Application for Subzone; AGFA Corporation; Branchburg, New Jersey

    An application has been submitted to the Foreign-Trade Zones Board (the Board) by the New Jersey Department of State, grantee of FTZ 44, requesting subzone status for the facility of AGFA Corporation, located in Branchburg, New Jersey. The application was submitted pursuant to the provisions of the Foreign-Trade Zones Act, as amended (19 U.S.C. 81a-81u), and the regulations of the Board (15 CFR part 400). It was formally docketed on October 28, 2016.

    The proposed subzone (39 acres) is located at 50 Meister Avenue, Branchburg. A notification of proposed production activity has been submitted and will be published separately for public comment. The proposed subzone would be subject to the existing activation limit of FTZ 44.

    In accordance with the Board's regulations, Kathleen Boyce of the FTZ Staff is designated examiner to review the application and make recommendations to the Executive Secretary.

    Public comment is invited from interested parties. Submissions shall be addressed to the Board's Executive Secretary at the address below. The closing period for their receipt is December 14, 2016. Rebuttal comments in response to material submitted during the foregoing period may be submitted during the subsequent 15-day period to December 29, 2016.

    A copy of the application will be available for public inspection at the Office of the Executive Secretary, Foreign-Trade Zones Board, Room 21013, U.S. Department of Commerce, 1401 Constitution Avenue NW., Washington, DC 20230-0002, and in the “Reading Room” section of the Board's Web site, which is accessible via www.trade.gov/ftz.

    For further information, contact Kathleen Boyce at [email protected] or (202) 482-1346.

    Dated: October 28, 2016. Camille R. Evans, Acting Executive Secretary.
    [FR Doc. 2016-26746 Filed 11-3-16; 8:45 am] BILLING CODE 3510-DS-P
    DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [Order No. 2016] Approval of Subzone Status; Westlake Chemical Corporation; Sulphur, Louisiana

    Pursuant to its authority under the Foreign-Trade Zones Act of June 18, 1934, as amended (19 U.S.C. 81a-81u), the Foreign-Trade Zones Board (the Board) adopts the following Order:

    Whereas, the Foreign-Trade Zones Act provides for “. . . the establishment . . . of foreign-trade zones in ports of entry of the United States, to expedite and encourage foreign commerce, and for other purposes,” and authorizes the Foreign-Trade Zones Board to grant to qualified corporations the privilege of establishing foreign-trade zones in or adjacent to U.S. Customs and Border Protection ports of entry;

    Whereas, the Board's regulations (15 CFR part 400) provide for the establishment of subzones for specific uses;

    Whereas, the Lake Charles Harbor & Terminal District, grantee of Foreign-Trade Zone 87, has made application to the Board for the establishment of a subzone at the facilities of Westlake Chemical Corporation, located in Sulphur, Louisiana (FTZ Docket B-38-2016, docketed May 25, 2016);

    Whereas, notice inviting public comment has been given in the Federal Register (81 FR 35297-35298, June 2, 2016) and the application has been processed pursuant to the FTZ Act and the Board's regulations; and,

    Whereas, the Board adopts the findings and recommendations of the examiner's memorandum, and finds that the requirements of the FTZ Act and the Board's regulations are satisfied;

    Now, therefore, the Board hereby approves subzone status at the facilities of Westlake Chemical Corporation, located in Sulphur, Louisiana (Subzone 87F), as described in the application and Federal Register notice, subject to the FTZ Act and the Board's regulations, including Section 400.13.

    Signed at Washington, DC, October 25, 2016. Ronald K. Lorentzen Acting Assistant Secretary of Commerce for Enforcement and Compliance, Alternate Chairman, Foreign-Trade Zones Board.
    [FR Doc. 2016-26748 Filed 11-3-16; 8:45 am] BILLING CODE 3510-DS-P
    DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [B-44-2016] Foreign-Trade Zone (FTZ) 44—Morris County, New Jersey; Authorization of Production Activity; Givaudan Flavors Corporation (Flavor Products); East Hanover, New Jersey

    On June 13, 2016, the State of New Jersey, Department of State, grantee of FTZ 44, submitted a notification of proposed production activity to the Foreign-Trade Zones (FTZ) Board on behalf of Givaudan Flavors Corporation, within Subzone 44H in East Hanover, New Jersey.

    The notification was processed in accordance with the regulations of the FTZ Board (15 CFR part 400), including notice in the Federal Register inviting public comment (81 FR 42649, June 30, 2016). The FTZ Board has determined that no further review of the activity is warranted at this time. The production activity described in the notification is authorized, subject to the FTZ Act and the Board's regulations, including Section 400.14.

    Dated: October 27, 2016. Andrew McGilvray, Executive Secretary.
    [FR Doc. 2016-26739 Filed 11-3-16; 8:45 am] BILLING CODE 3510-DS-P
    DEPARTMENT OF COMMERCE International Trade Administration [A-570-045] 1-Hydroxyethylidene-1, 1-Diphosphonic Acid From the People's Republic of China: Affirmative Preliminary Determination of Sales at Less Than Fair Value, and Postponement of Final Determination AGENCY:

    Enforcement and Compliance, International Trade Administration, Department of Commerce.

    SUMMARY:

    The Department of Commerce (“Department”) preliminarily determines that 1-Hydroxyethylidene-1, 1-Diphosphonic Acid (“HEDP”) from the People's Republic of China (“PRC”) is being, or is likely to be, sold in the United States at less than fair value (“LTFV”), as provided in section 733 of the Tariff Act of 1930, as amended (“the Act”). The period of investigation (“POI”) is July 1, 2015 through December 31, 2015. The estimated weighted-average dumping margins are shown in the “Preliminary Determination” section of this notice. Interested parties are invited to comment on this preliminary determination.

    DATES:

    Effective November 4, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Omar Qureshi or Kenneth Hawkins, AD/CVD Operations, Office V, Enforcement and Compliance, International Trade Administration, U.S. Department of Commerce, 1401 Constitution Avenue NW., Washington, DC 20230; telephone: (202) 482-5307 or (202) 482-6491, respectively.

    SUPPLEMENTARY INFORMATION:

    Background

    The Department published the notice of initiation of this investigation on April 28, 2016.1 For a complete description of the events that followed the initiation of this investigation, see the Preliminary Decision Memorandum, which is dated concurrently with and hereby adopted by this notice.2 A list of topics included in the Preliminary Decision Memorandum is included as Appendix II to this notice. The Preliminary Decision Memorandum is a public document and is on file electronically via Enforcement and Compliance's Antidumping and Countervailing Duty Centralized Electronic Service System (“ACCESS”). ACCESS is available to registered users at https://access.trade.gov, and to all parties in the Central Records Unit, room B8024 of the main Department of Commerce building. In addition, a complete version of the Preliminary Decision Memorandum can be found at http://enforcement.trade.gov/frn/. The signed Preliminary Decision Memorandum and the electronic version of the Preliminary Decision Memorandum are identical in content.

    1See 1-Hydroxyethylidene-1, 1-Diphosphonic Acid from People's Republic of China: Initiation of Less-Than-Fair-Value Investigation, 81 FR 25377 (April 28, 2016) (“Initiation Notice”).

    2See Memorandum from Christian Marsh, Deputy Assistant Secretary for Antidumping and Countervailing Duty Operations, to Paul Piquado, Assistant Secretary for Enforcement and Compliance “Decision Memorandum for the Preliminary Determination in the Antidumping Duty Investigation of 1-Hydroxyethylidene-1, 1-Diphosphonic Acid from People's Republic of China,” dated concurrently with and hereby adopted by this notice (“Preliminary Decision Memorandum”).

    Scope of the Investigation

    The product covered by this investigation is HEDP from the PRC. For a full description of the scope of this investigation, see the “Scope of the Investigation,” in Appendix I.

    Scope Comments

    In accordance with the preamble to the Department's regulations,3 the Initiation Notice set aside a period of time for parties to raise issues regarding product coverage (i.e., scope).4 Certain interested parties commented on the scope of the investigation as it appeared in the Initiation Notice. For a summary of the product coverage comments and rebuttal responses submitted to the record for this preliminary determination, and accompanying discussion and analysis of all comments timely received, see the Preliminary Decision Memorandum.5

    3See Antidumping Duties; Countervailing Duties, 62 FR 27296, 27323 (May 19, 1997).

    4See Initiation Notice, 81 FR 25377.

    5See Preliminary Decision Memorandum.

    Methodology

    The Department is conducting this investigation in accordance with section 731 of the Act. We calculated export prices and constructed export prices in accordance with section 772 of the Act. Because the PRC is a non-market economy within the meaning of section 771(18) of the Act, we calculated normal value (“NV”) in accordance with section 773(c) of the Act. In addition, the Department relied on adverse facts available under sections 776(a) and (b) of the Act. Specifically, the Department did not receive timely responses to its Q&V questionnaire or separate rate applications from numerous PRC exporters and/or producers of merchandise under consideration that were named in the Petition and to whom the Department issued Q&V questionnaires.6 Because non-responsive PRC companies have not demonstrated that they are eligible for separate rate status, the Department considers them to be part of the PRC-wide entity.7 For a full description of the methodology underlying our preliminary conclusions, see the Preliminary Decision Memorandum.

    6See Q&V Delivery Confirmation Memo.

    7See Certain Cut-to-Length Carbon Steel Plate from the People's Republic of China: Final Results of Antidumping Duty Administrative Review; 2013-2014, 80 FR 75966 (December 7, 2015) and accompanying Issues and Decision Memorandum at Comment 1.

    Combination Rates

    In the Initiation Notice, the Department stated that it would calculate combination rates for the respondents that are eligible for a separate rate in this investigation. Policy Bulletin 05.1 describes this practice.8

    8See Enforcement and Compliance's Policy Bulletin No. 05.1, regarding, “Separate-Rates Practice and Application of Combination Rates in Antidumping Investigations involving Non-Market Economy Countries,” (April 5, 2005) (Policy Bulletin 05.1), available on the Department's Web site at http://enforcement.trade.gov/policy/bull05-1.pdf.

    Preliminary Determination

    The Department preliminarily determines that the following weighted-average dumping margins exist during the POI:

    Producer: Nanjing University of Chemical Technology Changzhou Wujin Water Quality Stabilizer Factory; Exporter: Nanjing University of Chemical Technology Changzhou Wujin Water Quality Stabilizer Factory and Nantong Uniphos Chemicals Co., Ltd. (collectively, "WW Group"); Weighted-average dumping margin: 179.97 percent.
    Producer: Shandong Taihe Water Treatment Technologies Co., Ltd.; Exporter: Shandong Taihe Chemicals Co., Ltd. ("Taihe"); Weighted-average dumping margin: 137.61 percent.
    Producer: Henan Qingshuiyuan Technology Co., Ltd.; Exporter: Henan Qingshuiyuan Technology Co., Ltd. ("Qingshuiyuan"); Weighted-average dumping margin: 168.95 percent.
    Producer: Jianghai Environmental Protection Co., Ltd.; Exporter: Jianghai Environmental Protection Co., Ltd. ("Jianghai"); Weighted-average dumping margin: 168.95 percent.
    PRC-Wide Entity; Weighted-average dumping margin: 179.97 percent.
    Non-Selected Separate Rate

    In calculating rates for non-individually investigated respondents in the context of non-market economy cases, the Department looks to section 735(c)(5)(A)-(B) of the Act, which provides instructions for calculating the all-others rate in an investigation. Section 735(c)(5)(A) of the Act provides that the estimated all-others rate shall be equivalent to the weighted average of the estimated weighted-average dumping margins calculated for exporters and producers individually investigated, excluding any margins that are zero, de minimis, or based entirely on facts available. Section 735(c)(5)(B) of the Act provides that where all individually investigated exporters or producers receive rates that are zero, de minimis, or based entirely on facts available, then the Department may use “any reasonable method” to establish the all-others rate for those companies not individually investigated.

    Apart from the mandatory respondents in this investigation, two other PRC exporters of the subject merchandise during the POI established entitlement to a separate rate.9 Thus, separate rates are being assigned in this segment to Jianghai and Qingshuiyuan. There currently exist no individually investigated respondents that have failed to cooperate in this investigation, and there are no zero or de minimis margins. Therefore, we are preliminarily determining the separate rate for non-selected companies (Jianghai and Qingshuiyuan) based on a weighted-average of the calculated rates determined for the mandatory respondents,10 in accordance with section 735(c)(5)(A) of the Act.

    9See Preliminary Decision Memo.

    10 We have calculated (A) a weighted-average of the dumping margins calculated for the mandatory respondents; (B) a simple average of the dumping margins calculated for the mandatory respondents; and (C) a weighted-average of the dumping margins calculated for the mandatory respondents using each company's publicly-ranged values for the merchandise under consideration. We would compare (B) and (C) to (A) and select the rate closest to (A) as the most appropriate rate for all other companies. See Ball Bearings and Parts Thereof from France, Germany, Italy, Japan, and the United Kingdom: Final Results of Antidumping Duty Administrative Reviews, Final Results of Changed-Circumstances Review, and Revocation of an Order in Part, 75 FR 53661, 53663 (September 1, 2010).
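    The selection step described in the preceding footnote can be illustrated with a small arithmetic sketch. Only the 179.97 and 137.61 percent margins below come from this notice; the export quantities and publicly-ranged values are hypothetical stand-ins, since the actual weights are business proprietary, and the comparison of (B) and (C) to (A) exists precisely because the proprietary-weighted rate in (A) cannot itself be published.

```python
# Illustrative sketch only: hypothetical quantities and publicly-ranged values,
# not the Department's actual proprietary data or calculations.

def weighted_average(margins, quantities):
    """Quantity-weighted average of dumping margins (percent)."""
    total_qty = sum(quantities)
    return sum(m * q for m, q in zip(margins, quantities)) / total_qty

# Margins (percent) for the two mandatory respondents, from this notice.
margins = [179.97, 137.61]
actual_qty = [1000.0, 3000.0]          # business-proprietary quantities (hypothetical)
public_ranged_qty = [1100.0, 2900.0]   # publicly-ranged quantities (hypothetical)

rate_a = weighted_average(margins, actual_qty)          # (A) weighted average, proprietary weights
rate_b = sum(margins) / len(margins)                    # (B) simple average
rate_c = weighted_average(margins, public_ranged_qty)   # (C) weighted average, publicly-ranged weights

# Select whichever of (B) or (C) lies closest to (A) as the separate rate
# for the non-selected companies.
separate_rate = min((rate_b, rate_c), key=lambda r: abs(r - rate_a))
print(round(separate_rate, 2))
```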

    Suspension of Liquidation

    In accordance with section 733(d)(2) of the Act, the Department will direct U.S. Customs and Border Protection (CBP) to suspend liquidation of all entries of HEDP from the PRC, as described in the “Scope of the Investigation” section, entered, or withdrawn from warehouse, for consumption on or after the date of publication of this notice in the Federal Register.

    Pursuant to section 733(d)(1)(B) of the Act and 19 CFR 351.205(d), the Department will instruct CBP to require a cash deposit 11 equal to the weighted-average amount by which NV exceeds U.S. price as follows: (1) The cash deposit rate for the exporter/producer combination listed in the table above will be the rate identified for that combination in the table; (2) for all combinations of PRC exporters/producers of merchandise under consideration that have not received their own separate rate above, the cash-deposit rate will be the cash deposit rate established for the PRC-wide entity; and (3) for all non-PRC exporters of the merchandise under consideration which have not received their own separate rate above, the cash-deposit rate will be the cash deposit rate applicable to the PRC exporter/producer combination that supplied that non-PRC exporter. These suspension of liquidation instructions will remain in effect until further notice.

    11See Modification of Regulations Regarding the Practice of Accepting Bonds During the Provisional Measures Period in Antidumping and Countervailing Duty Investigations, 76 FR 61042 (October 3, 2011).

    We normally adjust antidumping duty cash deposit rates by the amount of export subsidies, where appropriate. In the companion CVD investigation, we preliminarily found that the WW Group did not receive export subsidies.12 Therefore, no offset to the WW Group's cash deposit rate for export subsidies is necessary.13 With respect to Taihe, because its countervailing duty rate in the companion investigation included an amount for export subsidies, an offset of 0.28 percent will be made to its cash deposit rate.14 With respect to the separate-rate companies, we find that an export subsidy adjustment of 0.14 percent to the cash deposit rate is warranted because this is the export subsidy rate included in the countervailing duty “all others” rate to which the separate-rate companies are subject. For the PRC-wide entity, which received an adverse facts available rate in this preliminary determination, as an extension of the adverse inference found necessary pursuant to section 776(b) of the Act, the Department has not adjusted the PRC-wide entity's AD cash deposit rate by the lowest export subsidy rate determined for any party in the companion CVD proceeding, because the lowest export subsidy rate determined in the companion CVD proceeding is 0.00 percent.15,16

    12See Countervailing Duty Investigation of 1-Hydroxyethylidene-1, 1-Diphosphonic Acid from the People's Republic of China: Preliminary Affirmative Determination and Alignment of Final Determination with Final Antidumping Duty Determination, 81 FR 62084 (September 8, 2016) (“HEDP CVD Prelim”), and accompanying Preliminary Decision Memorandum at 13-19.

    13Id.

    14Id.

    15See, e.g., Certain Passenger Vehicle and Light Truck Tires From the People's Republic of China: Preliminary Determination of Sales at Less Than Fair Value; Preliminary Affirmative Determination of Critical Circumstances; In Part and Postponement of Final Determination, 80 FR 4250 (January 27, 2015), and accompanying Issues and Decision Memorandum at 35.

    16See HEDP CVD Prelim at 81 FR 62085.
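    As a rough illustration of the export-subsidy offsets described above, the sketch below pairs each preliminary margin with the offset stated in this notice. The pairings are taken from the text, but the resulting figures are simple arithmetic for illustration only, not the cash deposit rates that will appear in CBP instructions.

```python
# Minimal sketch of the export-subsidy offset: the adjusted cash deposit rate is
# the AD margin less the applicable export subsidy rate, both in percent.

def cash_deposit_rate(ad_margin, export_subsidy_rate):
    """AD cash deposit rate after offsetting the export subsidy rate (percent)."""
    return round(ad_margin - export_subsidy_rate, 2)

offsets = {
    "WW Group":              (179.97, 0.00),  # no export subsidies found in the companion CVD prelim
    "Taihe":                 (137.61, 0.28),  # export subsidy amount included in its CVD rate
    "Separate-rate firms":   (168.95, 0.14),  # export subsidy portion of the CVD all-others rate
    "PRC-wide entity":       (179.97, 0.00),  # no offset: lowest CVD export subsidy rate is 0.00
}

for company, (margin, subsidy) in offsets.items():
    print(company, cash_deposit_rate(margin, subsidy))
```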

    Pursuant to section 777A(f) of the Act, we normally adjust preliminary cash deposit rates for estimated domestic subsidy pass-through, where appropriate. However, in this case there is no basis to grant a domestic subsidy pass-through adjustment.17

    17See Preliminary Decision Memorandum at 28-29.

    Disclosure and Public Comment

    We intend to disclose the calculations performed to interested parties in this proceeding within five days of the date of announcement of this preliminary determination in accordance with 19 CFR 351.224(b). Interested parties may submit case briefs, rebuttal briefs, and hearing requests.18 For a schedule of the deadlines for filing case briefs, rebuttal briefs, and hearing requests, see the Preliminary Decision Memorandum at Section IX.

    18See 19 CFR 351.309(c)-(d), 19 CFR 351.310(c).

    International Trade Commission (“ITC”) Notification

    In accordance with section 733(f) of the Act, we will notify the ITC of our affirmative preliminary determination of sales at LTFV. If our final determination is affirmative, the ITC will determine before the later of 120 days after the date of this preliminary determination or 45 days after our final determination whether these imports are materially injuring, or threaten material injury to, the U.S. industry.19

    19See section 735(b)(2) of the Act.

    Postponement of Final Determination and Extension of Provisional Measures

    Section 735(a)(2) of the Act provides that a final determination may be postponed until not later than 135 days after the date of the publication of the preliminary determination if, in the event of an affirmative preliminary determination, a request for such postponement is made by exporters who account for a significant proportion of exports of the subject merchandise, or in the event of a negative preliminary determination, a request for such postponement is made by Petitioners. 19 CFR 351.210(e)(2) requires that requests by respondents for postponement of a final antidumping determination be accompanied by a request for extension of provisional measures from a four-month period to a period not more than six months in duration.

    On October 19 and 20, 2016, pursuant to 19 CFR 351.210(b) and (e), the WW Group and Taihe, respectively, requested that, contingent upon an affirmative preliminary determination of sales at LTFV, the Department postpone the final determination and that provisional measures be extended to a period not to exceed six months.20

    20See the WW Group's Letter (October 19, 2016); Taihe's Letter (October 20, 2016).

    In accordance with section 735(a)(2)(A) of the Act and 19 CFR 351.210(b)(2)(ii), because (1) our preliminary determination is affirmative; (2) the requesting exporters account for a significant proportion of exports of the subject merchandise; and (3) no compelling reasons for denial exist, we are postponing the final determination and extending the provisional measures from a four-month period to a period not greater than six months. Accordingly, we will make our final determination no later than 135 days after the date of publication of this preliminary determination, pursuant to section 735(a)(2) of the Act.21

    21See also 19 CFR 351.210(e).

    This determination is issued and published in accordance with sections 733(f) and 777(i)(1) of the Act and 19 CFR 351.205(c).

    Dated: October 27, 2016. Paul Piquado, Assistant Secretary for Enforcement and Compliance.
    Appendix I
    Scope of the Investigation

    The merchandise covered by this investigation includes all grades of aqueous acidic (non-neutralized) concentrations of 1-hydroxyethylidene-1, 1-diphosphonic acid (HEDP), also referred to as hydroxyethylidenediphosphonic acid, hydroxyethanediphosphonic acid, acetodiphosphonic acid, and etidronic acid. The CAS (Chemical Abstract Service) registry number for HEDP is 2809-21-4.

    The merchandise subject to this investigation is currently classified in the Harmonized Tariff Schedule of the United States (HTSUS) at subheading 2931.90.9043. It may also enter under HTSUS subheadings 2811.19.6090 and 2931.90.9041. While HTSUS subheadings and the CAS registry number are provided for convenience and customs purposes only, the written description of the scope of this investigation is dispositive.

    Appendix II
    List of Topics Discussed in the Preliminary Decision Memorandum
    I. Summary
    II. Background
    III. Period of Investigation
    IV. Postponement of Final Determination and Extension of Provisional Measures
    V. Scope Comments
    VI. Selection of Respondents
    VII. Scope of the Investigation
    VIII. Discussion of the Methodology
    a. Non-Market Economy Country
    b. Surrogate Country and Surrogate Values Comments
    c. Separate Rates
    d. Combination Rates
    e. Collapsing and Affiliation
    f. The PRC-Wide Entity
    g. Application of Facts Available and Adverse Inferences
    h. Date of Sale
    i. Comparisons to Fair Value
    j. Normal Value
    k. Factor Valuation Methodology
    l. Determination of the Comparison Method
    IX. Currency Conversion
    X. Export Subsidy Adjustment
    XI. Adjustment Under Section 777A(f) of the Act
    XII. Disclosure and Public Comment
    XIII. Verification
    XIV. Recommendation
    [FR Doc. 2016-26755 Filed 11-3-16; 8:45 am] BILLING CODE 3510-DS-P
    DEPARTMENT OF COMMERCE International Trade Administration Meeting of the United States Travel and Tourism Advisory Board AGENCY:

    International Trade Administration, U.S. Department of Commerce.

    ACTION:

    Notice of an Open Meeting.

    SUMMARY:

    The United States Travel and Tourism Advisory Board (Board) will hold an open meeting on Friday, November 18, 2016. The Board was re-chartered in August 2015 and advises the Secretary of Commerce on matters relating to the U.S. travel and tourism industry. The purpose of the meeting is for Board members to discuss and prioritize longer-term travel and tourism issues and considerations regarding recommendations from the Board. The final agenda will be posted on the Department of Commerce Web site for the Board at http://trade.gov/ttab, at least one week in advance of the meeting.

    DATES:

    Friday, November 18, 2016. The deadline for members of the public to register, including requests to make comments during the meeting and for auxiliary aids, or to submit written comments for dissemination prior to the meeting, is 5 p.m. EDT on November 11, 2016.

    ADDRESSES:

    The meeting will be held at Dulles International Airport, 1 Saarinen Cir, Dulles, VA 20166.

    Requests to register (including to speak or for auxiliary aids) and any written comments should be submitted to: U.S. Travel and Tourism Advisory Board, U.S. Department of Commerce, Room 4043, 1401 Constitution Avenue NW., Washington, DC 20230, [email protected]. Members of the public are encouraged to submit registration requests and written comments via email to ensure timely receipt.

    FOR FURTHER INFORMATION CONTACT:

    Li Zhou, the United States Travel and Tourism Advisory Board, Room 4043, 1401 Constitution Avenue NW., Washington, DC 20230, telephone: 202-482-4501, email: [email protected].

    SUPPLEMENTARY INFORMATION:

    Background: The Board advises the Secretary of Commerce on matters relating to the U.S. travel and tourism industry.

    Public Participation: The meeting will be open to the public and will be accessible to people with disabilities. All guests are required to register in advance by the deadline identified under the DATES caption. Requests for auxiliary aids must be submitted by the registration deadline. Last minute requests will be accepted, but may not be possible to fill. There will be fifteen (15) minutes allotted for oral comments from members of the public joining the meeting. To accommodate as many speakers as possible, the time for public comments may be limited to three (3) minutes per person. Individuals wishing to reserve speaking time during the meeting must submit a request at the time of registration, as well as the name and address of the proposed speaker. If the number of registrants requesting to make statements is greater than can be reasonably accommodated during the meeting, the International Trade Administration may conduct a lottery to determine the speakers. Speakers are requested to submit a written copy of their prepared remarks by 5:00 p.m. EDT on Friday, November 11, 2016, for inclusion in the meeting records and for circulation to the members of the Travel and Tourism Advisory Board.

    In addition, any member of the public may submit pertinent written comments concerning the Board's affairs at any time before or after the meeting. Comments may be submitted to Li Zhou at the contact information indicated above. To be considered during the meeting, comments must be received no later than 5:00 p.m. EDT on Friday, November 11, 2016, to ensure transmission to the Board prior to the meeting. Comments received after that date and time will be distributed to the members but may not be considered on the call. Copies of Board meeting minutes will be available within 90 days of the meeting.

    Dated: October 31, 2016. Li Zhou, Executive Secretary, United States Travel and Tourism Advisory Board.
    [FR Doc. 2016-26713 Filed 11-3-16; 8:45 am] BILLING CODE 3510-DR-P
    DEPARTMENT OF COMMERCE International Trade Administration Quarterly Update to Annual Listing of Foreign Government Subsidies on Articles of Cheese Subject to an In-Quota Rate of Duty AGENCY:

    Enforcement and Compliance, International Trade Administration, Department of Commerce.

    DATES:

    Effective November 4, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Stephanie Moore, AD/CVD Operations, Office III, Enforcement and Compliance, International Trade Administration, U.S. Department of Commerce, 14th Street and Constitution Ave. NW., Washington, DC 20230, telephone: (202) 482-3692.

    SUPPLEMENTARY INFORMATION:

    Section 702 of the Trade Agreements Act of 1979 (as amended) (the Act) requires the Department of Commerce (the Department) to determine, in consultation with the Secretary of Agriculture, whether any foreign government is providing a subsidy with respect to any article of cheese subject to an in-quota rate of duty, as defined in section 702(h) of the Act, and to publish quarterly updates to the type and amount of those subsidies. We hereby provide the Department's quarterly update of subsidies on articles of cheese that were imported during the period April 1, 2016, through June 30, 2016.

    The Department has developed, in consultation with the Secretary of Agriculture, information on subsidies, as defined in section 702(h) of the Act, being provided either directly or indirectly by foreign governments on articles of cheese subject to an in-quota rate of duty. The appendix to this notice lists the country, the subsidy program or programs, and the gross and net amounts of each subsidy for which information is currently available. The Department will incorporate additional programs which are found to constitute subsidies, and additional information on the subsidy programs listed, as the information is developed.

    The Department encourages any person having information on foreign government subsidy programs which benefit articles of cheese subject to an in-quota rate of duty to submit such information in writing to the Assistant Secretary for Enforcement and Compliance, U.S. Department of Commerce, 14th Street and Constitution Ave. NW., Washington, DC 20230.

    This determination and notice are in accordance with section 702(a) of the Act.

    Dated: October 28, 2016. Ronald K. Lorentzen, Acting Assistant Secretary for Enforcement and Compliance.
    Appendix—Subsidy Programs on Cheese Subject to an In-Quota Rate of Duty
    Country; Program(s); Gross 1 subsidy ($/lb); Net 2 subsidy ($/lb):
    28 European Union Member States 3; European Union Restitution Payments; $0.00; $0.00.
    Canada; Export Assistance on Certain Types of Cheese; 0.46; 0.46.
    Norway; Indirect (Milk) Subsidy; 0.00; 0.00.
    Norway; Consumer Subsidy; 0.00; 0.00.
    Norway; Total; 0.00; 0.00.
    Switzerland; Deficiency Payments; 0.00; 0.00.
    1 Defined in 19 U.S.C. 1677(5). 2 Defined in 19 U.S.C. 1677(6). 3 The 28 member states of the European Union are: Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden, and the United Kingdom.
    [FR Doc. 2016-26751 Filed 11-3-16; 8:45 am] BILLING CODE 3510-DS-P
    DEPARTMENT OF COMMERCE International Trade Administration Antidumping or Countervailing Duty Order, Finding, or Suspended Investigation; Opportunity To Request Administrative Review AGENCY:

    Enforcement and Compliance, International Trade Administration, Department of Commerce.

    FOR FURTHER INFORMATION CONTACT:

    Brenda E. Waters, Office of AD/CVD Operations, Customs Liaison Unit, Enforcement and Compliance, International Trade Administration, U.S. Department of Commerce, 14th Street and Constitution Avenue NW., Washington, DC 20230, telephone: (202) 482-4735.

    Background

    Each year during the anniversary month of the publication of an antidumping or countervailing duty order, finding, or suspended investigation, an interested party, as defined in section 771(9) of the Tariff Act of 1930, as amended (“the Act”), may request, in accordance with 19 CFR 351.213, that the Department of Commerce (“the Department”) conduct an administrative review of that antidumping or countervailing duty order, finding, or suspended investigation.

    All deadlines for the submission of comments or actions by the Department discussed below refer to the number of calendar days from the applicable starting date.

    Respondent Selection

    In the event the Department limits the number of respondents for individual examination for administrative reviews initiated pursuant to requests made for the orders identified below, the Department intends to select respondents based on U.S. Customs and Border Protection (“CBP”) data for U.S. imports during the period of review. We intend to release the CBP data under Administrative Protective Order (“APO”) to all parties having an APO within five days of publication of the initiation notice and to make our decision regarding respondent selection within 21 days of publication of the initiation Federal Register notice. Therefore, we encourage all parties interested in commenting on respondent selection to submit their APO applications on the date of publication of the initiation notice, or as soon thereafter as possible. The Department invites comments regarding the CBP data and respondent selection within five days of placement of the CBP data on the record of the review.

    In the event the Department decides it is necessary to limit individual examination of respondents and conduct respondent selection under section 777A(c)(2) of the Act:

    In general, the Department finds that determinations concerning whether particular companies should be “collapsed” (i.e., treated as a single entity for purposes of calculating antidumping duty rates) require a substantial amount of detailed information and analysis, which often require follow-up questions and analysis. Accordingly, the Department will not conduct collapsing analyses at the respondent selection phase of this review and will not collapse companies at the respondent selection phase unless there has been a determination to collapse certain companies in a previous segment of this antidumping proceeding (i.e., investigation, administrative review, new shipper review or changed circumstances review). For any company subject to this review, if the Department determined, or continued to treat, that company as collapsed with others, the Department will assume that such companies continue to operate in the same manner and will collapse them for respondent selection purposes. Otherwise, the Department will not collapse companies for purposes of respondent selection. Parties are requested to (a) identify which companies subject to review previously were collapsed, and (b) provide a citation to the proceeding in which they were collapsed. Further, if companies are requested to complete the Quantity and Value Questionnaire for purposes of respondent selection, in general each company must report volume and value data separately for itself. Parties should not include data for any other party, even if they believe they should be treated as a single entity with that other party. If a company was collapsed with another company or companies in the most recently completed segment of this proceeding where the Department considered collapsing that entity, complete quantity and value data for that collapsed entity must be submitted.

    Deadline for Withdrawal of Request for Administrative Review

    Pursuant to 19 CFR 351.213(d)(1), a party that requests a review may withdraw that request within 90 days of the date of publication of the notice of initiation of the requested review. The regulation provides that the Department may extend this time if it is reasonable to do so. In order to provide parties additional certainty with respect to when the Department will exercise its discretion to extend this 90-day deadline, interested parties are advised that, with regard to reviews requested on the basis of anniversary months on or after November 2016, the Department does not intend to extend the 90-day deadline unless the requestor demonstrates that an extraordinary circumstance prevented it from submitting a timely withdrawal request. Determinations by the Department to extend the 90-day deadline will be made on a case-by-case basis.

    The Department is providing this notice on its Web site, as well as in its “Opportunity to Request Administrative Review” notices, so that interested parties will be aware of the manner in which the Department intends to exercise its discretion in the future.

    OPPORTUNITY TO REQUEST A REVIEW:

    Not later than the last day of November 2016,1 interested parties may request administrative review of the following orders, findings, or suspended investigations, with anniversary dates in November for the following periods:

    1 Or the next business day, if the deadline falls on a weekend, federal holiday or any other day when the Department is closed.

    Antidumping Duty Proceedings
    Brazil: Circular Welded Non-Alloy Steel Pipe, A-351-809, 11/1/15-10/31/16
    Indonesia: Certain Coated Paper Suitable for High-Quality Print Graphics Using Sheet-Fed Presses, A-560-823, 11/1/15-10/31/16
    Indonesia: Monosodium Glutamate, A-560-826, 11/1/15-10/31/16
    Mexico: Certain Circular Welded Non-Alloy Steel Pipe, A-201-805, 11/1/15-10/31/16
    Mexico: Seamless Refined Copper Pipe and Tube, A-201-838, 11/1/15-10/31/16
    Mexico: Steel Concrete Reinforcing Bar, A-201-844, 11/1/15-10/31/16
    Republic of Korea: Certain Circular Welded Non-Alloy Steel Pipe, A-580-809, 11/1/15-10/31/16
    Taiwan: Certain Hot-Rolled Carbon Steel Flat Products, A-583-835, 11/1/15-10/31/16
    Taiwan: Certain Circular Welded Non-Alloy Steel Pipe, A-583-814, 11/1/15-10/31/16
    Thailand: Certain Hot-Rolled Carbon Steel Flat Products, A-549-817, 11/1/15-10/31/16
    The People's Republic of China: Certain Cut-To-Length Carbon Steel, A-570-849, 11/1/15-10/31/16
    The People's Republic of China: Certain Hot-Rolled Carbon Steel Flat Products, A-570-865, 11/1/15-10/31/16
    The People's Republic of China: Certain Coated Paper Suitable For High-Quality Print Graphics Using Sheet-Fed Presses, A-570-958, 11/1/15-10/31/16
    The People's Republic of China: Diamond Sawblades and Parts Thereof, A-570-900, 11/1/15-10/31/16
    The People's Republic of China: Fresh Garlic, A-570-831, 11/1/15-10/31/16
    The People's Republic of China: Lightweight Thermal Paper, A-570-920, 11/1/15-10/31/16
    The People's Republic of China: Monosodium Glutamate, A-570-992, 11/1/15-10/31/16
    The People's Republic of China: Paper Clips, A-570-826, 11/1/15-10/31/16
    The People's Republic of China: Polyethylene Terephthalate (Pet) Film, A-570-924, 11/1/15-10/31/16
    The People's Republic of China: Pure Magnesium in Granular Form, A-570-864, 11/1/15-10/31/16
    The People's Republic of China: Refined Brown Aluminum Oxide, A-570-882, 11/1/15-10/31/16
    The People's Republic of China: Seamless Carbon and Alloy Steel Standard, Line, and Pressure Pipe, A-570-956, 11/1/15-10/31/16
    The People's Republic of China: Seamless Refined Copper Pipe and Tube, A-570-964, 11/1/15-10/31/16
    Ukraine: Certain Hot-Rolled Carbon Steel Flat Products, A-823-811, 11/1/15-10/31/16
    United Arab Emirates: Polyethylene Terephthalate (Pet) Film, A-520-803, 11/1/15-10/31/16
    Countervailing Duty Proceedings
    Indonesia: Certain Coated Paper Suitable for High-Quality Print Graphics Using Sheet-Fed Presses, C-560-824, 1/1/15-12/31/15
    The People's Republic of China: Certain Coated Paper Suitable For High-Quality Print Graphics Using Sheet-Fed Presses, C-570-959, 1/1/15-12/31/15
    The People's Republic of China: Chlorinated Isocyanurates, C-570-991, 1/1/15-12/31/15
    The People's Republic of China: Lightweight Thermal Paper, C-570-921, 1/1/15-12/31/15
    The People's Republic of China: Seamless Carbon and Alloy Steel Standard, Line, and Pressure Pipe, C-570-957, 1/1/15-12/31/15
    Turkey: Steel Concrete Reinforcing Bar, C-489-819, 1/1/15-12/31/15
    Suspension Agreements
    Ukraine: Certain Cut-To-Length Carbon Steel, A-823-808, 11/1/15-10/31/16

    In accordance with 19 CFR 351.213(b), an interested party as defined by section 771(9) of the Act may request in writing that the Secretary conduct an administrative review. For both antidumping and countervailing duty reviews, the interested party must specify the individual producers or exporters covered by an antidumping finding or an antidumping or countervailing duty order or suspension agreement for which it is requesting a review. In addition, a domestic interested party or an interested party described in section 771(9)(B) of the Act must state why it desires the Secretary to review those particular producers or exporters. If the interested party intends for the Secretary to review sales of merchandise by an exporter (or a producer if that producer also exports merchandise from other suppliers) which was produced in more than one country of origin and each country of origin is subject to a separate order, then the interested party must state specifically, on an order-by-order basis, which exporter(s) the request is intended to cover.

    Note that, for any party the Department was unable to locate in prior segments, the Department will not accept a request for an administrative review of that party absent new information as to the party's location. Moreover, if the interested party who files a request for review is unable to locate the producer or exporter for which it requested the review, the interested party must provide an explanation of the attempts it made to locate the producer or exporter at the same time it files its request for review, in order for the Secretary to determine if the interested party's attempts were reasonable, pursuant to 19 CFR 351.303(f)(3)(ii).

    As explained in Antidumping and Countervailing Duty Proceedings: Assessment of Antidumping Duties, 68 FR 23954 (May 6, 2003), and Non-Market Economy Antidumping Proceedings: Assessment of Antidumping Duties, 76 FR 65694 (October 24, 2011) the Department clarified its practice with respect to the collection of final antidumping duties on imports of merchandise where intermediate firms are involved. The public should be aware of this clarification in determining whether to request an administrative review of merchandise subject to antidumping findings and orders.2

    2See also the Enforcement and Compliance Web site at http://trade.gov/enforcement/.

    Further, as explained in Antidumping Proceedings: Announcement of Change in Department Practice for Respondent Selection in Antidumping Duty Proceedings and Conditional Review of the Nonmarket Economy Entity in NME Antidumping Duty Proceedings, 78 FR 65963 (November 4, 2013), the Department clarified its practice with regard to the conditional review of the non-market economy (NME) entity in administrative reviews of antidumping duty orders. The Department will no longer consider the NME entity as an exporter conditionally subject to administrative reviews. Accordingly, the NME entity will not be under review unless the Department specifically receives a request for, or self-initiates, a review of the NME entity.3 In administrative reviews of antidumping duty orders on merchandise from NME countries where a review of the NME entity has not been initiated, but where an individual exporter for which a review was initiated does not qualify for a separate rate, the Department will issue a final decision indicating that the company in question is part of the NME entity. However, in that situation, because no review of the NME entity was conducted, the NME entity's entries were not subject to the review and the rate for the NME entity is not subject to change as a result of that review (although the rate for the individual exporter may change as a function of the finding that the exporter is part of the NME entity). Following initiation of an antidumping administrative review when there is no review requested of the NME entity, the Department will instruct CBP to liquidate entries for all exporters not named in the initiation notice, including those that were suspended at the NME entity rate.

    3 In accordance with 19 CFR 351.213(b)(1), parties should specify that they are requesting a review of entries from exporters comprising the entity, and to the extent possible, include the names of such exporters in their request.

    All requests must be filed electronically in Enforcement and Compliance's Antidumping and Countervailing Duty Centralized Electronic Service System ("ACCESS") on Enforcement and Compliance's ACCESS Web site at http://access.trade.gov.4 Further, in accordance with 19 CFR 351.303(f)(1)(i), a copy of each request must be served on the petitioner and each exporter or producer specified in the request.

    4See Antidumping and Countervailing Duty Proceedings: Electronic Filing Procedures; Administrative Protective Order Procedures, 76 FR 39263 (July 6, 2011).

    The Department will publish in the Federal Register a notice of “Initiation of Administrative Review of Antidumping or Countervailing Duty Order, Finding, or Suspended Investigation” for requests received by the last day of November 2016. If the Department does not receive, by the last day of November 2016, a request for review of entries covered by an order, finding, or suspended investigation listed in this notice and for the period identified above, the Department will instruct CBP to assess antidumping or countervailing duties on those entries at a rate equal to the cash deposit of (or bond for) estimated antidumping or countervailing duties required on those entries at the time of entry, or withdrawal from warehouse, for consumption and to continue to collect the cash deposit previously ordered.

    For the first administrative review of any order, there will be no assessment of antidumping or countervailing duties on entries of subject merchandise entered, or withdrawn from warehouse, for consumption during the relevant provisional-measures “gap” period of the order, if such a gap period is applicable to the period of review.

    This notice is not required by statute but is published as a service to the international trading community.

    Dated: October 27, 2016. Christian Marsh, Deputy Assistant Secretary for Antidumping and Countervailing Duty Operations.
    [FR Doc. 2016-26749 Filed 11-3-16; 8:45 am] BILLING CODE 3510-DS-P
    DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration RIN 0648-XA341 Marine Mammals; File No. 15324 AGENCY:

    National Marine Fisheries Service (NMFS), National Oceanic and Atmospheric Administration (NOAA), Commerce.

    ACTION:

    Notice; issuance of permit amendment.

    SUMMARY:

    Notice is hereby given that Alaska Department of Fish and Game (ADF&G), Division of Wildlife Conservation, Juneau, AK (Responsible Party: Robert Small, Ph.D.) has been issued a minor amendment to Scientific Permit No. 15324.

    ADDRESSES:

    The amendment and related documents are available for review upon written request or by appointment in the Permits and Conservation Division, Office of Protected Resources, NMFS, 1315 East-West Highway, Room 13705, Silver Spring, MD 20910; phone (301) 427-8401; fax (301) 713-0376.

    FOR FURTHER INFORMATION CONTACT:

    Sara Young or Amy Sloan, phone (301) 427-8401.

    SUPPLEMENTARY INFORMATION:

    The requested amendment has been granted under the authority of the Marine Mammal Protection Act of 1972, as amended (16 U.S.C. 1361 et seq.) and the regulations governing the taking and importing of marine mammals (50 CFR part 216).

    The original permit (No. 15324), issued on May 25, 2011 (76 FR 30309), authorized taking spotted (Phoca largha), ringed (Phoca hispida), bearded (Erignathus barbatus), and ribbon seals (Histriophoca fasciata) in the Bering, Chukchi, and Beaufort Seas of Alaska to monitor the status and health of all four species by analyzing samples from the subsistence harvest and by documenting movements and habitat use by tracking animals with satellite transmitters through December 31, 2016. The major amendment (No. 15324-01) amended the permit to include: (1) Takes by harassment during aerial and vessel surveys to monitor seal distribution relative to changes in sea ice; (2) increased takes by incidental harassment; (3) the use of additional sedative drugs during capture activities; and (4) the use of remote dart-delivery as a method for capturing bearded seals. It also reduced the number of authorized mortalities for bearded seals and ringed seals. A second amendment (No. 15324-02) was issued to: (1) Modify the annual research time period from March-November to year-round; (2) authorize captures on land (in addition to already permitted captures in water and on ice); and (3) modify open water capture techniques for bearded seals to include the use of a non-lethal deterrent (e.g., rubber bullets, bean bags, or paintballs) aimed near but not on seals while pursuing a seal prior to capture in a net, to increase the chances of successful capture. The minor amendment (No. 15324-03) extends the duration of the permit through December 31, 2017, but does not change any other terms or conditions of the permit.

    Dated: October 31, 2016. Julia Harrison, Chief, Permits and Conservation Division, Office of Protected Resources, National Marine Fisheries Service.
    [FR Doc. 2016-26677 Filed 11-3-16; 8:45 am] BILLING CODE 3510-22-P
    DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration U.S. Integrated Ocean Observing System (IOOS®) Advisory Committee AGENCY:

    National Ocean Service, National Oceanic and Atmospheric Administration (NOAA), Department of Commerce.

    ACTION:

    Notice of open meeting (via teleconference).

    SUMMARY:

    Notice is hereby given of a virtual meeting of the U.S. Integrated Ocean Observing System (IOOS®) Advisory Committee (Committee).

    DATES AND TIMES:

    The public meeting will be held on Tuesday, November 22, 2016, from 3:00 p.m. to 4:00 p.m. ET. These times and the agenda topics described below are subject to change. Refer to the Web page listed below for the most up-to-date meeting agenda.

    FOR FURTHER INFORMATION CONTACT:

    Jessica Snowden, Designated Federal Official, U.S. IOOS Advisory Committee, U.S. IOOS Program, 1315 East-West Highway, 2nd Floor, Silver Spring, MD 20910; Phone 240-533-9466; Fax 301-713-3281; Email [email protected]; or visit the U.S. IOOS Advisory Committee Web site at https://ioos.noaa.gov/community/u-s-ioos-advisory-committee/.

    SUPPLEMENTARY INFORMATION:

    The Committee meeting will be held via teleconference. Members of the public who wish to participate in the meeting must register in advance by 5:00 p.m. ET on November 21, 2016. Please register by contacting Jessica Snowden, Designated Federal Official, by email at [email protected] or by telephone at 240-533-9466. Teleconference information will be provided to registrants prior to the meeting. While the meeting will be open to the public, teleconference capacity may be limited.

    The Committee was established by the NOAA Administrator as directed by Section 12304 of the Integrated Coastal and Ocean Observation System Act, part of the Omnibus Public Land Management Act of 2009 (Pub. L. 111-11). The Committee advises the NOAA Administrator and the Interagency Ocean Observation Committee (IOOC) on matters related to the responsibilities and authorities set forth in section 12302 of the Integrated Coastal and Ocean Observation System Act of 2009 and other appropriate matters as the Under Secretary refers to the Committee for review and advice.

    The Committee will provide advice on:

    (a) administration, operation, management, and maintenance of the System;

    (b) expansion and periodic modernization and upgrade of technology components of the System;

    (c) identification of end-user communities, their needs for information provided by the System, and the System's effectiveness in disseminating information to end-user communities and to the general public; and

    (d) any other purpose identified by the Under Secretary of Commerce for Oceans and Atmosphere or the Interagency Ocean Observation Committee.

    The meeting will be open to public participation with a 10-minute public comment period on November 22, 2016, from 3:45 p.m. to 3:55 p.m. (check agenda on Web site to confirm time.) The Committee expects that public statements presented at its meetings will not be repetitive of previously submitted verbal or written statements. In general, each individual or group making a verbal presentation will be limited to a total time of three (3) minutes. Written comments should be received by the Designated Federal Official by November 18, 2016 to provide sufficient time for Committee review. Written comments received after November 18, 2016, will be distributed to the Committee, but may not be reviewed prior to the meeting date.

    Matters To Be Considered: The meeting will focus on review of a draft statement to the next administration on U.S. IOOS and its benefit to the nation. The agenda is subject to change. The latest version will be posted at https://ioos.noaa.gov/community/u-s-ioos-advisory-committee/.

    Dated: October 24, 2016. Donne Rivelli, Chief Financial Officer (acting), National Ocean Service.
    [FR Doc. 2016-26676 Filed 11-3-16; 8:45 am] BILLING CODE P
    COMMITTEE FOR PURCHASE FROM PEOPLE WHO ARE BLIND OR SEVERELY DISABLED Procurement List; Addition AGENCY:

    Committee for Purchase From People Who Are Blind or Severely Disabled.

    ACTION:

    Addition to the Procurement List.

    SUMMARY:

    This action adds a service to the Procurement List that will be provided by a nonprofit agency employing persons who are blind or have other severe disabilities.

    DATES:

    Effective on December 4, 2016.

    ADDRESSES:

    Committee for Purchase From People Who Are Blind or Severely Disabled, 1401 S. Clark Street, Suite 715, Arlington, Virginia, 22202-4149.

    FOR FURTHER INFORMATION CONTACT:

    Barry S. Lineback, Telephone: (703) 603-7740, Fax: (703) 603-0655, or email [email protected].

    SUPPLEMENTARY INFORMATION:

    Addition

    On 10/7/2016 (81 FR 69789-69790), the Committee for Purchase From People Who Are Blind or Severely Disabled published notice of proposed addition to the Procurement List.

    After consideration of the material presented to it concerning the capability of the qualified nonprofit agency to provide the service and the impact of the addition on the current or most recent contractors, the Committee has determined that the service listed below is suitable for procurement by the Federal Government under 41 U.S.C. 8501-8506 and 41 CFR 51-2.4.

    Regulatory Flexibility Act Certification

    I certify that the following action will not have a significant impact on a substantial number of small entities. The major factors considered for this certification were:

    1. The action will not result in any additional reporting, recordkeeping or other compliance requirements for small entities other than the small organization that will provide the service to the Government.

    2. The action will result in authorizing small entities to provide the service to the Government.

    3. There are no known regulatory alternatives which would accomplish the objectives of the Javits-Wagner-O'Day Act (41 U.S.C. 8501-8506) in connection with the service proposed for addition to the Procurement List.

    End of Certification

    Accordingly, the following service is added to the Procurement List:

    Service
    Service Type: Custodial Service.
    Service Mandatory for: Architect of the Capitol, Capitol Power Plant & Coal Yard, 25 E Street SE., & 42 I Street, Washington, DC.
    Mandatory Source(s) of Supply: Anchor Mental Health Association, Washington, DC.
    Contracting Activity: Architect of the Capitol, U.S. Capitol Building, Washington, DC.
    Barry S. Lineback, Director, Business Operations.
    [FR Doc. 2016-26732 Filed 11-3-16; 8:45 am] BILLING CODE 6353-01-P
    COMMITTEE FOR PURCHASE FROM PEOPLE WHO ARE BLIND OR SEVERELY DISABLED Procurement List; Proposed Addition and Deletions AGENCY:

    Committee for Purchase From People Who Are Blind or Severely Disabled.

    ACTION:

    Proposed Addition to and Deletions from the Procurement List.

    SUMMARY:

    The Committee is proposing to add a service to the Procurement List that will be furnished by a nonprofit agency employing persons who are blind or have other severe disabilities, and to delete a product and services previously furnished by such agencies.

    DATES:

    Comments must be received on or before December 4, 2016.

    ADDRESSES:

    Committee for Purchase From People Who Are Blind or Severely Disabled, 1401 S. Clark Street, Suite 715, Arlington, Virginia, 22202-4149.

    FOR FURTHER INFORMATION CONTACT:

    Barry S. Lineback, Telephone: (703) 603-7740, Fax: (703) 603-0655, or email [email protected].

    SUPPLEMENTARY INFORMATION:

    This notice is published pursuant to 41 U.S.C. 8503 (a)(2) and 41 CFR 51-2.3. Its purpose is to provide interested persons an opportunity to submit comments on the proposed actions.

    Addition

    If the Committee approves the proposed addition, the entities of the Federal Government identified in this notice will be required to procure the service listed below from the nonprofit agency employing persons who are blind or have other severe disabilities.

    The following service is proposed for addition to the Procurement List for production by the nonprofit agency listed:

    Service
    Service Type: Janitorial/Custodial and Related Service.
    Mandatory for: GSA PBS Region 10, Pioneer Courthouse, 700 SW 6th Avenue, Portland, OR.
    Mandatory Source(s) of Supply: Portland Habilitation Center, Inc., Portland, OR.
    Contracting Activity: GSA/PUBLIC BUILDINGS SERVICE, Auburn, WA.
    Deletions

    The following product and services are proposed for deletion from the Procurement List:

    Product
    NSN(s)—Product Name(s): 8460-01-433-8398—Briefcase, Black.
    Mandatory Source(s) of Supply: Helena Industries, Inc., Helena, MT.
    Contracting Activity: General Services Administration, Fort Worth, TX.
    Services
    Service Type: Food Service Attendant.
    Mandatory for: Kirtland Air Force Base, Kirtland AFB, NM.
    Mandatory Source(s) of Supply: LifeROOTS, Inc., Albuquerque, NM.
    Contracting Activity: Dept of the Air Force, FA7014 AFDW PK.
    Service Type: Facilities Maintenance Service.
    Mandatory for: Buckley Annex and Building 667, Buckley AFB, CO.
    Mandatory Source(s) of Supply: Professional Contract Services, Inc., Austin, TX.
    Contracting Activity: Dept of the Air Force, FA2543 460 CONF LGC.
    Barry S. Lineback, Director, Business Operations.
    [FR Doc. 2016-26731 Filed 11-3-16; 8:45 am] BILLING CODE 6353-01-P
    BUREAU OF CONSUMER FINANCIAL PROTECTION [Docket No. CFPB-2016-0046] Agency Information Collection Activities: Comment Request AGENCY:

    Bureau of Consumer Financial Protection.

    ACTION:

    Notice and request for comment.

    SUMMARY:

    In accordance with the Paperwork Reduction Act of 1995 (PRA), the Bureau of Consumer Financial Protection (Bureau) is requesting to renew the Office of Management and Budget (OMB) approval for an existing information collection titled, “Truth In Lending Act (Regulation Z)—Appraisals For Higher-Priced Mortgage Loans.”

    DATES:

    Written comments are encouraged and must be received on or before January 3, 2017 to be assured of consideration.

    ADDRESSES:

    You may submit comments, identified by the title of the information collection, OMB Control Number (see below), and docket number (see above), by any of the following methods:

    Electronic: http://www.regulations.gov. Follow the instructions for submitting comments.

    Mail: Consumer Financial Protection Bureau (Attention: PRA Office), 1700 G Street NW., Washington, DC 20552.

    Hand Delivery/Courier: Consumer Financial Protection Bureau (Attention: PRA Office), 1275 First Street NE., Washington, DC 20002.

    Please note that comments submitted after the comment period will not be accepted. In general, all comments received will become public records, including any personal information provided. Sensitive personal information, such as account numbers or Social Security numbers, should not be included.

    FOR FURTHER INFORMATION CONTACT:

    Documentation prepared in support of this information collection request is available at www.regulations.gov. Requests for additional information should be directed to the Consumer Financial Protection Bureau, (Attention: PRA Office), 1700 G Street NW., Washington, DC 20552, (202) 435-9575, or email: [email protected]. Please do not submit comments to this mailbox.

    SUPPLEMENTARY INFORMATION:

    Title of Collection: Truth In Lending Act (Regulation Z)—Appraisals for Higher-Priced Mortgage Loans.

    OMB Control Number: 3170-0026.

    Type of Review: Extension without change of a currently approved information collection.

    Affected Public: Private sector (depository institutions, credit unions and non-depository financial institutions).

    Estimated Number of Respondents: 2,047.

    Estimated Total Annual Burden Hours: 516.

    Abstract: The Truth in Lending Act (TILA) requires creditors originating mortgages with an annual percentage rate that exceeds the average prime offer rate by a specified percentage (higher-risk mortgage loans) to obtain an appraisal or appraisals meeting certain specified standards, provide applicants with a notification regarding the use of appraisals, and give applicants a copy of written appraisals used. These changes were enacted as part of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), Public Law 111-203, section 1471, 124 Stat. 1376, 2185 (2010). Section 1471 of the Dodd-Frank Act adds a new section to TILA, section 129H, addressing appraisal requirements for higher-risk mortgage loans.
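    As a rough sketch of the rule summarized in the abstract, the check below compares a loan's APR to the average prime offer rate (APOR). The notice itself only says "a specified percentage"; the 1.5, 2.5, and 3.5 percentage-point thresholds used here are assumptions drawn from Regulation Z's higher-priced mortgage loan definition and are shown for illustration only.

```python
# Illustrative sketch only; thresholds are assumed for illustration and are not
# stated in this notice.

def appraisal_rules_apply(apr, apor, first_lien=True, jumbo=False):
    """Rough check of whether the higher-priced mortgage appraisal rules are triggered."""
    if not first_lien:
        threshold = 3.5   # subordinate-lien loans (assumed threshold)
    elif jumbo:
        threshold = 2.5   # first-lien jumbo loans (assumed threshold)
    else:
        threshold = 1.5   # other first-lien loans (assumed threshold)
    return (apr - apor) > threshold

print(appraisal_rules_apply(apr=6.9, apor=5.0))   # True: 1.9-point spread on a first lien
print(appraisal_rules_apply(apr=5.2, apor=5.0))   # False: 0.2-point spread
```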

    Request for Comments: Comments are invited on: (a) Whether the collection of information is necessary for the proper performance of the functions of the Bureau, including whether the information will have practical utility; (b) The accuracy of the Bureau's estimate of the burden of the collection of information, including the validity of the methods and the assumptions used; (c) Ways to enhance the quality, utility, and clarity of the information to be collected; and (d) Ways to minimize the burden of the collection of information on respondents, including through the use of automated collection techniques or other forms of information technology. Comments submitted in response to this notice will be summarized and/or included in the request for OMB approval. All comments will become a matter of public record.

    Dated: November 1, 2016. Darrin A. King, Paperwork Reduction Act Officer, Bureau of Consumer Financial Protection.
    [FR Doc. 2016-26678 Filed 11-3-16; 8:45 am] BILLING CODE 4810-AM-P
    CORPORATION FOR NATIONAL AND COMMUNITY SERVICE Information Collection; Submission for OMB Review, Comment Request AGENCY:

    Corporation for National and Community Service.

    ACTION:

    Notice.

    SUMMARY:

    The Corporation for National and Community Service (CNCS) has submitted a public information collection request (ICR) entitled AmeriCorps Affiliate Application Instructions for review and approval in accordance with the Paperwork Reduction Act of 1995, Public Law 104-13, (44 U.S.C. Chapter 35). Copies of this ICR, with applicable supporting documentation, may be obtained by calling the Corporation for National and Community Service, Patti Stengel, at 202-606-6745 or email to [email protected]. Individuals who use a telecommunications device for the deaf (TTY-TDD) may call 1-800-833-3722 between 8:00 a.m. and 8:00 p.m. Eastern Time, Monday through Friday.

    DATES:

    Comments may be submitted, identified by the title of the information collection activity, on or before December 5, 2016.

    ADDRESSES:

    Comments may be submitted, identified by the title of the information collection activity, to the Office of Information and Regulatory Affairs, Attn: Ms. Sharon Mar, OMB Desk Officer for the Corporation for National and Community Service, by any of the following two methods within 30 days from the date of publication in the Federal Register:

    (1) By fax to: 202-395-6974, Attention: Ms. Sharon Mar, OMB Desk Officer for the Corporation for National and Community Service; or

    (2) By email to: [email protected].

    SUPPLEMENTARY INFORMATION:

    The OMB is particularly interested in comments which:

    • Evaluate whether the proposed collection of information is necessary for the proper performance of the functions of CNCS, including whether the information will have practical utility;

    • Evaluate the accuracy of the agency's estimate of the burden of the proposed collection of information, including the validity of the methodology and assumptions used;

    • Propose ways to enhance the quality, utility, and clarity of the information to be collected; and

    • Propose ways to minimize the burden of the collection of information on those who are to respond, including through the use of appropriate automated, electronic, mechanical, or other technological collection techniques or other forms of information technology.

    Comments

    A 60-day Notice requesting public comment was published in the Federal Register on July 19, 2016, at 81 FR 46913 (Vol. 81, No. 138). The comment period ended September 19, 2016. No comments were received in response to this Notice.

    Description: This information collection consists of the questions applicants answer to request to be an AmeriCorps Affiliate sponsor organization.

    Type of Review: New.

    Agency: Corporation for National and Community Service.

    Title: AmeriCorps Affiliate Application Instructions.

    OMB Number: TBD.

    Agency Number: None.

    Affected Public: Organizations applying to be AmeriCorps Affiliate sponsor organizations.

    Total Respondents: 20.

    Frequency: Approximately annually.

    Average Time per Response: 20 hours.

    Estimated Total Burden Hours: 400 hours.

    Total Burden Cost (capital/startup): None.

    Total Burden Cost (operating/maintenance): None.

    Dated: October 31, 2016. Kim Mansaray, Chief of Program Operations.
    [FR Doc. 2016-26633 Filed 11-3-16; 8:45 am] BILLING CODE 6050-28-P
    DEPARTMENT OF DEFENSE Department of the Army Change to the Freight Carrier Registration Program (FCRP) Open Season AGENCY:

    Department of the Army, DOD.

    ACTION:

    Notice.

    SUMMARY:

    The Military Surface Deployment and Distribution Command (SDDC) will hold an Open Season, effective 09 Jan 17 through 28 Feb 17 (applications will not be accepted prior to 09 Jan 17). This will affect domestic motor Transportation Service Providers (TSPs) only. TSPs must be registered with the Federal Motor Carrier Safety Administration (FMCSA) and have held valid Department of Transportation (DOT) authority for three (3) consecutive years (without a break) prior to 09 Jan 17. New TSPs will indicate their small business status via the Freight Carrier Registration Program (FCRP) during registration. Registration for other modes (barge, ocean, pipeline, and international carriers) will continue to be accepted.

    ADDRESSES:

    Submit comments to Military Surface Deployment and Distribution Command, ATTN: AMSSD-OPM, 1 Soldier Way, Scott AFB, IL 62225-5006. Requests for additional information may be sent by email to: [email protected].

    FOR FURTHER INFORMATION CONTACT:

    FCRP Team, (618) 220-6470.

    SUPPLEMENTARY INFORMATION:

    References: Military Freight Traffic Unified Rules Publication-1 (MFTURP-1).

    Miscellaneous: This announcement can be accessed via the SDDC Web site at: http://www.sddc.army.mil/.

    Daniel J. Bradley, Deputy Chief, Domestic Movement Support Division.
    [FR Doc. 2016-26672 Filed 11-3-16; 8:45 am] BILLING CODE 5001-03-P
    DEPARTMENT OF DEFENSE Office of the Secretary Government-Industry Advisory Panel; Notice of Federal Advisory Committee Meeting AGENCY:

    Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics), Department of Defense (DoD).

    ACTION:

    Federal advisory committee meeting notice.

    SUMMARY:

    The Department of Defense is publishing this notice to announce the following Federal advisory committee meeting of the Government-Industry Advisory Panel. This meeting is open to the public.

    DATES:

    The meeting will be held from 9:00 a.m. to 5:00 p.m. on Thursday, November 10, 2016. Public registration will begin at 8:45 a.m. For entrance into the meeting, you must meet the necessary requirements for entrance into the Pentagon. For more detailed information, please see the following link: http://www.pfpa.mil/access.html.

    ADDRESSES:

    Pentagon Library, Washington Headquarters Services, 1155 Defense Pentagon, Washington, DC 20301-1155. The meeting will be held in Room B10. The Pentagon Library is located in the Pentagon Library and Conference Center (PLC2) across the Corridor 8 bridge.

    FOR FURTHER INFORMATION CONTACT:

    LTC Andrew Lunoff, Office of the Assistant Secretary of Defense (Acquisition), 3090 Defense Pentagon, Washington, DC 20301-3090, email: [email protected], phone: 571-256-9004.

    SUPPLEMENTARY INFORMATION:

    Due to circumstances beyond the control of the Designated Federal Officer and the Department of Defense, the Government-Industry Advisory Panel is unable to provide public notification, as required by 41 CFR 102-3.150(a), for its meeting on Thursday, November 10, 2016. Accordingly, the Advisory Committee Management Officer for the Department of Defense, pursuant to 41 CFR 102-3.150(b), waives the 15-calendar day notification requirement.

    Purpose of the Meeting: This meeting is being held under the provisions of the Federal Advisory Committee Act of 1972 (FACA) (5 U.S.C., Appendix, as amended), the Government in the Sunshine Act of 1976 (5 U.S.C. 552b, as amended), and 41 CFR 102-3.150. The Government-Industry Advisory Panel will review sections 2320 and 2321 of title 10, United States Code (U.S.C.), regarding rights in technical data and the validation of proprietary data restrictions and the regulations implementing such sections, for the purpose of ensuring that such statutory and regulatory requirements are best structured to serve the interest of the taxpayers and the national defense. The scope of the panel is as follows: (1) Ensuring that the Department of Defense (DoD) does not pay more than once for the same work, (2) Ensuring that the DoD contractors are appropriately rewarded for their innovation and invention, (3) Providing for cost-effective reprocurement, sustainment, modification, and upgrades to the DoD systems, (4) Encouraging the private sector to invest in new products, technologies, and processes relevant to the missions of the DoD, and (5) Ensuring that the DoD has appropriate access to innovative products, technologies, and processes developed by the private sector for commercial use.

    Agenda: This will be the ninth meeting of the Government-Industry Advisory Panel with a series of meetings planned through December 14, 2016. The panel will cover details of 10 U.S.C. 2320 and 2321, begin understanding the implementing regulations, and detail the necessary groups within the private sector and government to provide supporting documentation for their review of these codes and regulations during follow-on meetings. Agenda items for this meeting will include the following: (1) Final discussions and deliberations on 10 U.S.C. 2320 and 2321 tension points; (2) Briefing from contractor logistics manager; (3) Report framework and collaboration; (4) Comment Adjudication & Planning for follow-on meeting.

    Availability of Materials for the Meeting: A copy of the agenda or any updates to the agenda for the November 10, 2016 meeting will be available as requested or at the following site: https://database.faca.gov/committee/meetings.aspx?cid=2561. It will also be distributed upon request.

    Minor changes to the agenda will be announced at the meeting. All materials will be posted to the FACA database after the meeting.

    Public Accessibility to the Meeting: Pursuant to 5 U.S.C. 552b, as amended, and 41 CFR 102-3.140 through 102-3.165, and subject to the availability of space, this meeting is open to the public. Registration of members of the public who wish to attend the meeting will begin upon publication of this meeting notice and end three business days (November 7) prior to the start of the meeting. All members of the public must contact LTC Lunoff at the phone number or email listed in the FOR FURTHER INFORMATION CONTACT section to make arrangements for Pentagon escort, if necessary. Public attendees should arrive at the Pentagon's Visitor's Center, located near the Pentagon Metro Station's south exit and adjacent to the Pentagon Transit Center bus terminal with sufficient time to complete security screening no later than 8:30 a.m. on November 10. To complete security screening, please come prepared to present two forms of identification of which one must be a pictured identification card. Government and military DoD CAC holders are not required to have an escort, but are still required to pass through the Visitor's Center to gain access to the Building. Seating is limited and is on a first-to-arrive basis. Attendees will be asked to provide their name, title, affiliation, and contact information to include email address and daytime telephone number to the Designated Federal Officer (DFO) listed in the FOR FURTHER INFORMATION CONTACT section. Any interested person may attend the meeting, file written comments or statements with the committee, or make verbal comments from the floor during the public meeting, at the times, and in the manner, permitted by the committee.

    Special Accommodations: The meeting venue is fully handicap accessible, with wheelchair access.

    Individuals requiring special accommodations to access the public meeting or seeking additional information about public access procedures should contact LTC Lunoff, the committee DFO, at the email address or telephone number listed in the FOR FURTHER INFORMATION CONTACT section at least five (5) business days prior to the meeting so that appropriate arrangements can be made.

    Written Comments or Statements: Pursuant to 41 CFR 102-3.105(j) and 102-3.140 and section 10(a)(3) of the Federal Advisory Committee Act, the public or interested organizations may submit written comments or statements to the Government-Industry Advisory Panel about its mission and/or the topics to be addressed in this public meeting. Written comments or statements should be submitted to LTC Lunoff, the committee DFO, via electronic mail, the preferred mode of submission, at the email address listed in the FOR FURTHER INFORMATION CONTACT section in the following formats: Adobe Acrobat or Microsoft Word. The comment or statement must include the author's name, title, affiliation, address, and daytime telephone number. Written comments or statements being submitted in response to the agenda set forth in this notice must be received by the committee DFO at least five (5) business days prior to the meeting so that they may be made available to the Government-Industry Advisory Panel for its consideration prior to the meeting. Written comments or statements received after this date may not be provided to the panel until its next meeting. Please note that because the panel operates under the provisions of the Federal Advisory Committee Act, as amended, all written comments will be treated as public documents and will be made available for public inspection.

    Verbal Comments: Members of the public will be permitted to make verbal comments during the meeting only at the time and in the manner allowed herein. If a member of the public is interested in making a verbal comment at the open meeting, that individual must submit a request, with a brief statement of the subject matter to be addressed by the comment, at least three (3) business days in advance to the committee DFO, via electronic mail, the preferred mode of submission, at the email address listed in the FOR FURTHER INFORMATION CONTACT section. The committee DFO will log each request to make a comment, in the order received, and determine whether the subject matter of each comment is relevant to the panel's mission and/or the topics to be addressed in this public meeting. A 30-minute period near the end of the meeting will be available for verbal public comments. Members of the public who have requested to make a verbal comment and whose comments have been deemed relevant under the process described in this paragraph, will be allotted no more than five (5) minutes during this period, and will be invited to speak in the order in which their requests were received by the DFO.

    Dated: November 1, 2016. Aaron Siegel, Alternate OSD Federal Register Liaison Officer, Department of Defense.
    [FR Doc. 2016-26669 Filed 11-3-16; 8:45 am] BILLING CODE 5001-06-P
    DEPARTMENT OF DEFENSE Office of the Secretary [Docket ID: DOD-2016-HA-0109] Proposed Collection; Comment Request AGENCY:

    Office of the Assistant Secretary of Defense for Health Affairs, DoD.

    ACTION:

    Notice.

    SUMMARY:

    In compliance with the Paperwork Reduction Act of 1995, the Office of the Assistant Secretary of Defense for Health Affairs announces a proposed public information collection and seeks public comment on the provisions thereof. Comments are invited on: Whether the proposed collection of information is necessary for the proper performance of the functions of the agency, including whether the information shall have practical utility; the accuracy of the agency's estimate of the burden of the proposed information collection; ways to enhance the quality, utility, and clarity of the information to be collected; and ways to minimize the burden of the information collection on respondents, including through the use of automated collection techniques or other forms of information technology.

    DATES:

    Consideration will be given to all comments received by January 3, 2017.

    ADDRESSES:

    You may submit comments, identified by docket number and title, by any of the following methods:

    Federal eRulemaking Portal: http://www.regulations.gov. Follow the instructions for submitting comments.

    Mail: Department of Defense, Office of the Deputy Chief Management Officer, Directorate for Oversight and Compliance, 4800 Mark Center Drive, Mailbox #24, Alexandria, VA 22350-1700.

    Instructions: All submissions received must include the agency name, docket number and title for this Federal Register document. The general policy for comments and other submissions from members of the public is to make these submissions available for public viewing on the Internet at http://www.regulations.gov as they are received without change, including any personal identifiers or contact information.

    Any associated form(s) for this collection may be located within this same electronic docket and downloaded for review/testing. Follow the instructions at http://www.regulations.gov for submitting comments. Please submit comments on any given form identified by docket number, form number, and title.

    FOR FURTHER INFORMATION CONTACT:

    To request more information on this proposed information collection or to obtain a copy of the proposal and associated collection instruments, please write to Mr. Doug McBroom, Defense Health Agency, TRICARE Policy & Benefits Office, 7700 Arlington Blvd., Suite 5101, Falls Church, VA 22042-5101, telephone 303-676-3533.

    SUPPLEMENTARY INFORMATION:

    Title; Associated Form; and OMB Number: Certification of Non-Contributory TRICARE Supplemental Insurance Plan; OMB Control Number 0720-0044.

    Needs and Uses: Section 707 of the John Warner National Defense Authorization Act for Fiscal Year 2007 added section 1097c to Title 10. Section 1097c prohibits employers from offering financial or other incentives to certain TRICARE-eligible employees not to enroll in an employer-offered group health plan. In other words, employers may no longer offer TRICARE supplemental insurance plans as part of an employee benefit package. Employers may, however, offer such plans provided the plan is not paid for in whole or in part by the employer and is not endorsed by the employer. When such TRICARE supplemental plans are offered, the employer must properly document that it did not provide any payment for the benefit and did not receive any direct or indirect consideration or compensation for offering the benefit; the employer's only involvement is providing the administrative support. That certification will be provided upon request to the Department of Defense.

    Affected Public: Business or other for-profit; Not-for-profit institutions.

    Annual Burden Hours: 20.

    Number of Respondents: 10.

    Responses per Respondent: 1.

    Annual Responses: 10.

    Average Burden per Response: 2 hours.

    Frequency: On Occasion.

    Respondents are limited to employers who make available a non-contributory TRICARE supplemental insurance plan to their employees. One certification must be completed per employer and kept on file by the employer for as long as such plans are offered.

    Dated: November 1, 2016. Aaron Siegel, Alternate OSD Federal Register Liaison Officer, Department of Defense.
    [FR Doc. 2016-26697 Filed 11-3-16; 8:45 am] BILLING CODE 5001-06-P
    DEPARTMENT OF DEFENSE Office of the Secretary [Transmittal No. 16-49] 36(b)(1) Arms Sales Notification AGENCY:

    Department of Defense, Defense Security Cooperation Agency.

    ACTION:

    Notice.

    SUMMARY:

    The Department of Defense is publishing the unclassified text of a section 36(b)(1) arms sales notification. This is published to fulfill the requirements of section 155 of Public Law 104-164 dated July 21, 1996.

    FOR FURTHER INFORMATION CONTACT:

    Chang Suh, DSCA/SA&E/RAN, (703) 697-8975.

    The following is a copy of a letter to the Speaker of the House of Representatives, Transmittal 16-49 with attached Policy Justification and Sensitivity of Technology.

    Dated: November 1, 2016. Aaron Siegel, Alternate OSD Federal Register Liaison Officer, Department of Defense.
    Transmittal No. 16-49 Notice of Proposed Issuance of Letter of Offer Pursuant to Section 36(b)(1) of the Arms Export Control Act, as amended

    (i) Prospective Purchaser: The Government of Egypt

    (ii) Total Estimated Value:

    Major Defense Equipment * $56.4 million
    Other $25.0 million
    Total $81.4 million

    (iii) Description and Quantity or Quantities of Articles or Services under Consideration for Purchase:

    Major Defense Equipment (MDE): Sixty-seven (67) AN/AAR-57 Common Missile Warning Systems (CMWS)

    Non-MDE: This request also includes the following Non-MDE: OCONUS Installation/Integration, Installation Mounting Kits, Countermeasure Dispenser Test Set AN/ALM-294, Technical Assistance, U.S. Government Training and OCONUS Contractor Training, publications and technical documents, quality assurance and other related elements of logistics and program support.

    (iv) Military Department: Army (VGJ)

    (v) Prior Related Cases, if any: EG-B-VBT, A04 (02 JUL 15, TCV: $17.8M)

    (vi) Sales Commission, Fee, etc., Paid, Offered, or Agreed to be Paid: None

    (vii) Sensitivity of Technology Contained in the Defense Article or Defense Services Proposed to be Sold: See Annex attached.

    (viii) Date Report Delivered to Congress: October 6, 2016

    * as defined in Section 47(6) of the Arms Export Control Act.

    POLICY JUSTIFICATION Government of Egypt—Description of Sale: Common Missile Warning System (CMWS) for AH-64E Apache, UH-60 Blackhawks and CH-47 Chinook Helicopters

    The Government of Egypt has requested a possible sale of:

    Major Defense Equipment (MDE): Sixty-seven (67) AN/AAR-57 Common Missile Warning Systems (CMWS).

    This request also includes the following Non-MDE: OCONUS Installation/Integration, Installation Mounting Kits, Countermeasure Dispenser Test Set AN/ALM-294, Technical Assistance, U.S. Government Training and OCONUS Contractor Training, publications and technical documents, quality assurance and other related elements of logistics and program support. The estimated cost is $81.4 million.

    This proposed sale will contribute to the foreign policy and national security of the United States by helping to improve the security of a strategic partner that has been and continues to be an important force for political stability and economic progress in the Middle East.

    The proposed sale of the CMWS will equip the Egyptian Air Force's fleet of multi mission helicopters with a detection system for infrared missile threats. Egypt will have no difficulty absorbing this equipment into its armed forces.

    The proposed sale of this equipment and support will not alter the basic military balance in the region.

    The prime contractors will be BAE Systems and DynCorp. There are no known offset agreements proposed in connection with this potential sale.

    Implementation of this proposed sale will require the assignment of two (2) U.S. Government and two (2) contractor representatives to Egypt to support delivery of such equipment, installation and integration, maintenance and to provide technical support and equipment familiarization. Additionally, this program will require multiple trips involving U.S. Government and contractor personnel to participate in technical reviews, training and installation.

    There will be no adverse impact on U.S. defense readiness as a result of this proposed sale.

    Transmittal No. 16-49 Notice of Proposed Issuance of Letter of Offer Pursuant to Section 36(b)(1) of the Arms Export Control Act Annex Item No. vii

    (vii) Sensitivity of Technology:

    1. AN/AAR-57—Common Missile Warning System (CMWS)—The Common Missile Warning System (CMWS) provides superior detection of infrared missile threats for rotary-wing, transport, and tactical aircraft. It is the detection component of a suite of countermeasures to increase survivability of current generation combat, airlift, and special operations aircraft against the threat posed by infrared guided missiles. It also provides automatic, passive missile detection, threat declaration, crew warning, software reprogramming, false alarm suppression, and cues to other on-board systems, such as dispensers, which may be utilized for flare decoys. Each platform includes Electro-Optical Missile Sensors, an Electronic Control Unit (ECU), a Sequencer, and the Improved Countermeasures Dispenser (ICMD). The ECU hardware is classified CONFIDENTIAL; releasable technical manuals for operation and maintenance are classified SECRET.

    2. If a technologically advanced adversary were to obtain knowledge of the specific hardware and software equipment, the information could be used to develop countermeasures or equivalent systems which may reduce weapon system effectiveness or be used in the development of a system with similar or advanced capabilities.

    3. A determination has been made that Egypt can provide substantially the same degree of protection for this technology as the U.S. Government. This proposed sale is necessary in furtherance of the U.S. foreign policy and national security objectives outlined in the Policy Justification.

    4. All defense articles and services listed in this transmittal have been authorized for release and export to Egypt.

    [FR Doc. 2016-26735 Filed 11-3-16; 8:45 am] BILLING CODE 5001-06-P
    DEPARTMENT OF DEFENSE Office of the Secretary [Transmittal No. 16-37] 36(b)(1) Arms Sales Notification AGENCY:

    Defense Security Cooperation Agency, Department of Defense.

    ACTION:

    Notice.

    SUMMARY:

    The Department of Defense is publishing the unclassified text of a section 36(b)(1) arms sales notification. This is published to fulfill the requirements of section 155 of Public Law 104-164 dated July 21, 1996.

    FOR FURTHER INFORMATION CONTACT:

    Chang Suh, DSCA/SA&E/RAN, (703) 697-8975.

    The following is a copy of a letter to the Speaker of the House of Representatives, Transmittal 16-37 with attached Policy Justification and Sensitivity of Technology.

    Dated: November 1, 2016. Aaron Siegel, Alternate OSD Federal Register Liaison Officer, Department of Defense.
    Transmittal No. 16-37 Notice of Proposed Issuance of Letter of Offer Pursuant to Section 36(b)(1) of the Arms Export Control Act, as amended

    (i) Prospective Purchaser: The Government of Egypt

    (ii) Total Estimated Value:

    Major Defense Equipment * $40 million
    Other $30 million
    Total $70 million

    (iii) Description and Quantity or Quantities of Articles or Services under Consideration for Purchase:

    Major Defense Equipment (MDE): Eight (8) Sentinel AN/MPQ-64 F1 Radars

    Non-MDE: This request also includes the following Non-MDE: Software and training, as well as spares and support equipment, technical manuals, Single Channel Ground and Airborne Radio System (SINCGARS) VRC-92E Radios, 16 High Mobility Multipurpose Wheeled Vehicles (HMMWV) M1152 with Shelter Carrier Kit, U.S. Government and contractor support, training and other associated support, equipment and services.

    (iv) Military Department: U.S. Army (VGU)

    (v) Prior Related Cases, if any:

    EG-B-VDP (21 May 12, TCV: $31.8M) EG-B-UUJ (26 Nov 12, TCV: $43.7M)

    (vi) Sales Commission, Fee, etc., Paid, Offered, or Agreed to be Paid: None

    (vii) Sensitivity of Technology Contained in the Defense Article or Defense Services Proposed to be Sold: See Annex attached.

    (viii) Date Report Delivered to Congress: September 16, 2016

    * as defined in Section 47(6) of the Arms Export Control Act.

    POLICY JUSTIFICATION The Government of Egypt—8 Sentinel AN/MPQ-64F1 Radars and Related Equipment and Support

    The Government of Egypt has requested a possible sale of eight (8) Sentinel AN/MPQ-64F1 Radars and software and training, as well as spares and support equipment, technical manuals, Single Channel Ground and Airborne Radio System (SINCGARS) VRC-92E Radios, 16 High Mobility Multipurpose Wheeled Vehicles (HMMWV) M1152 with Shelter Carrier Kit, U.S. Government and contractor support, training and other associated support, equipment and services. The total estimated value of MDE is $40 million. The total overall estimated cost is $70 million.

    This proposed sale will contribute to the foreign policy and national security of the United States by helping to improve the security of a strategic partner that has been and continues to be an important force for political stability and economic progress in the Middle East.

    The Government of Egypt intends to expand its existing air defense architecture to counter threats posed by air attack. This will contribute to Egypt's military goal of updating its capabilities while further enhancing interoperability among Egypt, the United States, and other allies. Egypt will have no difficulty absorbing this equipment into its armed forces.

    The proposed sale of this equipment and support will not alter the basic military balance in the region.

    The principal contractor involved in this program is Thales Raytheon Systems, Fullerton, California. There are no known offset agreements proposed in connection with this potential sale.

    Implementation of this proposed sale will require ten (10) U.S. Government or contractor representatives to travel to Egypt for a period of 8 weeks for equipment checkout and training.

    There will be no adverse impact on U.S. defense readiness as a result of this proposed sale.

    Transmittal No. 16-37 Notice of Proposed Issuance of Letter of Offer Pursuant to Section 36(b)(1) of the Arms Export Control Act Annex Item No. vii

    (vii) Sensitivity of Technology:

    1. The AN/MPQ-64 Sentinel Radar System is a fielded air defense radar system in the Army inventory. Sentinel is a derivative of the AN/TPQ-36 Firefinder System used for artillery detection and the AN/TPQ-36A Norwegian adapted Hawk system. The Sentinel radar (AN/MPQ-64) is the sensor for the Short Range Air Defense (SHORAD) weapon systems, including the Avenger and any ground launcher Stinger platforms. Sentinel is a mobile phased array radar that provides highly accurate 3 dimensional radar track data to using systems via the Forward Area Air Defense (FAAD) Command, Control, and Intelligence (C2I) node. Sentinel's detection range, mobility, and 360 degree azimuth coverage allow it to support SHORAD weapons located throughout the division area. Sentinel acquires, tracks, and reports cruise missiles, unmanned aerial vehicles, fixed and rotary wing aircraft in clutter and electronic counter measures environments. The Sentinel Export configuration (AN/MPQ-64F1) is a derivative of the U.S. Army's Improved Sentinel Radar.

    2. The Sentinel consists of a radar-based sensor system with the M1152 HMMWV as the prime mover and the MEP-1041 Advanced Mobile Medium Power Source (AMMPS) Tactical Quiet Generator as the power source. The sensor is an advanced X-Band phased-array air defense battlefield radar with an instrumented range of 75 kilometers and a rotating antenna providing 360 degree azimuth coverage for acquisition and tracking.

    3. Sentinel has only one item currently designated Critical Program Information (CPI): the Sentinel software modules containing routines for electronic counter-countermeasures (ECCM).

    4. These items are classified IAW EO 12958, section 1.5, classification category 1.5(e), because they contain scientific, technological, or economic matters relative to the national security. Reports, test data, and all Sentinel-related media that disclose operational parameters, performance, characteristics, ECCM techniques, vulnerabilities, limitations, or performance weaknesses shall be classified at the highest level based on the information being conveyed, as referenced in the Sentinel Security Classification Guide. Distribution of technical performance and system capabilities reports and data shall only be released up to the CONFIDENTIAL level. It is not possible to obtain the Sentinel wartime reserved frequencies by reverse engineering, testing, or analyzing the unclassified Sentinel end item.

    5. All defense articles and services listed in this transmittal are authorized for release and export to the Government of Egypt.

    [FR Doc. 2016-26689 Filed 11-3-16; 8:45 am] BILLING CODE 5001-06-P
    DEPARTMENT OF DEFENSE Office of the Secretary Charter Renewal of Department of Defense Federal Advisory Committees AGENCY:

    Department of Defense.

    ACTION:

    Renewal of Federal Advisory Committee.

    SUMMARY:

    The Department of Defense (DoD) is publishing this notice to announce that it is renewing the charter for the Department of Defense Wage Committee (“the Committee”).

    FOR FURTHER INFORMATION CONTACT:

    Jim Freeman, Advisory Committee Management Officer for the Department of Defense, 703-692-5952.

    SUPPLEMENTARY INFORMATION:

    The Committee's charter is being renewed in accordance with the Federal Advisory Committee Act (FACA) of 1972 (5 U.S.C., Appendix, as amended) and 41 CFR 102-3.50(a). The Committee's charter and contact information for the Committee's Designated Federal Officer (DFO) can be found at http://www.facadatabase.gov/. The Committee provides the Secretary of Defense and the Deputy Secretary of Defense, through the Under Secretary of Defense for Personnel and Readiness, independent advice and recommendations on all matters relating to the conduct of wage surveys and the establishment of wage schedules for all appropriated fund and non-appropriated fund wage areas of blue-collar employees within the Department of Defense.

    The Committee is composed of seven members: a chair and six additional members. The six additional positions consist of two labor organization representatives and four members who are regular government employees (RGE), and they are divided into two broad categories, labor and management. Each category has two voting members; in the case of the management category, the two voting members change depending upon which two DoD Components, as determined by the Chair, have the largest number of wage employees in the wage areas under consideration. The individuals representing the labor organizations are selected by those organizations to provide the Committee with the points of view of nongovernment entities or a recognizable group of persons with interests in the subject matter under consideration by the Committee. The individuals who represent the DoD Components, including the Chair, are RGE members appointed by the Secretary of Defense to exercise their own individual best judgment on behalf of the government. Except for reimbursement of official Committee-related travel and per diem, Committee members serve without compensation.

    The public or interested organizations may submit written statements to the Committee membership about the Committee's mission and functions. Written statements may be submitted at any time or in response to the stated agenda of a planned meeting of the Committee. All written statements shall be submitted to the DFO for the Committee, and this individual will ensure that the written statements are provided to the membership for their consideration.

    Dated: November 1, 2016. Aaron Siegel, Alternate OSD Federal Register Liaison Officer, Department of Defense.
    [FR Doc. 2016-26665 Filed 11-3-16; 8:45 am] BILLING CODE 5001-06-P
    DEPARTMENT OF DEFENSE Office of the Secretary [Transmittal No. 16-45] 36(b)(1) Arms Sales Notification AGENCY:

    Defense Security Cooperation Agency, Department of Defense.

    ACTION:

    Notice.

    SUMMARY:

    The Department of Defense is publishing the unclassified text of a section 36(b)(1) arms sales notification. This is published to fulfill the requirements of section 155 of Public Law 104-164 dated July 21, 1996.

    FOR FURTHER INFORMATION CONTACT:

    Chang Suh, DSCA/SA&E/RAN, (703) 697-8975.

    The following is a copy of a letter to the Speaker of the House of Representatives, Transmittal 16-45 with attached Policy Justification.

    Dated: November 1, 2016. Aaron Siegel, Alternate OSD Federal Register Liaison Officer, Department of Defense.
    Transmittal No. 16-45 Notice of Proposed Issuance of Letter of Offer Pursuant to Section 36(b)(1) of the Arms Export Control Act, as amended

    (i) (U) Prospective Purchaser: United Arab Emirates (UAE)

    (ii) (U) Total Estimated Value:

    Major Defense Equipment * $0 million
    Other $75 million
    Total $75 million

    (iii) (U) Description and Quantity or Quantities of Articles or Services under Consideration for Purchase:

    Non-MDE:

    The United Arab Emirates Air Force requests participation in military exercises, aerial refueling, airlift and ferry support, training aids/devices/munitions, technical and logistics support services, and other related elements of logistical and program support. There is no MDE associated with this potential sale. The total estimated cost is $75.0 million.

    (iv) (U) Military Department: Air Force (X7-D-NAF Amendment 4)

    (v) (U) Prior Related Cases, if any: AE-D-NAF-$49M-20 Mar 12

    (vi) (U) Sales Commission, Fee, etc., Paid, Offered, or Agreed to be Paid: None

    (vii) (U) Sensitivity of Technology Contained in the Defense Article or Defense Services Proposed to be Sold: None.

    (viii) (U) Date Report Delivered to Congress: October 21, 2016

    * as defined in Section 47(6) of the Arms Export Control Act.

    POLICY JUSTIFICATION (U) United Arab Emirates (UAE)—Exercise Participation Support

    (U) The Government of the UAE requested a possible sale to include participation in military exercises, aerial refueling, airlift and ferry support, training aids/devices/munitions, technical and logistics support services, and other related elements of logistical and program support. The estimated cost is $75 million.

    (U) This proposed sale contributes to the foreign policy and national security of the United States by helping to improve the security of a major regional ally which has been, and continues to be, an important force for political stability and economic progress in the Middle East.

    (U) This proposed sale contributes to the foreign policy and national security of the United States by helping to improve the ability of the UAE to employ its fighter aircraft in a multi-country coalition environment, such as Red Flag and Green Flag exercises. Participating in major exercises has enhanced the UAE's continued and consistent role in support of Coalition Operations. The UAE is a steadfast coalition partner in the fight against radical Islamic forces such as ISIL and Al Qaeda in the Arabian Peninsula (AQAP).

    (U) The proposed sale of this equipment and support does not alter the basic military balance in the region.

    (U) Implementation of this proposed sale will not require the assignment of any additional U.S. Government or contractor representatives to the UAE.

    (U) There will be no adverse impact on U.S. defense readiness as a result of this proposed sale. All defense articles and services are approved for release by our foreign disclosure office.

    [FR Doc. 2016-26714 Filed 11-3-16; 8:45 am] BILLING CODE 5001-06-P
    DEPARTMENT OF DEFENSE Office of the Secretary [Transmittal No. 16-38] 36(b)(1) Arms Sales Notification AGENCY:

    Defense Security Cooperation Agency, Department of Defense.

    ACTION:

    Notice.

    SUMMARY:

    The Department of Defense is publishing the unclassified text of a section 36(b)(1) arms sales notification. This is published to fulfill the requirements of section 155 of Public Law 104-164 dated July 21, 1996.

    FOR FURTHER INFORMATION CONTACT:

    Chang Suh, DSCA/SA&E/RAN, (703) 697-8975.

    The following is a copy of a letter to the Speaker of the House of Representatives, Transmittal 16-38 with attached Policy Justification.

    Dated: November 1, 2016. Aaron Siegel, Alternate OSD Federal Register Liaison Officer, Department of Defense.
    Transmittal No. 16-38 Notice of Proposed Issuance of Letter of Offer Pursuant to Section 36(b)(1) of the Arms Export Control Act, as amended

    (i) Prospective Purchaser: Government of Kuwait

    (ii) Total Estimated Value:

    Major Defense Equipment * $62 million
    Other $132 million
    Total $194 million

    (iii) Description and Quantity or Quantities of Articles or Services under Consideration for Purchase:

    Major Defense Equipment (MDE): Six (6) AN/MPQ-64 Sentinel F1 Radars

    Non-Major Defense Equipment (MDE): The Government of Kuwait requested a limited competition between three (3) U.S. vendors to procure a total of six (6) Short Range, Gap Filler Radars (e.g., AN/MPQ-64 Sentinel F1, AN/TPS-77, or AN/TPS-703) and one (1) Long Range Radar (e.g., AN/TPS-77 or AN/TPS-78). Only one of the radars under consideration, the AN/MPQ-64, is Major Defense Equipment (MDE). The remaining radars identified by Kuwait for consideration are non-MDE. Additionally, Kuwait is requesting one (1) Long Range Radar with Primary Surveillance Radar (PSR) and Secondary Surveillance Radar (SSR) capability on the Long Range Radar, upgrades to existing AN/FPS 117 (V) 3 Long Range Radars, upgrades to airfield radome and communications systems, upgrades to secure Identification Friend or Foe (IFF) systems, site surveys, installation and checkout, site acceptance testing, interim contractor support, construction, contractor logistics support (CLS), spares, support equipment and training. Cost for additional non-MDE is $132 million. The total overall estimated cost is $194 million.

    (iv) Military Department: Air Force (X7-D-DAB)

    (v) Prior Related Cases, if any: None

    (vi) Sales Commission, Fee, etc., Paid, Offered, or Agreed to be Paid: None

    (vii) Sensitivity of Technology Contained in the Defense Article or Defense Services Proposed to be Sold: See Annex attached.

    (viii) Date Report Delivered to Congress: October 13, 2016

    * as defined in Section 47(6) of the Arms Export Control Act.

    POLICY JUSTIFICATION The Government of Kuwait-Radar Field System

    The Government of Kuwait has requested a possible total sale of six (6) Short Range Radars, otherwise known as Gap Filler Radars, one (1) Long Range Radar with Primary Surveillance Radar (PSR) and Secondary Surveillance Radar (SSR) arrays, upgrades to existing AN/FPS 117 (V) 3 Long Range Radar, upgrades to airfield radome and communications systems, upgrade to secure Identification Friend or Foe (IFF) systems, site surveys, installation and checkout, site acceptance testing, interim contractor support, construction, contractor logistics support, spares, support equipment, and training. The total estimated value of this sale is $194 million.

    The Government of Kuwait requested a limited competition between three (3) U.S. vendors to procure a total of six (6) Short Range, Gap Filler Radars (e.g., AN/MPQ-64 Sentinel F1, AN/TPS-77, or AN/TPS-703) and one (1) Long Range Radar (e.g., AN/TPS-77 or AN/TPS-78). Only one of the radars under consideration, the AN/MPQ-64, is Major Defense Equipment (MDE). The remaining radars identified by Kuwait for consideration are non-MDE.

    This proposed sale supports U.S. Government national security goals by aiding a Major non-NATO Ally in the reduction of transnational threats, weapons proliferation, and the movement and support of international terrorists.

    The Government of Kuwait desires the radar field system in order to improve early warning, enhance internal and external security, and protect national sovereignty. The system provides situational awareness for Kuwaiti security forces to detect and interdict fixed and rotary wing aircraft. This procurement provides coverage for Kuwait's northern and eastern borders.

    The prime contractor will be determined by competition between Lockheed Martin, Bethesda, Maryland; Northrop Grumman, Falls Church, Virginia; and the Raytheon Company, Waltham, Massachusetts. There are no known offset agreements proposed in connection with this potential sale.

    This procurement includes a small number of U.S. contractor system and maintenance advisors under a long-term operations and maintenance support package. The exact number of personnel and period of performance is yet to be finalized. This purchase will not substantially alter the U.S. Government presence in Kuwait.

    There will be no adverse impact on U.S. defense readiness as a result of this proposed sale.

    Transmittal No. 16-38 Notice of Proposed Issuance of Letter of Offer Pursuant to Section 36(b)(1) of the Arms Export Control Act Annex Item No. vii

    (vii) Sensitivity of Technology:

    1. The AN/MPQ-64 Sentinel Radar System is a fielded air defense radar system in the Army inventory. Sentinel is a derivative of the AN/TPQ-36 Firefinder System used for artillery detection and the AN/TPQ-36A Norwegian adapted Hawk system. Sentinel is a mobile, phased-array radar that provides highly accurate 3 dimensional radar track data to using systems via the Forward Area Air Defense (FAAD) Command, Control, and Intelligence (C2I) node. Sentinel acquires, tracks, and reports cruise missiles, unmanned aerial vehicles, fixed and rotary wing aircraft in clutter and electronic counter measures environments. The Sentinel Export configuration (AN/MPQ-64F1) is a derivative of the U.S. Army's Improved Sentinel Radar.

    2. The Sentinel consists of a radar-based sensor system with the M1152 High Mobility Multipurpose Wheeled Vehicle (HMMWV) as the prime mover and the MEP-1041 Advanced Mobile Medium Power Source (AMMPS) Tactical Quiet Generator as the power source. The sensor is an advanced X-Band phased-array air defense battlefield radar with an instrumented range of 75 kilometers and a rotating antenna providing 360 degree azimuth coverage for acquisition and tracking.

    3. Sentinel has only one item currently designated Critical Program Information (CPI): the Sentinel software modules containing routines for electronic counter-countermeasures (ECCM).

    4. These items are classified IAW EO 12958, section 1.5, classification category 1.5(e), because they contain scientific, technological, or economic matters relative to the national security. Reports, test data, and all Sentinel-related media that disclose operational parameters, performance, characteristics, ECCM techniques, vulnerabilities, limitations, or performance weaknesses shall be classified at the highest level based on the information being conveyed, as referenced in the Sentinel Security Classification Guide. Distribution of technical performance and system capabilities reports and data shall only be released up to the CONFIDENTIAL level. It is not possible to obtain the Sentinel wartime reserved frequencies by reverse engineering, testing, or analyzing the unclassified Sentinel end item.

    5. This sale is necessary in furtherance of the U.S. foreign policy and national security objectives outlined in the Policy Justification. Moreover, the benefits to be derived from this sale, as outlined in the Policy Justification, outweigh the potential damage that could result if the sensitive technology were revealed to unauthorized persons.

    6. All defense articles and services listed in this transmittal are authorized for release and export to the Government of Kuwait.

    [FR Doc. 2016-26711 Filed 11-3-16; 8:45 am] BILLING CODE 5001-06-P
    DEPARTMENT OF DEFENSE Department of the Army, Corps of Engineers Intent To Prepare a Draft Environmental Impact Statement, and Intent To Conduct Public Scoping Meetings for the Upper Susquehanna River Basin Comprehensive Flood Damage Reduction Study, New York AGENCY:

    Department of the Army, U.S. Army Corps of Engineers, DOD.

    ACTION:

    Notice of intent.

    SUMMARY:

    In accordance with the National Environmental Policy Act (NEPA), the Baltimore District, U.S. Army Corps of Engineers (USACE), will prepare a Feasibility Report and Environmental Impact Statement (EIS) comprehensively evaluating flood-risk management (FRM) needs and opportunities in the upper Susquehanna River Basin in New York.

    DATES:

    The public scoping meeting dates are:

    1. November 21, 2016, at 1:30 p.m. and 6:30 p.m. in the Hubbard Auditorium of the Tioga County Office Building.

    2. November 22, 2016 at 1:30 p.m. and 6:30 p.m. at the Town of Chenango Community Meeting Room.

    3. November 30, 2016, at 6:30 p.m. in the Village of Sidney.

    ADDRESSES:

    Two scoping meetings will be held in the Village of Owego on Monday, November 21, 2016, at 1:30 p.m. and 6:30 p.m. in the Hubbard Auditorium of the Tioga County Office Building, 56 Main Street. Two scoping meetings will be held in the Town of Chenango on Tuesday, November 22, 2016 at 1:30 p.m. and 6:30 p.m. at the Town of Chenango Community Meeting Room at 1529 Upper Front Street (NY Route 12). One meeting will be held on November 30, 2016, at 6:30 p.m. in the Village of Sidney in the Memorial Public Library located at 8 River Street.

    FOR FURTHER INFORMATION CONTACT:

    Questions about the proposed scoping meetings, requests to be placed on the project information distribution list, information requests, or written comments on the scope of the EIS and the comprehensive FRM study can be addressed to Mr. David W. Robbins, U.S. Army Corps of Engineers, ATTN: CENAB-PL-P, 10 S. Howard Street, Baltimore, MD 21201, telephone 410-962-0685; email address: [email protected]. Requests to speak at the meetings and requests for special accommodations (sign language interpreters, access needs) should be directed to Mr. Robbins at the above address. Information about the study, including public scoping meetings, is available at the study Web site http://www.nab.usace.army.mil/USRB_Feasibility_Study/.

    SUPPLEMENTARY INFORMATION:

    This Notice initiates formal scoping for the Environmental Impact Statement (EIS), provides information on the nature of the proposed Project, invites participation in the EIS process, and identifies potential environmental effects to be considered. It also invites comments from interested members of the public, tribes, and agencies on the scope of the EIS and announces upcoming public scoping meetings. Comments should address (1) feasible alternatives that may better achieve the Project's need and purposes with fewer adverse impacts and (2) any significant environmental impacts relating to the alternatives.

    Scoping meeting information will be posted online by Baltimore District via Web site postings and social media. Meeting information will be provided electronically via the study's Web page and in printed form to local libraries, government offices, as well as mailed to interested public.

    For all meetings, staff will be available to answer questions. All interested parties are invited to speak at the public meetings. The public scoping period will begin on the date of publication of this Notice and will continue through 30 days following the last public scoping meeting.

    The study was authorized by a Resolution of the House Committee on Transportation and Infrastructure on September 24, 2008. The upper Susquehanna Basin includes the portions of Tioga, Broome, Chenango, Cortland, Otsego, Delaware, Schoharie, Herkimer, Oneida, Madison, Onondaga, Tompkins, Schuyler, and Chemung Counties in the Susquehanna River Watershed of New York upstream of the Chemung River confluence near Waverly. The upper Susquehanna River Basin repeatedly experiences flooding damages, with recent notable events occurring in 2006 and 2011. USACE is undertaking the FRM study in partnership with the New York State Department of Environmental Conservation (NYSDEC).

    USACE and NYSDEC are seeking public input to identify areas with flooding concerns that may be of interest to address in the context of the study, and to learn of area-specific considerations important in formulating any FRM plans. Study efforts will be coordinated with the Federal Emergency Management Agency (FEMA), the U.S. Geological Survey (USGS), as well as other Federal and state agencies and local governments.

    An initial conceptual effort will be completed using existing information to identify areas of the basin that currently do not have FRM infrastructure in place and screen these areas for FRM needs and opportunities. The study will evaluate the level of FRM currently provided by existing FRM infrastructure under current conditions and projected future conditions. Within the study area, there are 20 existing USACE FRM projects, as well as other non-Federal FRM projects. The study will investigate FRM strategies to reduce flood risk, as well as reduce residual risk in areas with existing FRM infrastructure. Structural and non-structural FRM will be considered. Hydrologic and hydraulic modeling will be developed for the majority of the Susquehanna River main stem and major tributaries in the basin to aid plan formulation.

    The study will be conducted in compliance with applicable federal laws including the Clean Water Act, the Endangered Species Act, the Clean Air Act, the U.S. Fish and Wildlife Coordination Act, the National Historic Preservation Act, and the Farmland Protection Policy Act. All appropriate compliance documentation will be obtained and included as part of the EIS. It is currently anticipated that the study will take three years and may lead to the implementation of one or more FRM projects. The EIS is expected to be publicly released in Spring 2018.

    David W. Robbins, Acting Chief, Civil Project Development Branch, Planning Division.
    [FR Doc. 2016-26699 Filed 11-3-16; 8:45 am] BILLING CODE 3720-58-P
    DEPARTMENT OF DEFENSE Department of the Navy Notice of Availability of Record of Decision for the Northwest Training and Testing Final Environmental Impact Statement/Overseas Environmental Impact Statement AGENCY:

    Department of the Navy, DoD.

    ACTION:

    Notice.

    SUMMARY:

    The Department of the Navy (DoN), after carefully weighing the strategic, operational, and environmental consequences of the proposed action, announces its decision to continue and enhance training activities as identified in Alternative 1 in the Northwest Training and Testing Final Environmental Impact Statement/Overseas Environmental Impact Statement. This alternative includes adjustments to existing training and testing activities, new training and testing activities to support future requirements, and activities not previously analyzed, such as pierside sonar maintenance and testing. Implementation of Alternative 1 will enable the DoN to achieve the levels of operational readiness required under Section 5062 of Title 10, U.S.C., without resulting in significant environmental impacts.

    SUPPLEMENTARY INFORMATION:

    The complete text of the Record of Decision is available at http://nwtteis.com. Single copies of the Record of Decision are available upon request by contacting: NWTT EIS/OEIS Project Manager, Naval Facilities Engineering Command Northwest, 1101 Tautog Circle, Suite 203, Silverdale, Washington, 98315-1101.

    Dated: October 31, 2016. C. Mora, Commander, Judge Advocate General's Corps, U.S. Navy, Federal Register Liaison Officer.
    [FR Doc. 2016-26685 Filed 11-3-16; 8:45 am] BILLING CODE 3810-FF-P
    DEPARTMENT OF ENERGY President's Council of Advisors on Science and Technology AGENCY:

    Office of Science, Department of Energy.

    ACTION:

    Notice of partially-closed meeting.

    SUMMARY:

    This notice sets forth the schedule and summary agenda for a partially-closed meeting of the President's Council of Advisors on Science and Technology (PCAST), and describes the functions of the Council. The Federal Advisory Committee Act requires that public notice of these meetings be announced in the Federal Register.

    DATES:

    November 18, 2016 9:00 a.m. to 12:00 p.m.

    ADDRESSES:

    The meeting will be held at the National Academy of Sciences, 2101 Constitution Avenue NW., Washington, DC in the Lecture Room.

    FOR FURTHER INFORMATION CONTACT:

    Information regarding the meeting agenda, time, location, and how to register for the meeting is available on the PCAST Web site at: http://whitehouse.gov/ostp/pcast. A live video webcast and an archive of the webcast after the event are expected to be available at http://whitehouse.gov/ostp/pcast. The archived video will be available within one week of the meeting. Questions about the meeting should be directed to Ms. Jennifer Michael at [email protected], (202) 456-4444. Please note that public seating for this meeting is limited and is available on a first-come, first-served basis.

    SUPPLEMENTARY INFORMATION:

    The President's Council of Advisors on Science and Technology (PCAST) is an advisory group of the nation's leading scientists and engineers, appointed by the President to augment the science and technology advice available to him from inside the White House, cabinet departments, and other Federal agencies. See the Executive Order at http://www.whitehouse.gov/ostp/pcast. PCAST is consulted about and provides analyses and recommendations concerning a wide range of issues where understandings from the domains of science, technology, and innovation may bear on the policy choices before the President. PCAST is co-chaired by Dr. John P. Holdren, Assistant to the President for Science and Technology, and Director, Office of Science and Technology Policy, Executive Office of the President, The White House; and Dr. Eric S. Lander, President, Broad Institute of the Massachusetts Institute of Technology and Harvard.

    Type of Meeting: Open and Closed.

    Proposed Schedule and Agenda: The President's Council of Advisors on Science and Technology (PCAST) is scheduled to meet in open session on November 18, 2016 from 9:00 a.m. to 12:00 p.m.

    Open Portion of Meeting: During this open meeting, PCAST is scheduled to discuss its studies on drinking water science and technology and semiconductors. They will also hear from speakers who will remark on the National Nanotechnology Initiative and other science and technology topics. Additional information and the agenda, including any changes that arise, will be posted at the PCAST Web site at: http://whitehouse.gov/ostp/pcast.

    Closed Portion of the Meeting: PCAST may hold a closed meeting of approximately one hour with the President on November 18, 2016, which must take place in the White House for the President's scheduling convenience and to maintain Secret Service protection. This meeting will be closed to the public because such portion of the meeting is likely to disclose matters that are to be kept secret in the interest of national defense or foreign policy under 5 U.S.C. 552b(c)(1).

    Public Comments: It is the policy of the PCAST to accept written public comments of any length, and to accommodate oral public comments whenever possible. The PCAST expects that public statements presented at its meetings will not be repetitive of previously submitted oral or written statements.

    The public comment period for this meeting will take place on November 18, 2016 at a time specified in the meeting agenda posted on the PCAST Web site at http://whitehouse.gov/ostp/pcast. This public comment period is designed only for substantive commentary on PCAST's work, not for business marketing purposes.

    Oral Comments: To be considered for the public speaker list at the meeting, interested parties should register to speak at http://whitehouse.gov/ostp/pcast, no later than 9:00 a.m. Eastern Time on November 14, 2016. Phone or email reservations will not be accepted. To accommodate as many speakers as possible, the time for public comments will be limited to two (2) minutes per person, with a total public comment period of up to 15 minutes. If more speakers register than there is space available on the agenda, PCAST will randomly select speakers from among those who applied. Those not selected to present oral comments may always file written comments with the committee. Speakers are requested to bring at least 25 copies of their oral comments for distribution to the PCAST members.

    Written Comments: Although written comments are accepted continuously, written comments should be submitted to PCAST no later than 12:00 p.m. Eastern Time on November 14, 2016 so that the comments may be made available to the PCAST members prior to this meeting for their consideration. Information regarding how to submit comments and documents to PCAST is available at http://whitehouse.gov/ostp/pcast in the section entitled “Connect with PCAST.”

    Please note that because PCAST operates under the provisions of FACA, all public comments and/or presentations will be treated as public documents and will be made available for public inspection, including being posted on the PCAST Web site.

    Meeting Accommodations: Individuals requiring special accommodation to access this public meeting should contact Ms. Jennifer Michael at least ten business days prior to the meeting so that appropriate arrangements can be made.

    Issued in Washington, DC, on November 1, 2016. LaTanya R. Butler, Deputy Committee Management Officer.
    [FR Doc. 2016-26698 Filed 11-3-16; 8:45 am] BILLING CODE 6450-01-P
    DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER17-204-000] Quantum Power Corp; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization

    This is a supplemental notice in the above-referenced proceeding of Quantum Power Corp's application for market-based rate authority, with an accompanying rate tariff, noting that such application includes a request for blanket authorization, under 18 CFR part 34, of future issuances of securities and assumptions of liability.

    Any person desiring to intervene or to protest should file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in accordance with Rules 211 and 214 of the Commission's Rules of Practice and Procedure (18 CFR 385.211 and 385.214). Anyone filing a motion to intervene or protest must serve a copy of that document on the Applicant.

    Notice is hereby given that the deadline for filing protests with regard to the applicant's request for blanket authorization, under 18 CFR part 34, of future issuances of securities and assumptions of liability, is November 21, 2016.

    The Commission encourages electronic submission of protests and interventions in lieu of paper, using the FERC Online links at http://www.ferc.gov. To facilitate electronic service, persons with Internet access who will eFile a document and/or be listed as a contact for an intervenor must create and validate an eRegistration account using the eRegistration link. Select the eFiling link to log on and submit the intervention or protests.

    Persons unable to file electronically should submit an original and 5 copies of the intervention or protest to the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426.

    The filings in the above-referenced proceeding are accessible in the Commission's eLibrary system by clicking on the appropriate link in the above list. They are also available for electronic review in the Commission's Public Reference Room in Washington, DC. There is an eSubscription link on the Web site that enables subscribers to receive email notification when a document is added to a subscribed docket(s). For assistance with any FERC Online service, please email [email protected] or call (866) 208-3676 (toll free). For TTY, call (202) 502-8659.

    Dated: October 31, 2016. Nathaniel J. Davis, Sr., Deputy Secretary.
    [FR Doc. 2016-26708 Filed 11-3-16; 8:45 am] BILLING CODE 6717-01-P
    DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. EL17-10-000] Midcontinent Independent System Operator, Inc.; Notice of Institution of Section 206 Proceeding and Refund Effective Date

    On October 31, 2016, the Commission issued an order in Docket No. EL17-10-000, pursuant to section 206 of the Federal Power Act (FPA), 16 U.S.C. 824e (2012), instituting an investigation into the justness and reasonableness of Midcontinent Independent System Operator, Inc.'s depreciation rates. Midcontinent Indep. Sys. Operator, Inc., 157 FERC ¶ 61,068 (2016).

    The refund effective date in Docket No. EL17-10-000, established pursuant to section 206(b) of the FPA, will be the date of publication of this notice in the Federal Register.

    Any interested person desiring to be heard in Docket No. EL17-10-000 must file a notice of intervention or motion to intervene, as appropriate, with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in accordance with Rule 214 of the Commission's Rules of Practice and Procedure, 18 CFR 385.214 (2016), within 21 days of the date of issuance of the order.

    Dated: October 31, 2016. Nathaniel J. Davis, Sr., Deputy Secretary.
    [FR Doc. 2016-26709 Filed 11-3-16; 8:45 am] BILLING CODE 6717-01-P
    DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER17-227-000] Innovative Solar 47, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization

    This is a supplemental notice in the above-referenced proceeding of Innovative Solar 47, LLC's application for market-based rate authority, with an accompanying rate tariff, noting that such application includes a request for blanket authorization, under 18 CFR part 34, of future issuances of securities and assumptions of liability.

    Any person desiring to intervene or to protest should file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in accordance with Rules 211 and 214 of the Commission's Rules of Practice and Procedure (18 CFR 385.211 and 385.214). Anyone filing a motion to intervene or protest must serve a copy of that document on the Applicant.

    Notice is hereby given that the deadline for filing protests with regard to the applicant's request for blanket authorization, under 18 CFR part 34, of future issuances of securities and assumptions of liability, is November 21, 2016.

    The Commission encourages electronic submission of protests and interventions in lieu of paper, using the FERC Online links at http://www.ferc.gov. To facilitate electronic service, persons with Internet access who will eFile a document and/or be listed as a contact for an intervenor must create and validate an eRegistration account using the eRegistration link. Select the eFiling link to log on and submit the intervention or protests.

    Persons unable to file electronically should submit an original and 5 copies of the intervention or protest to the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426.

    The filings in the above-referenced proceeding are accessible in the Commission's eLibrary system by clicking on the appropriate link in the above list. They are also available for electronic review in the Commission's Public Reference Room in Washington, DC. There is an eSubscription link on the Web site that enables subscribers to receive email notification when a document is added to a subscribed docket(s). For assistance with any FERC Online service, please email [email protected] or call (866) 208-3676 (toll free). For TTY, call (202) 502-8659.

    Dated: October 31, 2016. Nathaniel J. Davis, Sr., Deputy Secretary.
    [FR Doc. 2016-26710 Filed 11-3-16; 8:45 am] BILLING CODE 6717-01-P
    DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings #1

    Take notice that the Commission received the following electric rate filings:

    Docket Numbers: ER10-2984-028.

    Applicants: Merrill Lynch Commodities, Inc.

    Description: Notice of Non-Material Change in Status of Merrill Lynch Commodities, Inc.

    Filed Date: 10/28/16.

    Accession Number: 20161028-5251.

    Comments Due: 5 p.m. ET 11/18/16.

    Docket Numbers: ER12-229-001.

    Applicants: ISO New England Inc.

    Description: Informational Compliance Filing regarding ISO's re-examination of the currently effective Base Capacity Cost Rate of ISO New England Inc. Supplement also filed.

    Filed Date: 10/28/16.

    Accession Number: 20161028-5259; 20161028-5270.

    Comments Due: 5 p.m. ET 11/18/16.

    Docket Numbers: ER15-1499-004.

    Applicants: Southwest Power Pool, Inc.

    Description: Compliance filing: City of Independence 2017 Stated Rate Compliance Filing to be effective 1/1/2017.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5084.

    Comments Due: 5 p.m. ET 11/21/16.

    Docket Numbers: ER16-1993-001.

    Applicants: CleanChoice Energy, Inc.

    Description: Notice of Change in Status of CleanChoice Energy, Inc.

    Filed Date: 10/28/16.

    Accession Number: 20161028-5250.

    Comments Due: 5 p.m. ET 11/18/16.

    Docket Numbers: ER17-210-001.

    Applicants: Sabine Cogen, LP.

    Description: Tariff Amendment: Amendment to Reactive Revenue Rate Schedule to be effective 11/1/2016.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5137.

    Comments Due: 5 p.m. ET 11/21/16.

    Docket Numbers: ER17-210-002.

    Applicants: Sabine Cogen, LP.

    Description: Tariff Amendment: Further Amendment to Reactive Rate Tariff to be effective 12/1/2016.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5148.

    Comments Due: 5 p.m. ET 11/21/16.

    Docket Numbers: ER17-225-000.

    Applicants: San Diego Gas & Electric Company.

    Description: Compliance filing: SDGEs Order Nos. 827 and 828 Compliance Filing to be effective 10/17/2016.

    Filed Date: 10/28/16.

    Accession Number: 20161028-5216.

    Comments Due: 5 p.m. ET 11/18/16.

    Docket Numbers: ER17-226-000.

    Applicants: Southwest Power Pool, Inc.

    Description: § 205(d) Rate Filing: 3125R3 Basin Electric Power Cooperative NITSA and NOA to be effective 10/1/2016.

    Filed Date: 10/28/16.

    Accession Number: 20161028-5217.

    Comments Due: 5 p.m. ET 11/18/16.

    Docket Numbers: ER17-227-000.

    Applicants: Innovative Solar 47, LLC.

    Description: Baseline eTariff Filing: Innovative Solar 47, LLC MBR Tariff to be effective 11/1/2016.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5069.

    Comments Due: 5 p.m. ET 11/21/16.

    Docket Numbers: ER17-228-000.

    Applicants: King Forest Industries, Inc.

    Description: Baseline eTariff Filing: King Forest MBR Application to be effective 12/1/2016.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5078.

    Comments Due: 5 p.m. ET 11/21/16.

    Docket Numbers: ER17-229-000.

    Applicants: New England Power Pool Participants Committee.

    Description: § 205(d) Rate Filing: Nov 2016 Membership Filing to be effective 10/1/2016.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5139.

    Comments Due: 5 p.m. ET 11/21/16.

    Docket Numbers: ER17-230-000.

    Applicants: Midcontinent Independent System Operator, Inc.

    Description: § 205(d) Rate Filing: 2016-10-31_SA 2963 MidAmerican-MidAmerican GIA (J498) to be effective 11/1/2016.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5141.

    Comments Due: 5 p.m. ET 11/21/16.

    Docket Numbers: ER17-231-000.

    Applicants: Duke Energy Carolinas, LLC.

    Description: § 205(d) Rate Filing: NCMPA1 RS No. 318 Amendment (2017) to be effective 12/31/2016.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5143.

    Comments Due: 5 p.m. ET 11/21/16.

    Docket Numbers: ER17-232-000.

    Applicants: Southern California Edison Company.

    Description: § 205(d) Rate Filing: 2017 RSBAA Update Filing to be effective 1/1/2017.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5144.

    Comments Due: 5 p.m. ET 11/21/16.

    Docket Numbers: ER17-233-000.

    Applicants: Pennsylvania Electric Company.

    Description: Notice of Cancellation of Pennsylvania Electric Company Rate Schedule F.P.C. No. 56.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5149.

    Comments Due: 5 p.m. ET 11/21/16.

    Docket Numbers: ER17-234-000.

    Applicants: Kentucky Utilities Company.

    Description: § 205(d) Rate Filing: Recovery of Asset Retirement Obligation to be effective 7/1/2016.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5150.

    Comments Due: 5 p.m. ET 11/21/16.

    Docket Numbers: ER17-235-000.

    Applicants: NSTAR Electric Company.

    Description: Initial rate filing: NSTAR-National Grid Facilities Support Agreement—Edgar Station to be effective 11/1/2016.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5178.

    Comments Due: 5 p.m. ET 11/21/16.

    Docket Numbers: ER17-47-001.

    Applicants: DifWind Farms LTD VI.

    Description: Tariff Amendment: Supplement to MBR Application to be effective 12/7/2016.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5136.

    Comments Due: 5 p.m. ET 11/21/16.

    Docket Numbers: ER17-48-001.

    Applicants: Terra-Gen Mojave Windfarms, LLC.

    Description: Tariff Amendment: Supplement to MBR Application to be effective 12/7/2016.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5133.

    Comments Due: 5 p.m. ET 11/21/16.

    Take notice that the Commission received the following electric securities filings:

    Docket Numbers: ES17-7-000.

    Applicants: Mid-Atlantic Interstate Transmission, LLC.

    Description: Application of Mid-Atlantic Interstate Transmission, LLC for Authorization Under Section 204 of the Federal Power Act.

    Filed Date: 10/28/16.

    Accession Number: 20161028-5220.

    Comments Due: 5 p.m. ET 11/18/16.

    The filings are accessible in the Commission's eLibrary system by clicking on the links or querying the docket number.

    Any person desiring to intervene or protest in any of the above proceedings must file in accordance with Rules 211 and 214 of the Commission's Regulations (18 CFR 385.211 and 385.214) on or before 5:00 p.m. Eastern time on the specified comment date. Protests may be considered, but intervention is necessary to become a party to the proceeding.

    eFiling is encouraged. More detailed information relating to filing requirements, interventions, protests, service, and qualifying facilities filings can be found at: http://www.ferc.gov/docs-filing/efiling/filing-req.pdf. For other information, call (866) 208-3676 (toll free). For TTY, call (202) 502-8659.

    Dated: October 31, 2016. Nathaniel J. Davis, Sr., Deputy Secretary.
    [FR Doc. 2016-26706 Filed 11-3-16; 8:45 am] BILLING CODE 6717-01-P
    DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings

    Take notice that the Commission has received the following Natural Gas Pipeline Rate and Refund Report filings:

    Filings Instituting Proceedings

    Docket Numbers: RP17-77-000.

    Applicants: OkTex Pipeline Company, L.L.C.

    Description: Compliance filing 2015-2016—Gas Sales and Purchases Report.

    Filed Date: 10/27/16.

    Accession Number: 20161027-5073.

    Comments Due: 5 p.m. ET 11/8/16.

    Docket Numbers: RP17-78-000.

    Applicants: Rockies Express Pipeline LLC.

    Description: § 4(d) Rate Filing: Neg Rate 2016-10-27 BP for 10-30-16 to be effective 10/30/2016.

    Filed Date: 10/27/16.

    Accession Number: 20161027-5079.

    Comments Due: 5 p.m. ET 11/8/16.

    Docket Numbers: RP17-79-000.

    Applicants: Elba Express Company, L.L.C.

    Description: § 4(d) Rate Filing: 2016 Expansion Negotiated Rate Filing to be effective 12/1/2016.

    Filed Date: 10/27/16.

    Accession Number: 20161027-5080.

    Comments Due: 5 p.m. ET 11/8/16.

    Docket Numbers: RP17-80-000.

    Applicants: Pine Needle LNG Company, LLC.

    Description: § 4(d) Rate Filing: Negotiated Rate Authority to be effective 11/27/2016.

    Filed Date: 10/27/16.

    Accession Number: 20161027-5087.

    Comments Due: 5 p.m. ET 11/8/16.

    Docket Numbers: RP17-81-000.

    Applicants: Wyoming Interstate Company, L.L.C.

    Description: § 4(d) Rate Filing: L&U and Fuel Filing to be effective 12/1/2016.

    Filed Date: 10/27/16.

    Accession Number: 20161027-5103.

    Comments Due: 5 p.m. ET 11/8/16.

    Docket Numbers: RP17-82-000.

    Applicants: Transcontinental Gas Pipe Line Company.

    Description: § 4(d) Rate Filing: 2016 GSS LSS Tracker (EP & TCRA) to be effective 11/1/2016.

    Filed Date: 10/27/16.

    Accession Number: 20161027-5128.

    Comments Due: 5 p.m. ET 11/8/16.

    Docket Numbers: RP17-83-000.

    Applicants: Transcontinental Gas Pipe Line Company.

    Description: § 4(d) Rate Filing: Negotiated Rates—Cherokee AGL—Replacement Shippers—Nov 2016 to be effective 11/1/2016.

    Filed Date: 10/27/16.

    Accession Number: 20161027-5140.

    Comments Due: 5 p.m. ET 11/8/16.

    Docket Numbers: RP17-84-000.

    Applicants: Natural Gas Pipeline Company of America.

    Description: § 4(d) Rate Filing: Occidental Energy Negotiated Rate to be effective 11/1/2016.

    Filed Date: 10/27/16.

    Accession Number: 20161027-5162.

    Comments Due: 5 p.m. ET 11/8/16.

    Docket Numbers: RP17-85-000.

    Applicants: Iroquois Gas Transmission System, L.P.

    Description: § 4(d) Rate Filing: 10/27/16 Negotiated Rates—Trafigura Trading LLC (HUB) 7445-89 to be effective 11/1/2016.

    Filed Date: 10/27/16.

    Accession Number: 20161027-5171.

    Comments Due: 5 p.m. ET 11/8/16.

    Docket Numbers: RP17-86-000.

    Applicants: Algonquin Gas Transmission, LLC.

    Description: § 4(d) Rate Filing: Salem Lateral Non-Conforming Agreements Filing to be effective 11/1/2016.

    Filed Date: 10/27/16.

    Accession Number: 20161027-5174.

    Comments Due: 5 p.m. ET 11/8/16.

    Docket Numbers: RP17-87-000.

    Applicants: Algonquin Gas Transmission, LLC.

    Description: Compliance filing Salem Lateral—Negotiated Rates Filing to be effective 11/1/2016.

    Filed Date: 10/27/16.

    Accession Number: 20161027-5179.

    Comments Due: 5 p.m. ET 11/8/16.

    Docket Numbers: RP17-88-000.

    Applicants: Trunkline Gas Company, LLC.

    Description: Compliance filing Annual Interruptible Storage Revenue Credit filed 10-28-16.

    Filed Date: 10/28/16.

    Accession Number: 20161028-5006.

    Comments Due: 5 p.m. ET 11/9/16.

    Docket Numbers: RP17-89-000.

    Applicants: Destin Pipeline Company, L.L.C.

    Description: § 4(d) Rate Filing: Fuel Retention Adjustment Oct 2016 to be effective 12/1/2016.

    Filed Date: 10/28/16.

    Accession Number: 20161028-5018.

    Comments Due: 5 p.m. ET 11/9/16.

    Docket Numbers: RP17-90-000.

    Applicants: Dominion Transmission, Inc.

    Description: § 4(d) Rate Filing: DTI—October 28, 2016 Negotiated Rate Agreements to be effective 11/1/2016.

    Filed Date: 10/28/16.

    Accession Number: 20161028-5032.

    Comments Due: 5 p.m. ET 11/9/16.

    Docket Numbers: RP17-91-000.

    Applicants: Transcontinental Gas Pipe Line Company.

    Description: Compliance filing Penalty Revenue Sharing Report—2016.

    Filed Date: 10/28/16.

    Accession Number: 20161028-5038.

    Comments Due: 5 p.m. ET 11/9/16.

    Docket Numbers: RP17-92-000.

    Applicants: Iroquois Gas Transmission System, L.P.

    Description: § 4(d) Rate Filing: 10/28/16 Negotiated Rates—Statoil Natural Gas LLC (RTS)—7120-03 to be effective 11/1/2016.

    Filed Date: 10/28/16.

    Accession Number: 20161028-5069.

    Comments Due: 5 p.m. ET 11/9/16.

    Docket Numbers: RP17-93-000.

    Applicants: Texas Eastern Transmission, LP.

    Description: § 4(d) Rate Filing: PCB TETLP DEC 2016 FILING to be effective 12/1/2016.

    Filed Date: 10/28/16.

    Accession Number: 20161028-5111.

    Comments Due: 5 p.m. ET 11/9/16.

    Docket Numbers: RP17-94-000.

    Applicants: Rockies Express Pipeline LLC.

    Description: § 4(d) Rate Filing: Neg Rate 2016-10-28 4 K's to be effective 11/1/2016.

    Filed Date: 10/28/16.

    Accession Number: 20161028-5128.

    Comments Due: 5 p.m. ET 11/9/16.

    Docket Numbers: RP17-95-000.

    Applicants: Vector Pipeline L.P.

    Description: § 4(d) Rate Filing: Tariff Revisions Filing (10-31-2016) to be effective 12/1/2016.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5024.

    Comments Due: 5 p.m. ET 11/14/16.

    Docket Numbers: RP17-96-000.

    Applicants: Texas Eastern Transmission, LP.

    Description: § 4(d) Rate Filing: Negotiated Rates ConocoPhillips contract 911388 to be effective 11/1/2016.

    Filed Date: 10/31/16.

    Accession Number: 20161031-5030.

    Comments Due: 5 p.m. ET 11/14/16.

    The filings are accessible in the Commission's eLibrary system by clicking on the links or querying the docket number.

    Any person desiring to intervene or protest in any of the above proceedings must file in accordance with Rules 211 and 214 of the Commission's Regulations (18 CFR 385.211 and 385.214) on or before 5:00 p.m. Eastern time on the specified comment date. Protests may be considered, but intervention is necessary to become a party to the proceeding.

    eFiling is encouraged. More detailed information relating to filing requirements, interventions, protests, service, and qualifying facilities filings can be found at: http://www.ferc.gov/docs-filing/efiling/filing-req.pdf. For other information, call (866) 208-3676 (toll free). For TTY, call (202) 502-8659.

    Dated: October 31, 2016. Nathaniel J. Davis, Sr., Deputy Secretary.
    [FR Doc. 2016-26707 Filed 11-3-16; 8:45 am] BILLING CODE 6717-01-P
    ENVIRONMENTAL PROTECTION AGENCY [FRL-9954-72-OAR] Clean Air Act Advisory Committee (CAAAC): Notice of Meeting AGENCY:

    Environmental Protection Agency (EPA).

    ACTION:

    Notice.

    SUMMARY:

    The Environmental Protection Agency (EPA) announces an upcoming meeting for the Clean Air Act Advisory Committee (CAAAC). The EPA established the CAAAC on November 19, 1990, to provide independent advice and counsel to EPA on policy issues associated with implementation of the Clean Air Act of 1990. The Committee advises on economic, environmental, technical, scientific and enforcement policy issues.

    DATES:

    The CAAAC will hold its next face-to-face meeting on December 1, 2016, from 8:30 a.m. to 4:30 p.m.

    ADDRESSES:

    The meeting will take place at the DuPont Circle Hotel, 1500 New Hampshire Avenue NW., Washington, DC 20036.

    FOR FURTHER INFORMATION CONTACT:

    Any member of the public who wants further information concerning this CAAAC public meeting may contact Tamara Saltman of the Office of Air and Radiation, U.S. EPA, at [email protected]. Additional information about this meeting, the CAAAC, and its subcommittees and workgroups can be found on the CAAAC Web site: http://www.epa.gov/oar/caaac/.

    SUPPLEMENTARY INFORMATION:

    Inspection of Committee Documents: The committee agenda and any documents prepared for the meeting will be publicly available on the CAAAC Web site at http://www.epa.gov/oar/caaac/ prior to the meeting. Thereafter, these documents, together with CAAAC meeting minutes, will be available on the CAAAC Web site or by contacting the Office of Air and Radiation Docket and requesting information under docket EPA-HQ-OAR-2004-0075. The docket office can be reached by email at: [email protected] or FAX: 202-566-9744.

    For information on access or services for individuals with disabilities, please contact Lorraine Reddick at [email protected], preferably at least 10 days prior to the meeting to give EPA as much time as possible to process your request.

    Authority:

    5 U.S.C. App. 2 Section 10(a)(2).

    Dated: October 24, 2016. Tamara Saltman, Interim Designated Federal Officer, Clean Air Act Advisory Committee, Office of Air and Radiation.
    [FR Doc. 2016-26738 Filed 11-3-16; 8:45 am] BILLING CODE 6560-50-P
    ENVIRONMENTAL PROTECTION AGENCY [ER-FRL-9029-9] Environmental Impact Statements; Notice of Availability

    Responsible Agency: Office of Federal Activities, General Information (202) 564-7146 or http://www.epa.gov/nepa.

    Weekly receipt of Environmental Impact Statements (EISs) Filed 10/24/2016 Through 10/28/2016 Pursuant to 40 CFR 1506.9 Notice

    Section 309(a) of the Clean Air Act requires that EPA make public its comments on EISs issued by other Federal agencies. EPA's comment letters on EISs are available at: http://www.epa.gov/compliance/nepa/eisdata.html.

    EIS No. 20160255, Draft, NPS, ND, Knife River Indian Villages National Historic Site Archeological Resources Management Plan, Comment Period Ends: 01/03/2017, Contact: James Lange 402-661-1900
    EIS No. 20160256, Draft Supplement, USACE, MO, Mississippi River between the Ohio and Missouri Rivers (Regulating Works), Comment Period Ends: 12/19/2016, Contact: Kip Runyon 314-331-8396
    EIS No. 20160257, Final, Caltrans, CA, State Route 79 Realignment Project: Domenigoni Parkway to Gilman Springs Road, Review Period Ends: 12/05/2016, Contact: Aaron P. Burton 909-383-2841
    EIS No. 20160258, Final, NRC, FL, Combined Licenses (COLs) for Turkey Point Nuclear Plant Units 6 and 7, Review Period Ends: 12/05/2016, Contact: Alicia Williamson Dickerson 301-415-1878
    EIS No. 20160259, Draft, USFS, CO, Upper Monument Creek Landscape Restoration, Comment Period Ends: 12/19/2016, Contact: Carin Vadala 719-636-1602
    EIS No. 20160260, Draft, USACE, ND, Mouse River Enhanced Flood Protection Project, Comment Period Ends: 12/22/2016, Contact: Derek Ingvalson 651-290-5252
    Dated: November 1, 2016. Karin Leff, Acting Director, NEPA Compliance Division, Office of Federal Activities.
    [FR Doc. 2016-26734 Filed 11-3-16; 8:45 am] BILLING CODE 6560-50-P
    FEDERAL COMMUNICATIONS COMMISSION [OMB 3060-0178 and 3060-0706] Information Collections Being Reviewed by the Federal Communications Commission Under Delegated Authority AGENCY:

    Federal Communications Commission.

    ACTION:

    Notice and request for comments.

    SUMMARY:

    As part of its continuing effort to reduce paperwork burdens, and as required by the Paperwork Reduction Act (PRA) of 1995, the Federal Communications Commission (FCC or Commission) invites the general public and other Federal agencies to take this opportunity to comment on the following information collections. Comments are requested concerning: Whether the proposed collection of information is necessary for the proper performance of the functions of the Commission, including whether the information shall have practical utility; the accuracy of the Commission's burden estimate; ways to enhance the quality, utility, and clarity of the information collected; ways to minimize the burden of the collection of information on the respondents, including the use of automated collection techniques or other forms of information technology; and ways to further reduce the information collection burden on small business concerns with fewer than 25 employees.

    The FCC may not conduct or sponsor a collection of information unless it displays a currently valid OMB control number. No person shall be subject to any penalty for failing to comply with a collection of information subject to the PRA that does not display a valid OMB control number.

    DATES:

    Written PRA comments should be submitted on or before January 3, 2017. If you anticipate that you will be submitting comments, but find it difficult to do so within the period of time allowed by this notice, you should advise the contact listed below as soon as possible.

    ADDRESSES:

    Direct all PRA comments to Cathy Williams, FCC, via email [email protected] and to [email protected].

    FOR FURTHER INFORMATION CONTACT:

    For additional information about the information collection, contact Cathy Williams at (202) 418-2918.

    SUPPLEMENTARY INFORMATION:

    OMB Control Number: 3060-0178.

    Title: Section 73.1560, Operating Power and Mode Tolerances.

    Form Number: N/A.

    Type of Review: Extension of a currently approved collection.

    Respondents: Business or other for-profit entities.

    Number of Respondents and Responses: 80 respondents; 80 responses.

    Estimated Time per Response: 1 hour.

    Frequency of Response: On occasion reporting requirement.

    Obligation to Respond: Required to obtain or retain benefits. The statutory authority for this collection of information is contained in Section 154(i) of the Communications Act of 1934, as amended.

    Total Annual Burden: 80 hours.

    Total Annual Cost: None.

    Privacy Impact Assessment: No impact(s).

    Nature and Extent of Confidentiality: There is no need for confidentiality with this collection of information.

    Needs and Uses: 47 CFR part 73.1560(d) requires that licensees of AM, FM or TV stations file a notification with the FCC when operation at reduced power will exceed ten consecutive days and upon restoration of normal operations. If causes beyond the control of the licensee prevent restoration of authorized power within a 30-day period, an informal written request must be made for any additional time as may be necessary to restore normal operations.

    OMB Control Number: 3060-0706.

    Title: Sections 76.952 and 76.990, Cable Act Reform.

    Type of Review: Extension of a currently approved collection.

    Respondents: Business or other for-profit entities; State, Local or Tribal Government.

    Number of Respondents and Responses: 70 respondents; 70 responses.

    Estimated Time per Response: 1-8 hours.

    Frequency of Response: On occasion reporting requirement; Third party disclosure requirement.

    Obligation to Respond: Required to obtain or retain benefits. The statutory authority for this collection of information is contained in the Telecommunications Act of 1996, Public Law 104-104, Sections 301 and 302, 110 Stat. 56, 114-124.

    Total Annual Burden: 210 hours.

    Total Annual Cost: None.

    Privacy Act Impact Assessment: No impact(s).

    Nature and Extent of Confidentiality: There is no need for confidentiality with this collection of information.

    Needs and Uses: 47 CFR 76.952 states that all cable operators must provide to the subscribers on monthly bills the name, mailing address and phone number of the franchising authority, unless the franchising authority in writing requests that the cable operator omits such information. The cable operator must also provide subscribers with the FCC community unit identifier for the cable system in their communities.

    47 CFR 76.990(b)(1) provides that a small cable operator may certify in writing to its franchising authority at any time that it meets all criteria necessary to qualify as a small operator. Upon request of the local franchising authority, the operator shall identify in writing all of its affiliates that provide cable service, the total subscriber base of itself and each affiliate, and the aggregate gross revenues of its cable and non-cable affiliates. Within 90 days of receiving the original certification, the local franchising authority shall determine whether the operator qualifies for deregulation and shall notify the operator in writing of its decision, although this 90-day period shall be tolled for so long as it takes the operator to respond to a proper request for information by the local franchising authority. An operator may appeal to the Commission a local franchising authority's information request if the operator seeks to challenge the information request as unduly or unreasonably burdensome. If the local franchising authority finds that the operator does not qualify for deregulation, its notice shall state the grounds for that decision. The operator may appeal the local franchising authority's decision to the Commission within 30 days.
    Federal Communications Commission. Marlene H. Dortch, Secretary, Office of the Secretary.
    [FR Doc. 2016-26639 Filed 11-3-16; 8:45 am] BILLING CODE 6712-01-P
    FEDERAL RESERVE SYSTEM Formations of, Acquisitions by, and Mergers of Bank Holding Companies

    The companies listed in this notice have applied to the Board for approval, pursuant to the Bank Holding Company Act of 1956 (12 U.S.C. 1841 et seq.) (BHC Act), Regulation Y (12 CFR part 225), and all other applicable statutes and regulations to become a bank holding company and/or to acquire the assets or the ownership of, control of, or the power to vote shares of a bank or bank holding company and all of the banks and nonbanking companies owned by the bank holding company, including the companies listed below.

    The applications listed below, as well as other related filings required by the Board, are available for immediate inspection at the Federal Reserve Bank indicated. The applications will also be available for inspection at the offices of the Board of Governors. Interested persons may express their views in writing on the standards enumerated in the BHC Act (12 U.S.C. 1842(c)). If the proposal also involves the acquisition of a nonbanking company, the review also includes whether the acquisition of the nonbanking company complies with the standards in section 4 of the BHC Act (12 U.S.C. 1843). Unless otherwise noted, nonbanking activities will be conducted throughout the United States.

    Unless otherwise noted, comments regarding each of these applications must be received at the Reserve Bank indicated or the offices of the Board of Governors not later than December 2, 2016.

    A. Federal Reserve Bank of Kansas City (Dennis Denney, Assistant Vice President) 1 Memorial Drive, Kansas City, Missouri 64198-0001:

    1. Central Kansas Bancshares, Inc., Woodbine, Kansas; to become a bank holding company by acquiring 100 percent of the voting shares of The Citizens State Bank and Trust Company, Woodbine, Kansas, and Roxbury Bank, Roxbury, Kansas.

    Board of Governors of the Federal Reserve System, November 1, 2016. Michele Taylor Fennell, Assistant Secretary of the Board.
    [FR Doc. 2016-26701 Filed 11-3-16; 8:45 am] BILLING CODE 6210-01-P
    FEDERAL RESERVE SYSTEM Change in Bank Control Notices; Acquisitions of Shares of a Bank or Bank Holding Company

    The notificants listed below have applied under the Change in Bank Control Act (12 U.S.C. 1817(j)) and § 225.41 of the Board's Regulation Y (12 CFR 225.41) to acquire shares of a bank or bank holding company. The factors that are considered in acting on the notices are set forth in paragraph 7 of the Act (12 U.S.C. 1817(j)(7)).

    The notices are available for immediate inspection at the Federal Reserve Bank indicated. The notices also will be available for inspection at the offices of the Board of Governors. Interested persons may express their views in writing to the Reserve Bank indicated for that notice or to the offices of the Board of Governors. Comments must be received not later than November 21, 2016.

    A. Federal Reserve Bank of Richmond (Adam M. Drimer, Assistant Vice President) 701 East Byrd Street, Richmond, Virginia 23261-4528. Comments can also be sent electronically to [email protected]:

    1. Kenneth R. Lehman, Arlington, Virginia; to acquire voting shares of Virginia Partners Bank, Fredericksburg, Virginia.

    Board of Governors of the Federal Reserve System, November 1, 2016. Michele Taylor Fennell, Assistant Secretary of the Board.
    [FR Doc. 2016-26702 Filed 11-3-16; 8:45 am] BILLING CODE 6210-01-P
    DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention [30Day-16-16JD] Agency Forms Undergoing Paperwork Reduction Act Review

    The Centers for Disease Control and Prevention (CDC) has submitted the following information collection request to the Office of Management and Budget (OMB) for review and approval in accordance with the Paperwork Reduction Act of 1995. The notice for the proposed information collection is published to obtain comments from the public and affected agencies.

    Written comments and suggestions from the public and affected agencies concerning the proposed collection of information are encouraged. Your comments should address any of the following: (a) Evaluate whether the proposed collection of information is necessary for the proper performance of the functions of the agency, including whether the information will have practical utility; (b) Evaluate the accuracy of the agency's estimate of the burden of the proposed collection of information, including the validity of the methodology and assumptions used; (c) Enhance the quality, utility, and clarity of the information to be collected; (d) Minimize the burden of the collection of information on those who are to respond, including through the use of appropriate automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses; and (e) Assess information collection costs.

    To request additional information on the proposed project or to obtain a copy of the information collection plan and instruments, call (404) 639-7570 or send an email to [email protected]. Written comments and/or suggestions regarding the items contained in this notice should be directed to the Attention: CDC Desk Officer, Office of Management and Budget, Washington, DC 20503 or by fax to (202) 395-5806. Written comments should be received within 30 days of this notice.

    Proposed Project

    Cohort Study of HIV, STIs and Preventive Interventions among Young MSM in Thailand—New—National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention (NCHHSTP), Centers for Disease Control and Prevention (CDC).

    Background and Brief Description

    CDC requests OMB approval for a new three-year information collection.

    In Thailand, there is a very high HIV incidence in men who have sex with men (MSM) and transgender women (TGW). It is estimated that over 50% of all new HIV infections are occurring in MSM and TGW. At Silom Community Clinic @Tropical Medicine (SCC @TropMed), there is a reported average HIV prevalence of 28% and HIV incidence of 8 per 100 person-years in young MSM (YMSM).

    Gaps in understanding of the HIV epidemic in Thailand, as well as globally, include the epidemiology, risk factors, and HIV-related beliefs and knowledge of gay-identified and transgender youth. In 2013, the Joint United Nations Programme on HIV and AIDS reported that 95% of new HIV infections were in low- and middle-income countries, where more than one third of new infections were among young people (<18 years) who were unaware of their HIV status. Adolescents living with HIV are more likely to die from AIDS, and there is little tracking of the HIV epidemic and outcomes in adolescents.

    We propose a study of males aged 15-29 years at risk for HIV. This study includes a longitudinal assessment (cohort) to assess HIV and sexually transmitted infection incidence and prevalence. This study will also generate critical data on HIV and STD incidence and prevalence in young men and adolescent males.

    This is the first study of its kind in Bangkok to collect data on HIV and STI incidence, access to HIV prevention, and attitudes about HIV prevention in adolescents ages 15-17 years. In addition to the cohort activities, in which young persons are followed over three years, this study will collect needed qualitative data in the form of focus group discussions (FGDs) and key informant interviews (KIIs) with teens, and with those who serve these teens in the community, on HIV prevention, access to testing, pre-exposure prophylaxis (PrEP), and other issues relevant to HIV prevention. The qualitative component will assess adolescents' and key leaders' HIV prevention knowledge and practices. This is a five-year study in total, with a two-year enrollment period and active follow-up over three years.

    A study of young men at risk in Thailand is urgently needed to provide necessary data to assess and implement prevention strategies and inform policies for HIV prevention in Thailand, as well as globally. There is no cost to participants other than their time.

    The total estimated annualized burden hours are 814.

    Estimated Annualized Burden Hours

    Type of respondent           Form name                   Number of respondents   Number of responses per respondent   Average burden per response (in hours)
    Community members            FGD Consent Assent          10                      1                                    30/60
                                 FGD                         10                      1                                    2
                                 KII Consent Assent          4                       1                                    30/60
                                 KII                         4                       1                                    2
                                 Screening checklist         300                     1                                    15/60
    Potential Participant        Screening consent Assent    300                     1                                    30/60
    Potential Participant        Screening CASI              300                     1                                    15/60
    HIV-positive at screening    HIV CASI                    60                      1                                    2/60
    Participants                 Enrollment Consent Assent   167                     1                                    30/60
    Participants                 Follow-up CASI              167                     4                                    15/60
    Participants                 YMSM Clinical Form          167                     4                                    20/60
    HIV-positive Participants    HIV CASI Cohort             46                      4                                    1/60
    Leroy A. Richardson, Chief, Information Collection Review Office, Office of Scientific Integrity, Office of the Associate Director for Science, Office of the Director, Centers for Disease Control and Prevention.
    [FR Doc. 2016-26667 Filed 11-3-16; 8:45 am] BILLING CODE 4163-18-P
    DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare & Medicaid Services [Document Identifier: CMS-3070G-I, CMS-R-38 and CMS-10636] Agency Information Collection Activities: Proposed Collection; Comment Request AGENCY:

    Centers for Medicare & Medicaid Services, HHS.

    ACTION:

    Notice.

    SUMMARY:

    The Centers for Medicare & Medicaid Services (CMS) is announcing an opportunity for the public to comment on CMS' intention to collect information from the public. Under the Paperwork Reduction Act of 1995 (the PRA), federal agencies are required to publish notice in the Federal Register concerning each proposed collection of information (including each proposed extension or reinstatement of an existing collection of information) and to allow 60 days for public comment on the proposed action. Interested persons are invited to send comments regarding our burden estimates or any other aspect of this collection of information, including any of the following subjects: (1) The necessity and utility of the proposed information collection for the proper performance of the agency's functions; (2) the accuracy of the estimated burden; (3) ways to enhance the quality, utility, and clarity of the information to be collected; and (4) the use of automated collection techniques or other forms of information technology to minimize the information collection burden.

    DATES:

    Comments must be received by January 3, 2017.

    ADDRESSES:

    When commenting, please reference the document identifier or OMB control number. To be assured consideration, comments and recommendations must be submitted in any one of the following ways:

    1. Electronically. You may send your comments electronically to http://www.regulations.gov. Follow the instructions for “Comment or Submission” or “More Search Options” to find the information collection document(s) that are accepting comments.

    2. By regular mail. You may mail written comments to the following address: CMS, Office of Strategic Operations and Regulatory Affairs, Division of Regulations Development, Attention: Document Identifier/OMB Control Number ___, Room C4-26-05, 7500 Security Boulevard, Baltimore, Maryland 21244-1850.

    To obtain copies of a supporting statement and any related forms for the proposed collection(s) summarized in this notice, you may make your request using one of the following:

    1. Access CMS' Web site address at http://www.cms.hhs.gov/PaperworkReductionActof1995.

    2. Email your request, including your address, phone number, OMB number, and CMS document identifier, to [email protected].

    3. Call the Reports Clearance Office at (410) 786-1326.

    FOR FURTHER INFORMATION CONTACT:

    Reports Clearance Office at (410) 786-1326.

    SUPPLEMENTARY INFORMATION:

    Contents

    This notice sets out a summary of the use and burden associated with the following information collections. More detailed information can be found in each collection's supporting statement and associated materials (see ADDRESSES).

    CMS-3070G-I ICF/IID Survey Report Form and Supporting Regulations.
    CMS-R-38 Conditions for Certification for Rural Health Clinics.
    CMS-10636 Three-Year Network Adequacy Review for Medicare Advantage Organizations.

    Under the PRA (44 U.S.C. 3501-3520), federal agencies must obtain approval from the Office of Management and Budget (OMB) for each collection of information they conduct or sponsor. The term “collection of information” is defined in 44 U.S.C. 3502(3) and 5 CFR 1320.3(c) and includes agency requests or requirements that members of the public submit reports, keep records, or provide information to a third party. Section 3506(c)(2)(A) of the PRA requires federal agencies to publish a 60-day notice in the Federal Register concerning each proposed collection of information, including each proposed extension or reinstatement of an existing collection of information, before submitting the collection to OMB for approval. To comply with this requirement, CMS is publishing this notice.

    Information Collection

    1. Type of Information Collection Request: Revision of a currently approved collection; Title of Information Collection: ICF/IID Survey Report Form and Supporting Regulations; Use: The information collected with forms 3070G-I is used to determine the level of compliance with Intermediate Care Facilities for Individuals with Intellectual Disabilities (ICF/IID) CoPs necessary to participate in the Medicare/Medicaid program. Information needed to monitor the State's performance, as well as the ICF/IID program in general, is available to CMS only through the use of information abstracted from the survey report form. The form serves as a coding worksheet designed to facilitate data entry and retrieval into the Automated Survey Processing Environment Suite (ASPEN) in the State and at the CMS regional offices. Form Number: CMS-3070G-I (OMB Control Number: 0938-0062); Frequency: Reporting—Yearly; Affected Public: Private Sector: Business or other for-profits and Not-for-profit institutions; Number of Respondents: 6,310; Total Annual Responses: 6,310; Total Annual Hours: 18,930. (For policy questions regarding this collection contact Melissa Rice at 410-786-3270.)

    2. Type of Information Collection Request: Revision of a currently approved collection; Title of Information Collection: Conditions for Certification for Rural Health Clinics; Use: The Rural Health Clinic (RHC) conditions of certification are based on criteria prescribed in law and are designed to ensure that each facility has a properly trained staff to provide appropriate care and to assure a safe physical environment for patients. We use these conditions of participation to certify RHCs wishing to participate in the Medicare program. These requirements are similar in intent to standards developed by industry organizations such as the Joint Commission on Accreditation of Hospitals, the National League for Nursing, and the American Public Health Association, and merely reflect accepted standards of management and care to which rural health clinics must adhere. Form Number: CMS-R-38 (OMB control number: 0938-0334); Frequency: Recordkeeping and Reporting—Annually; Affected Public: Business or other for-profits; Number of Respondents: 4,247; Total Annual Responses: 4,247; Total Annual Hours: 18,284. (For policy questions regarding this collection contact Jacqueline Leach at 410-786-4282.)

    3. Type of Information Collection Request: New collection (Request for a new OMB control number); Title of Information Collection: Three-Year Network Adequacy Review for Medicare Advantage Organizations; Use: The CMS regulations at 42 CFR 422.112(a)(1)(i) and § 422.114(a)(3)(ii) require that all Medicare Advantage organizations (MAOs) offering coordinated care plans (e.g., HMO, PPO) or other network-based plans (e.g., network-based PFFS, network-based MSA, section 1876 cost plan) maintain a network of appropriate providers that is sufficient to provide adequate access to covered services to meet the needs of the population served. To enforce this requirement, CMS has developed network adequacy criteria, which sets forth the minimum number of providers and maximum travel time and distance from enrollees to providers, for each provider specialty type in each county in the United States and its territories. MAOs must be in compliance with the current CMS network adequacy criteria. This proposed collection of information is essential to appropriate and timely compliance monitoring by CMS, in order to ensure that all active MAO contracts offering network-based plans maintain an adequate network. Currently, CMS verifies that MAOs are compliant with the current CMS network adequacy criteria by performing a contract-level network review, which occurs when CMS requests that an MAO upload provider and facility Health Service Delivery (HSD) tables for a given contract to the Health Plan Management System (HPMS). If an MAO does not have its contract-level network formally reviewed by CMS after the initial contract application process, then there is no CMS requirement for a network adequacy review unless one of the above listed triggering events occurs. Therefore, CMS is proposing this collection of information in order to improve monitoring of MAOs' network adequacy. This collection of information requires the uploading of HSD tables to the Network Management Module (NMM) in HPMS for any contract that has not had an entire network review performed by CMS in the previous three years of contract operation. The collection process will occur at the contract level for each MAO that qualifies, and CMS will assess each contract against the current CMS network adequacy criteria. Each time an MAO's contract undergoes an entire network review during any of the triggering events listed on page one, the three-year anniversary date for that contract will be reset, and CMS will maintain an HPMS report to keep track of this date for every active network-based contract. Form Number: CMS-10636 (OMB control number 0938-New); Frequency: Yearly; Affected Public: Private sector (Business or other for-profits); Number of Respondents: 484; Total Annual Responses: 1,652; Total Annual Hours: 15,692. (For policy questions regarding this collection contact Theresa Wachter at 410-786-1157.)

    Dated: November 1, 2016. William N. Parham, III, Director, Paperwork Reduction Staff, Office of Strategic Operations and Regulatory Affairs.
    [FR Doc. 2016-26745 Filed 11-3-16; 8:45 am] BILLING CODE 4120-01-P
    DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare & Medicaid Services [Document Identifiers: CMS-10191 and CMS-10305] Agency Information Collection Activities: Submission for OMB Review; Comment Request AGENCY:

    Centers for Medicare & Medicaid Services, HHS.

    ACTION:

    Notice.

    SUMMARY:

    The Centers for Medicare & Medicaid Services (CMS) is announcing an opportunity for the public to comment on CMS' intention to collect information from the public. Under the Paperwork Reduction Act of 1995 (PRA), federal agencies are required to publish notice in the Federal Register concerning each proposed collection of information, including each proposed extension or reinstatement of an existing collection of information, and to allow a second opportunity for public comment on the notice. Interested persons are invited to send comments regarding the burden estimate or any other aspect of this collection of information, including any of the following subjects: The necessity and utility of the proposed information collection for the proper performance of the agency's functions; the accuracy of the estimated burden; ways to enhance the quality, utility, and clarity of the information to be collected; and the use of automated collection techniques or other forms of information technology to minimize the information collection burden.

    DATES:

    Comments on the collection(s) of information must be received by the OMB desk officer by December 5, 2016.

    ADDRESSES:

    When commenting on the proposed information collections, please reference the document identifier or OMB control number. To be assured consideration, comments and recommendations must be received by the OMB desk officer via one of the following transmissions: OMB, Office of Information and Regulatory Affairs, Attention: CMS Desk Officer, Fax Number: (202) 395-5806 OR, Email: [email protected].

    To obtain copies of a supporting statement and any related forms for the proposed collection(s) summarized in this notice, you may make your request using one of the following:

    1. Access CMS' Web site address at http://www.cms.hhs.gov/PaperworkReductionActof1995.

    2. Email your request, including your address, phone number, OMB number, and CMS document identifier, to [email protected].

    3. Call the Reports Clearance Office at (410) 786-1326.

    FOR FURTHER INFORMATION CONTACT:

    Reports Clearance Office at (410) 786-1326.

    SUPPLEMENTARY INFORMATION:

    Under the Paperwork Reduction Act of 1995 (PRA) (44 U.S.C. 3501-3520), federal agencies must obtain approval from the Office of Management and Budget (OMB) for each collection of information they conduct or sponsor. The term “collection of information” is defined in 44 U.S.C. 3502(3) and 5 CFR 1320.3(c) and includes agency requests or requirements that members of the public submit reports, keep records, or provide information to a third party. Section 3506(c)(2)(A) of the PRA (44 U.S.C. 3506(c)(2)(A)) requires federal agencies to publish a 30-day notice in the Federal Register concerning each proposed collection of information, including each proposed extension or reinstatement of an existing collection of information, before submitting the collection to OMB for approval. To comply with this requirement, CMS is publishing this notice that summarizes the following proposed collection(s) of information for public comment:

    1. Type of Information Collection Request: Revision of a currently approved collection; Title of Information Collection: Medicare Parts C and D Program Audit Protocols and Data Requests; Use: Under the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 and implementing regulations at 42 CFR parts 422 and 423, Medicare Part D plan sponsors and Medicare Advantage organizations are required to comply with all Medicare Parts C and D program requirements. In 2010, the explosive growth of these sponsoring organizations forced CMS to develop an audit strategy to ensure we continue to obtain meaningful audit results. As a result, CMS' audit strategy reflected a move to a more targeted, data-driven and risk-based audit approach. We focused on high-risk areas that have the greatest potential for beneficiary harm.

    To maximize resources, CMS will focus on assisting the industry to improve their operations to ensure beneficiaries receive access to care. One way to accomplish this is CMS will develop an annual audit strategy which describes how sponsors will be selected for audit and the areas that will be audited. CMS has developed several audit protocols and these are posted to the CMS Web site each year for use by sponsors to prepare for their audit. Currently CMS utilizes the following 7 protocols to audit sponsor performance: Formulary Administration (FA), Coverage Determinations, Appeals & Grievances (CDAG), Organization Determination, Appeals and Grievances (ODAG), Special Needs Model of Care (SNPMOC) (only administered on organizations who operate SNPs), Compliance Program Effectiveness (CPE), Medication Therapy Management (MTM) and Provider Network Accuracy (PNA). The data collected is detailed in each of these protocols and the exact fields are located in the record layouts, at the end of each protocol. In addition, questionnaires are distributed as part of our CDAG, ODAG and CPE audits. These questionnaires are also included in this package.

As part of a robust audit process, CMS also requires sponsors that have been audited and found to have deficiencies to undergo a validation audit to ensure correction. The validation audit uses the same audit protocols but tests only the elements where deficiencies were found, rather than re-administering the entire audit. Finally, to help improve the audit process, CMS sends sponsors a link to a survey (Appendix D) at the end of each audit to obtain their feedback; sponsors are not required to complete the survey. Form Number: CMS-10191 (OMB control number: 0938-1000); Frequency: Yearly; Affected Public: Private Sector (business or other for-profit and not-for-profit institutions); Number of Respondents: 40; Total Annual Responses: 40; Total Annual Hours: 13,640. (For policy questions regarding this collection contact Dawn Johnson at 410-786-3159.)

    2. Type of Information Collection Request: Revision of a currently approved collection; Title of Information Collection: Medicare Part C and Part D Data Validation (42 CFR 422.516(g) and 423.514(g)); Use: Organizations contracted to offer Medicare Part C and Part D benefits are required to report data to us on a variety of measures. For the data to be useful for monitoring and performance measurement, the data must be reliable, valid, complete, and comparable among sponsoring organizations. To meet this goal, we have developed reporting standards and data validation specifications with respect to the Part C and Part D reporting requirements. These standards provide a review process for Medicare Advantage Organizations, Cost Plans, and Part D sponsors to use to conduct data validation checks on their reported Part C and Part D data.

    The FDCF is revised for the 2017 and 2018 DV collection periods by changing the scoring of six standards from a binary scale to a five-point Likert-type scale. This change is expected to improve the precision of the data validation scores by increasing overall variation in total scores among the MAOs and PDPs. The revision is not expected to alter resource requirements, since the assessment by DV contractors in scoring standards will continue to be based on the percentage of records that meet the standards. Form Number: CMS-10305 (OMB control number: 0938-1115); Frequency: Yearly; Affected Public: Private sector—Business or other for-profits; Number of Respondents: 639; Total Annual Responses: 639; Total Annual Hours: 209,271. (For policy questions regarding this collection contact Terry Lied at 410-786-8973.)

    Dated: November 1, 2016. William N. Parham, III, Director, Paperwork Reduction Staff, Office of Strategic Operations and Regulatory Affairs.
    [FR Doc. 2016-26743 Filed 11-3-16; 8:45 am] BILLING CODE 4120-01-P
    DEPARTMENT OF HEALTH AND HUMAN SERVICES Administration for Children and Families [CFDA Number: 93.676] Announcement of the Award of Nine Single-Source Program Expansion Supplement Grants Under the Unaccompanied Children's (UC) Program AGENCY:

    Office of Refugee Resettlement (ORR), Administration for Children and Families (ACF), U.S. Department of Health and Human Services (HHS).

    ACTION:

    Notice of Award of nine single-source program expansion supplement grants under the UC Program.

    SUMMARY:

ACF, ORR announces the award of nine single-source program expansion supplement grants for a total of $21,164,141 under the UC Program.

Organization | Location | Amount
BCFS Health and Human Services | San Antonio, TX | $2,736,000
Heartland Human Care Services, Inc. | Chicago, IL | $1,463,856
Youth for Tomorrow | Bristow, VA | $2,184,311
Children's Village | Dobbs Ferry, NY | $1,922,400
International Educational Services | Brownsville, TX | $6,551,312
Mercy First | Syosset, NY | $877,255
Children's Home of Kingston | Kingston, NY | $464,743
Cayuga Center | New York, NY | $3,553,107
Leake and Watts Services | Yonkers, NY | $1,411,157

    ORR has been identifying additional capacity to provide shelter for potential increases in apprehensions of UC at the U.S. Southern Border. Planning for increased shelter capacity is a prudent step to ensure that ORR is able to meet its responsibility, by law, to provide shelter for UC referred to its care by the Department of Homeland Security (DHS).

The expansion supplement grants will support the need to increase shelter capacity to accommodate the increasing numbers of UC being referred by DHS. All nine grantees have the infrastructure, licensing, experience, and appropriate level of trained staff to meet the service requirements and the urgent need for expansion of services. The grantees provide residential services to UC in the care and custody of ORR, as well as counseling, case management, and additional support services to the family, or to the UC and their sponsor, when a UC is released from ORR's care and custody.

    DATES:

    Supplemental award funds will support activities from October 1, 2015, through September 30, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Jallyn Sualog, Director, Division of Children's Services, Office of Refugee Resettlement, 330 C Street SW., Washington, DC 20201. Email: [email protected].

    SUPPLEMENTARY INFORMATION:

    ORR is continuously monitoring its capacity to shelter the UC referred to HHS, as well as the information received from interagency partners, to inform any future decisions or actions.

ORR has specific requirements for the provision of services. Award recipients must have the infrastructure, licensing, experience, and appropriate level of trained staff to meet those requirements. The expansion of the existing program and its services through this supplemental award is a key strategy for ORR to be prepared to meet its responsibility to provide shelter for UC referred to its care by DHS, and to allow the U.S. Border Patrol to continue its vital national security mission to prevent illegal migration and trafficking and to protect the borders of the United States.

    Statutory Authority: This program is authorized by—

(A) Section 462 of the Homeland Security Act of 2002, which, in March 2003, transferred responsibility for the care and custody of Unaccompanied Alien Children from the Commissioner of the former Immigration and Naturalization Service (INS) to the Director of ORR of HHS.

    (B) The Flores Settlement Agreement, Case No. CV85-4544RJK (C.D. Cal. 1996), as well as the William Wilberforce Trafficking Victims Protection Reauthorization Act of 2008 (Pub. L. 110-457), which authorizes post release services under certain conditions to eligible children. All programs must comply with the Flores Settlement Agreement, Case No. CV85-4544-RJK (C.D. Cal. 1996), pertinent regulations, and ORR policies and procedures.

    Christopher Beach, Office of Administration, Office of Financial Services, Division of Grants Policy.
    [FR Doc. 2016-26673 Filed 11-3-16; 8:45 am] BILLING CODE 4184-45-P
    DEPARTMENT OF HEALTH AND HUMAN SERVICES Administration for Children and Families Submission for OMB Review; Comment Request

    Title: Tribal Child Support Enforcement Direct Funding Request: 45 CFR 309-Plan.

    OMB No.: 0970-0218.

Description: The final rule within 45 CFR part 309 contains a regulatory reporting requirement that, in order to receive funding for a Tribal IV-D program, a Tribe or Tribal organization must submit a plan describing how it meets or plans to meet the objectives of section 455(f) of the Social Security Act, including establishing paternity; establishing, modifying, and enforcing support orders; and locating noncustodial parents. The plan is required for all Tribes requesting funding; however, once a Tribe has met the requirements to operate a comprehensive program, a new plan is not required annually unless the Tribe makes changes to its title IV-D program. Tribes and Tribal organizations must respond if they wish to operate a fully funded program. This paperwork collection activity is set to expire in December 2016.

    Respondents: Tribes and Tribal Organizations.

Annual Burden Estimates

Instrument | Number of respondents | Number of responses per respondent | Average burden hours per response | Total burden hours
45 CFR 309 Amended Plan | 63 | 1 | 120 | 7,560
45 CFR 309 New Plan | 2 | 1 | 480 | 960
Total | | | 600 | 8,520

Estimated Total Annual Burden Hours: 8,520.

    Additional Information: Copies of the proposed collection may be obtained by writing to the Administration for Children and Families, Office of Planning, Research and Evaluation, 330 C Street SW., Washington, DC 20201. Attention Reports Clearance Officer. All requests should be identified by the title of the information collection. Email address: [email protected].

    OMB Comment: OMB is required to make a decision concerning the collection of information between 30 and 60 days after publication of this document in the Federal Register. Therefore, a comment is best assured of having its full effect if OMB receives it within 30 days of publication. Written comments and recommendations for the proposed information collection should be sent directly to the following: Office of Management and Budget, Paperwork Reduction Project, Email: [email protected]. Attn: Desk Officer for the Administration for Children and Families

    Robert Sargis, Reports Clearance Officer.
    [FR Doc. 2016-26615 Filed 11-3-16; 8:45 am] BILLING CODE 4184-01-P
    DEPARTMENT OF HEALTH AND HUMAN SERVICES Administration for Children and Families Withdrawal of 60-Day Notice of Proposed Information Collection: Unaccompanied Children Case Summary Form AGENCY:

    Administration for Children and Families, HHS.

    ACTION:

Withdrawal of notice.

    SUMMARY:

On October 4, 2016, at 81 FR 68420, ACF published a 60-Day Notice of Proposed Information Collection entitled “Unaccompanied Children Case Summary Form.” ACF is withdrawing this notice from the Federal Register.

    FOR FURTHER INFORMATION CONTACT:

    Robert Sargis, Reports Clearance Officer, Office of Planning Research and Evaluation.

    Robert Sargis, Reports Clearance Officer.
    [FR Doc. 2016-26686 Filed 11-3-16; 8:45 am] BILLING CODE 4184-01-P
DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health National Institute on Aging (NIA), National Institute of Mental Health (NIMH), and National Center for Advancing Translational Sciences (NCATS): Cooperative Research and Development Agreement (CRADA) and Licensing Opportunity for Ketamine for the Treatment of Depression and Other Anxiety-Related Disorders AGENCY:

    National Institutes of Health, HHS.

    ACTION:

    Notice.

    SUMMARY:

The National Institute on Aging (NIA), National Institute of Mental Health (NIMH), and National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH), the University of Maryland at Baltimore (UMB), and their collaborators are seeking Cooperative Research and Development Agreement (CRADA) partners to collaborate in the preclinical and clinical development of the ketamine metabolite (2R,6R)-HNK for the treatment of depression and other anxiety-related disorders.

    DATES:

    Interested candidate partners must submit a statement of interest and capability, no more than five pages long, to the NCATS point of contact before January 3, 2017 for consideration.

    FOR FURTHER INFORMATION CONTACT:

    Information on licensing and co-development research collaborations, and copies of the U.S. patent applications listed below may be obtained by contacting: Attn: Sury Vepa, Ph.D., J.D., Senior Licensing and Patenting Manager, National Center for Advancing Translational Sciences, NIH, 9800 Medical Center Drive, Rockville, MD 20850, Phone: 301-217-9197, Fax: 301-217-5736, or email [email protected]. A signed Confidential Disclosure Agreement may be required to receive copies of the patent applications.

    SUPPLEMENTARY INFORMATION:

According to the Anxiety and Depression Association of America, major depressive disorder affects 14.8 million people in America, including children, adults, and the elderly. A number of therapies currently exist to treat depression, although they have drawbacks, such as requiring weeks to take effect. One such therapy is the approved drug ketamine, which has demonstrated robust and rapid antidepressant activity. However, its use is limited by significant disadvantages, including its addictive potential and its dissociative effects, which occur even when it is administered at low doses and which limit the potential widespread use of ketamine as an antidepressant medication.

To improve the treatment of depression, it is important to understand the mechanism by which ketamine exerts its antidepressant effects. That is precisely what the NIH and UMB scientists and their collaborators are investigating. They have found that the metabolism of ketamine is critical to its antidepressant effects and that the (2R,6R)-2-amino-2-(2-chlorophenyl)-6-hydroxycyclohexanone ((2R,6R)-hydroxynorketamine (HNK)) metabolite reversed depression-like behaviors in mice without triggering the anesthetic, dissociative, or addictive side effects associated with ketamine. Specifically, the researchers found that the metabolite does not act as a non-competitive inhibitor of the glutamatergic N-methyl-D-aspartate (NMDA) receptor and that it exerts rapid actions that activate α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors. The results indicate that a mechanism independent of NMDA receptor inhibition underlies ketamine's antidepressant properties and that the bioactivity of a specific metabolite, (2R,6R)-HNK, could be exploited for drug development. Additionally, the researchers have established appropriate salt, crystal, and polymorph forms of the agent and multiple methods of synthesis. A full ADME and polypharmacology assessment is complete, as are pre-formulation studies.

To expedite the research, development, and commercialization of 2R,6R-hydroxynorketamine (a metabolite of ketamine), the National Institutes of Health, UMB, and their collaborators are seeking one or more CRADA and/or license agreements with appropriate pharmaceutical or biotechnology companies, in accordance with the regulations governing the transfer of Government-developed technology and its public sector objectives, as outlined below. The purpose of a CRADA is to find a partner to collaborate in the development and commercialization of a technology that is in the early phases of clinical development. Under the CRADA, key activities related to the clinical development of 2R,6R-HNK as a therapeutic to treat a variety of mental health conditions, including depressive disorders, will be performed. Collaborators should have proven experience in drug development, with specialized expertise in depression and/or related mental health disorders. Owing to NIH's commitment to public dissemination of data, a key criterion will be that all outcomes from the collaborative effort, including the outcomes of all clinical trials, will be published. Further, it is the goal of NIH, UMB, and the other collaborators to develop the technology to the fullest extent, as a therapeutic for multiple clinical indications including, but not limited to, anxiety, suicidal ideation, anhedonia, PTSD, addiction, and neuropathic pain.

How to Apply: Interested potential CRADA collaborators will receive detailed information on the current status of the project after signing a confidential disclosure agreement (CDA) with NIH, UMB, and the other collaborators. Interested candidate partners must submit a statement of interest and capability, no more than five pages long, to the NCATS point of contact before January 3, 2017 for consideration. Guidelines for the preparation of a full CRADA proposal will be communicated by the NIH to respondents whose demonstrated interests and capabilities indicate that the partnering entity will appropriately and substantially contribute to the proposed collaboration. Capability statements submitted after the due date may be considered if a suitable CRADA collaborator has not been identified by NIH and UMB among the initial pool of respondents.

Respondents interested in submitting a CRADA proposal should be aware that it may be necessary for them to secure a patent license to the background patent applications in order to commercialize products arising from a CRADA. The background technology patent rights related to this CRADA opportunity and claimed in the pending patent applications are available for either exclusive or non-exclusive licensing; licensing by NIH is subject to 35 U.S.C. 207 and 37 CFR part 404. CRADA partners are afforded an option to negotiate an exclusive license from the NIH for inventions arising from the performance of the CRADA research plan.

The full CRADA proposal should include a capability statement with a detailed description of: (1) The collaborator's expertise with mental health disorders such as depression; (2) the collaborator's expertise in preclinical development efforts, including toxicology and chemistry, manufacturing, and controls (CMC); (3) expertise in regulatory affairs, particularly at the IND filing and early-stage clinical trial stages; (4) the collaborator's ability to support, directly or through contract mechanisms, and upon the successful completion of relevant milestones, the ongoing pharmacokinetics and biological studies, long-term toxicity studies, process chemistry, and other preclinical development studies needed to obtain regulatory approval of a given therapy, so as to ensure a high probability of eventual successful commercialization; and (5) the collaborator's ability to provide adequate funding to support some preclinical studies of the project as well as clinical trials.

Publications

Zanos P, Moaddel R, Morris PJ, Georgiou P, Fischell J, Elmer GI, Manickavasagom A, Yuan P, Pribut HJ, Singh NS, Dossou KSS, Fang Y, Huang X-P, Mayo CL, Wainer IW, Albuquerque EX, Thompson SM, Thomas CJ, Zarate CA, Gould TD. NMDA receptor inhibition-independent antidepressant actions of a ketamine metabolite. Nature, May 4, 2016, doi: 10.1038/nature17998.

Patent Status

(1) “Use Of (2R,6R)-HNK, (S)-Dehydronorketamine and (R,S)-ketamine metabolites in the treatment of depression and neuropathic pain”; Irving W. Wainer, Ruin Moaddel, Michel Bernier, Carlos A. Zarate, Mary Tanga, Marc C. Torjman, Michael Goldberg; Assignees: National Institute on Aging (NIA), National Institute of Mental Health (NIMH), SRI International, University of Medicine and Dentistry of New Jersey (UMDNJ); U.S. Provisional Patent Application # 61/547,336; Filed: October 14, 2011; NIH Reference # E-092-2011.

(2) “Methods of using (2S,6S)-HNK and (2R,6R)-HNK to treat various depressive disorders and anxiety disorders”; Craig Thomas, Todd D. Gould, Irving W. Wainer, Carlos A. Zarate, Ruin Moaddel, Patrick Morris, Panos Zanos; Assignees: National Institute on Aging (NIA), National Institute of Mental Health (NIMH), National Center for Advancing Translational Sciences (NCATS), University of Maryland at Baltimore (UMB); U.S. Provisional Patent Application # 62/313317; Filed: March 25, 2016; NIH Reference #E-036-2016.

(3) “Crystal forms and methods of synthesis of (2R, 6R)-HNK and (2S,6S)-HNK”; Craig Thomas, Patrick Morris, Carlos A. Zarate, Ruin Moaddel, Todd D. Gould, Panos Zanos; Assignees: National Center for Advancing Translational Sciences (NCATS), National Institute of Mental Health (NIMH), National Institute on Aging (NIA), University of Maryland at Baltimore (UMB); U.S. Provisional Patent Application #62/313309; Filed: March 25, 2016; NIH Reference #E-116-2016.

    Dated: October 31, 2016. Pamela McInnes, Deputy Director, Office of the Director, National Center for Advancing Translational Sciences.
    [FR Doc. 2016-26628 Filed 11-3-16; 8:45 am] BILLING CODE 4140-01-P
    DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Center for Scientific Review; Notice of Closed Meetings

    Pursuant to section 10(d) of the Federal Advisory Committee Act, as amended (5 U.S.C. App.), notice is hereby given of the following meetings.

    The meetings will be closed to the public in accordance with the provisions set forth in sections 552b(c)(4) and 552b(c)(6), Title 5 U.S.C., as amended. The grant applications and the discussions could disclose confidential trade secrets or commercial property such as patentable material, and personal information concerning individuals associated with the grant applications, the disclosure of which would constitute a clearly unwarranted invasion of personal privacy.

    Name of Committee: Center for Scientific Review Special Emphasis Panel; SBIR: Development of Cancer Therapeutics.

    Date: December 5-6, 2016.

    Time: 8:00 a.m. to 6:00 p.m.

    Agenda: To review and evaluate grant applications.

    Place: Sheraton Reston, 11810 Sunrise Valley Drive, Reston, VA 20191.

    Contact Person: Malaya Chatterjee, Ph.D., Scientific Review Officer, Center for Scientific Review, National Institutes of Health, 6701 Rockledge Drive, Room 6192, MSC 7804, Bethesda, MD 20892, (301) 806-2515, [email protected].

    Name of Committee: Center for Scientific Review Special Emphasis Panel; Myalgic Encephalomyelitis/Chronic Fatigue Syndrome.

    Date: December 6, 2016.

    Time: 2:00 p.m. to 4:00 p.m.

    Agenda: To review and evaluate grant applications.

    Place: National Institutes of Health, 6701 Rockledge Drive, Bethesda, MD 20892 (Telephone Conference Call).

    Contact Person: M. Catherine Bennett, Ph.D., Scientific Review Officer, Center for Scientific Review, National Institutes of Health, 6701 Rockledge Drive, Room 5182, MSC 7846, Bethesda, MD 20892, 301-435-1766, [email protected].

    Name of Committee: Center for Scientific Review Special Emphasis Panel; Member Conflict: Mechanisms of Neurogenesis, Cell Fate and Maturation, and Degeneration.

    Date: December 7, 2016.

    Time: 10:00 a.m. to 6:00 p.m.

    Agenda: To review and evaluate grant applications.

    Place: National Institutes of Health, 6701 Rockledge Drive, Bethesda, MD 20892 (Virtual Meeting).

    Contact Person: Linda MacArthur, Ph.D., Scientific Review Officer, Center for Scientific Review, National Institutes of Health, 6701 Rockledge Drive, Room 4187, Bethesda, MD 20892, 301-537-9986, [email protected].

    Name of Committee: Center for Scientific Review Special Emphasis Panel; HIV/AIDS Innovative Research Applications.

    Date: December 7-8, 2016.

    Time: 10:00 a.m. to 5:00 p.m.

    Agenda: To review and evaluate grant applications.

    Place: National Institutes of Health, 6701 Rockledge Drive, Bethesda, MD 20892 (Virtual Meeting).

    Contact Person: Jingsheng Tuo, Ph.D., Scientific Review Officer, Center for Scientific Review, National Institutes of Health, 6701 Rockledge Drive, Room 5207, Bethesda, MD 20892, 301-451-8754, [email protected].

    Name of Committee: Center for Scientific Review Special Emphasis Panel; The Biomedical Technology Research Resource for Macromolecular Modeling and Bioinformatics.

    Date: December 7-9, 2016.

    Time: 4:00 p.m. to 2:00 p.m.

    Agenda: To review and evaluate grant applications.

    Place: Wyndham Garden Urbana Champaign Hotel, 1001 W Killarney Street, Urbana, IL 61801.

    Contact Person: Nitsa Rosenzweig, Ph.D., Scientific Review Officer, Center for Scientific Review, National Institutes of Health, 6701 Rockledge Drive, Room 4152, MSC 7760, Bethesda, MD 20892, (301) 404-7419, [email protected].

    (Catalogue of Federal Domestic Assistance Program Nos. 93.306, Comparative Medicine; 93.333, Clinical Research, 93.306, 93.333, 93.337, 93.393-93.396, 93.837-93.844, 93.846-93.878, 93.892, 93.893, National Institutes of Health, HHS)
    Dated: November 1, 2016. Natasha M. Copeland, Program Analyst, Office of Federal Advisory Committee Policy.
    [FR Doc. 2016-26770 Filed 11-3-16; 8:45 am] BILLING CODE 4140-01-P
    DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Proposed Collection; 60-Day Comment Request; The Atherosclerosis Risk in Communities Study (National Heart Lung and Blood Institute) AGENCY:

    National Institutes of Health, HHS.

    ACTION:

    Notice.

    SUMMARY:

In compliance with the requirement of the Paperwork Reduction Act of 1995 to provide opportunity for public comment on proposed data collection projects, the National Institutes of Health, National Heart, Lung, and Blood Institute (NHLBI) will publish periodic summaries of proposed projects to be submitted to the Office of Management and Budget (OMB) for review and approval.

    DATES:

    Comments regarding this information collection are best assured of having their full effect if received within 60 days of the date of this publication.

    FOR FURTHER INFORMATION CONTACT:

To obtain a copy of the data collection plans and instruments, submit comments in writing, or request more information on the proposed project, contact: Dr. Jacqueline Wright, 6701 Rockledge Drive, MSC 7936, Bethesda, MD 20892, or call non-toll-free number (301) 435-0384, or Email your request to: [email protected]. Requests for additional plans and instruments must be made in writing.

    SUPPLEMENTARY INFORMATION:

    Section 3506(c)(2)(A) of the Paperwork Reduction Act of 1995 requires: Written comments and/or suggestions from the public and affected agencies are invited to address one or more of the following points: (1) Whether the proposed collection of information is necessary for the proper performance of the function of the agency, including whether the information will have practical utility; (2) The accuracy of the agency's estimate of the burden of the proposed collection of information, including the validity of the methodology and assumptions used; (3) Ways to enhance the quality, utility, and clarity of the information to be collected; and (4) Ways to minimize the burden of the collection of information on those who are to respond, including the use of appropriate automated, electronic, mechanical, or other technological collection techniques or other forms of information technology.

    Proposed Collection Title: The Atherosclerosis Risk in Communities Study, 0925-0281, REVISION, National Heart, Lung, and Blood Institute (NHLBI), the National Institutes of Health (NIH).

Need and Use of Information Collection: The ARIC study was initiated in 1985 to examine the major factors contributing to the occurrence of, and trends in, cardiovascular diseases among men, women, African Americans, and white persons in four U.S. communities: Forsyth County, North Carolina; Jackson, Mississippi; the suburbs of Minneapolis, Minnesota; and Washington County, Maryland. The cohort in Jackson is selected to represent only African American residents of the city. The primary objectives of the study are to: (1) Investigate factors associated with both atherosclerosis and clinical cardiovascular diseases and (2) measure the occurrence of and trends in coronary heart disease (CHD) and relate them to community levels of risk factors, medical care, and atherosclerosis. Specific activities for this revision of ARIC are continued telephone follow-up of the ARIC cohort, with twice-yearly calls to identify new cardiovascular events and hospitalizations, to update information about risk factors, and to obtain information on access to and use of medical care for heart failure risk factors and heart failure, and re-examination of the surviving ARIC cohort (target n = 5,300) over a 21-month period.

    OMB approval is requested for 3 years. There are no costs to respondents other than their time. The total estimated annualized burden hours are 23,289.

Estimated Annualized Burden Hours

Type of response | Number of respondents | Number of responses per respondent | Average time per response (hours per year) | Total annual burden hours
Participant:
a. Recruitment and Phone Contact (Attachment 1) | 7,903 | 1 | 15/60 | 1,976
b. Clinic Examination (Attachment 7) * | 5,572 | 1 | 100/60 | 9,287
c. Annual Follow-up Form (Attachment 8) | 7,903 | 6 | 8/60 | 6,322
d. Semiannual Follow-up Form (Attachment 9) | 7,903 | 6 | 7/60 | 5,532
Subtotal (Participant) | 7,903 | 108,311 total responses | | 23,117
Non-Participant:
a. Coroner/Medical Examiner Form (Attachment 10) | 372 | 1 | 10/60 | 62
b. Informant Interview Form (Attachment 11) | 372 | 1 | 10/60 | 62
c. Heart Failure Survey (Attachment 12) | 100 | 1 | 10/60 | 17
d. Physician Questionnaire Form (Attachment 13) | 372 | 1 | 5/60 | 31
Subtotal (Non-Participant) | 1,216 | 1,216 total responses | | 172
Total (Participant and Non-Participant) | 9,119 | 109,527 total responses | | 23,289

* Participants included in item a.
    Dated: October 31, 2016. Valery Gheen, NHLBI Project Clearance Liaison, National Institutes of Health.
    [FR Doc. 2016-26627 Filed 11-3-16; 8:45 am] BILLING CODE 4140-01-P
    DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health National Institute of Mental Health Amended Notice of Meeting

    Notice is hereby given of a change in the meeting of the National Institute of Mental Health Special Emphasis Panel, November 2, 2016, 08:30 a.m. to November 2, 2016, 05:00 p.m., Washington Marriott Georgetown, 1221 22nd Street NW., Washington, DC 20037 which was published in the Federal Register on October 13, 2016, 81 FR 70693.

The meeting notice is amended to change the location to the Marriott Wardman Park Washington DC Hotel, 2660 Woodley Road NW., Washington, DC 20008. The meeting is closed to the public.

    Dated: October 28, 2016. Carolyn A. Baum, Program Analyst, Office of Federal Advisory Committee Policy.
    [FR Doc. 2016-26625 Filed 11-3-16; 8:45 am] BILLING CODE 4140-01-P
    DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Eunice Kennedy Shriver National Institute of Child Health and Human Development; Notice of Meeting

    Pursuant to section 10(d) of the Federal Advisory Committee Act, as amended (5 U.S.C. App.), notice is hereby given of a meeting of the Board of Scientific Counselors, NICHD.

    The meeting will be open to the public as indicated below, with the attendance limited to space available. Individuals who plan to attend and need special assistance, such as sign language interpretation or other reasonable accommodations, should notify the Contact Person listed below in advance of the meeting.

    The meeting will be closed to the public as indicated below in accordance with the provisions set forth in section 552b(c)(6), Title 5 U.S.C., as amended for the review, discussion, and evaluation of individual intramural programs and projects conducted by the EUNICE KENNEDY SHRIVER NATIONAL INSTITUTE OF CHILD HEALTH AND HUMAN DEVELOPMENT, including consideration of personnel qualifications and performance, and the competence of individual investigators, the disclosure of which would constitute a clearly unwarranted invasion of personal privacy.

    Name of Committee: Board of Scientific Counselors, NICHD.

    Date: December 2, 2016.

    Open: 8:00 a.m. to 12:15 p.m.

    Agenda: A report by the Scientific Director, NICHD, on the status of the NICHD Division of Intramural Research, talks by various intramural scientists, and current organizational structure.

    Place: National Institutes of Health, Building 31A, Conference Room 2A48, 31 Center Drive, Bethesda, MD 20892.

    Closed: 12:15 p.m. to 4:00 p.m.

Agenda: To review and evaluate personnel qualifications and performance, and competence of individual investigators.

    Place: National Institutes of Health, Building 31A, Conference Room 2A48, 31 Center Drive, Bethesda, MD 20892.

    Contact Person: Constantine A. Stratakis, MD, D(med)Sci, Scientific Director, Eunice Kennedy Shriver National Institute of Child Health and Human Development, NIH, Building 31A, Room 2A46, 31 Center Drive, Bethesda, MD 20892, 301-594-5984, [email protected].

Information is also available on the Institute's/Center's home page: https://www.nichd.nih.gov/about/meetings/Pages/index.aspx, where an agenda and any additional information for the meeting will be posted when available. (Catalogue of Federal Domestic Assistance Program Nos. 93.864, Population Research; 93.865, Research for Mothers and Children; 93.929, Center for Medical Rehabilitation Research; 93.209, Contraception and Infertility Loan Repayment Program, National Institutes of Health, HHS)
    Dated: October 28, 2016. Michelle Trout, Program Analyst, Office of Federal Advisory Committee Policy.
    [FR Doc. 2016-26624 Filed 11-3-16; 8:45 am] BILLING CODE 4140-01-P
    DEPARTMENT OF HOMELAND SECURITY Coast Guard [Docket No. USCG-2016-0848] National Offshore Safety Advisory Committee; Vacancies AGENCY:

    Coast Guard, Department of Homeland Security.

    ACTION:

    Request for applications.

    SUMMARY:

    The Coast Guard seeks applications for membership on the National Offshore Safety Advisory Committee. The National Offshore Safety Advisory Committee advises the Secretary of the Department of Homeland Security on matters and actions concerning activities directly involved with or in support of the exploration of offshore mineral and energy resources insofar as they relate to matters within Coast Guard jurisdiction. Applicants selected for service on the National Offshore Safety Advisory Committee via this solicitation will not begin their respective terms until January 31, 2018.

    DATES:

    Completed applications should reach the Coast Guard on or before January 3, 2017.

    ADDRESSES:

    Applicants should send a cover letter expressing interest in an appointment to the National Offshore Safety Advisory Committee that also identifies under which membership category the applicant is applying, along with a resume detailing the applicant's experience via one of the following methods:

    By Email: [email protected].

    By Fax: (202) 372-8382.

    By Mail: Mr. Patrick W. Clark, Alternate Designated Federal Officer of the National Offshore Safety Advisory Committee, Commandant, (CG-OES-2)/NOSAC U.S. Coast Guard, 2703 Martin Luther King Jr. Avenue SE., STOP 7509, Washington, DC 20593-7509.

    FOR FURTHER INFORMATION CONTACT:

    Mr. Patrick Clark, Alternate Designated Federal Officer of the National Offshore Safety Advisory Committee, Commandant, (CG-OES-2)/NOSAC U.S. Coast Guard, 2703 Martin Luther King Jr. Avenue SE., STOP 7509, Washington, DC 20593-7509; email [email protected]; telephone (202) 372-1358; fax (202) 372-8382.

    SUPPLEMENTARY INFORMATION:

    The National Offshore Safety Advisory Committee is a federal advisory committee established in accordance with the provisions of the Federal Advisory Committee Act (Title 5 U.S.C. Appendix) to advise the Secretary of the Department of Homeland Security on matters and actions concerning activities directly involved with or in support of the exploration of offshore mineral and energy resources insofar as they relate to matters within Coast Guard jurisdiction.

The Committee normally meets twice a year: once in April in New Orleans, Louisiana, and then in November in Houston, Texas. Each National Offshore Safety Advisory Committee member serves a term of office of up to three (3) years. Members may serve a maximum of two (2) consecutive terms. All members serve at their own expense and receive no salary, reimbursement of travel expenses, or other compensation from the Federal Government.

    We will consider applications for the 5 positions listed below that will be vacant on January 31, 2018:

    (a) One member representing companies, organizations, enterprises or similar entities engaged in offshore drilling;

    (b) One member representing companies, organizations, enterprises or similar entities engaged in the production of petroleum;

    (c) One member representing companies, organizations, enterprises or similar entities engaged in the construction of offshore facilities;

    (d) One member representing companies, organizations, enterprises or similar entities engaged in the support, by offshore supply vessel or other vessels, of offshore operations; and,

    (e) One member representing employees of companies, organizations, enterprises or similar entities engaged in offshore operations, who should have recent practical experience on vessels or units involved in the offshore industry.

To be eligible, applicants for positions (a)-(e) should be employed by companies, organizations, enterprises or similar entities associated with the exploration for, and the recovery of, oil, gas, and other mineral resources on the U.S. Outer Continental Shelf; and have expertise, knowledge, and experience regarding the technology, equipment, and techniques that are used or are being developed for use in the exploration for, and the recovery of, offshore mineral resources.

The Department of Homeland Security does not discriminate in selection of Committee members on the basis of race, color, religion, sex, national origin, political affiliation, sexual orientation, gender identity, marital status, disability, genetic information, age, membership in an employee organization, or any other non-merit factor. The Department of Homeland Security strives to achieve a widely diverse candidate pool for all of its recruitment actions.

    If you are interested in applying to become a member of the Committee, send your cover letter and resume to Mr. Patrick Clark, Alternate Designated Federal Officer of the National Offshore Safety Advisory Committee, via one of the transmittal methods in the ADDRESSES section by the deadline in the DATES section of this notice. All email submittals will receive email receipt confirmation.

    Dated: October 31, 2016. J.G. Lantz, Director of Commercial Regulations and Standards.
    [FR Doc. 2016-26651 Filed 11-3-16; 8:45 am] BILLING CODE 9110-04-P
    DEPARTMENT OF HOMELAND SECURITY Federal Emergency Management Agency [Internal Agency Docket No. FEMA-3377-EM; Docket ID FEMA-2016-0001] Florida; Amendment No. 1 to Notice of an Emergency Declaration AGENCY:

    Federal Emergency Management Agency, DHS.

    ACTION:

    Notice.

    SUMMARY:

    This notice amends the notice of an emergency declaration for the State of Florida (FEMA-3377-EM), dated October 6, 2016, and related determinations.

    DATES:

    Effective October 19, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Dean Webster, Office of Response and Recovery, Federal Emergency Management Agency, 500 C Street SW., Washington, DC 20472, (202) 646-2833.

    SUPPLEMENTARY INFORMATION:

    Notice is hereby given that the incident period for this emergency is closed effective October 19, 2016.

    The following Catalog of Federal Domestic Assistance Numbers (CFDA) are to be used for reporting and drawing funds: 97.030, Community Disaster Loans; 97.031, Cora Brown Fund; 97.032, Crisis Counseling; 97.033, Disaster Legal Services; 97.034, Disaster Unemployment Assistance (DUA); 97.046, Fire Management Assistance Grant; 97.048, Disaster Housing Assistance to Individuals and Households In Presidentially Declared Disaster Areas; 97.049, Presidentially Declared Disaster Assistance—Disaster Housing Operations for Individuals and Households; 97.050, Presidentially Declared Disaster Assistance to Individuals and Households—Other Needs; 97.036, Disaster Grants—Public Assistance (Presidentially Declared Disasters); 97.039, Hazard Mitigation Grant.

    W. Craig Fugate, Administrator, Federal Emergency Management Agency.
    [FR Doc. 2016-26736 Filed 11-3-16; 8:45 am] BILLING CODE 9111-23-P
    DEPARTMENT OF HOMELAND SECURITY Federal Emergency Management Agency [Internal Agency Docket No. FEMA-4286-DR; Docket ID FEMA-2016-0001] South Carolina; Amendment No. 1 to Notice of a Major Disaster Declaration AGENCY:

    Federal Emergency Management Agency, DHS.

    ACTION:

    Notice.

    SUMMARY:

    This notice amends the notice of a major disaster declaration for the State of South Carolina (FEMA-4286-DR), dated October 11, 2016, and related determinations.

    DATES:

    Effective October 14, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Dean Webster, Office of Response and Recovery, Federal Emergency Management Agency, 500 C Street SW., Washington, DC 20472, (202) 646-2833.

    SUPPLEMENTARY INFORMATION:

    The notice of a major disaster declaration for the State of South Carolina is hereby amended to include the Individual Assistance program for the following areas among those areas determined to have been adversely affected by the event declared a major disaster by the President in his declaration of October 11, 2016.

    Marion County for Individual Assistance (already designated for assistance for debris removal and emergency protective measures [Categories A and B], including direct federal assistance, under the Public Assistance program).

    Orangeburg County for Individual Assistance.

The following Catalog of Federal Domestic Assistance Numbers (CFDA) are to be used for reporting and drawing funds: 97.030, Community Disaster Loans; 97.031, Cora Brown Fund; 97.032, Crisis Counseling; 97.033, Disaster Legal Services; 97.034, Disaster Unemployment Assistance (DUA); 97.046, Fire Management Assistance Grant; 97.048, Disaster Housing Assistance to Individuals and Households In Presidentially Declared Disaster Areas; 97.049, Presidentially Declared Disaster Assistance—Disaster Housing Operations for Individuals and Households; 97.050, Presidentially Declared Disaster Assistance to Individuals and Households—Other Needs; 97.036, Disaster Grants—Public Assistance (Presidentially Declared Disasters); 97.039, Hazard Mitigation Grant.

    W. Craig Fugate, Administrator, Federal Emergency Management Agency.
    [FR Doc. 2016-26720 Filed 11-3-16; 8:45 am] BILLING CODE 9111-23-P
    DEPARTMENT OF HOMELAND SECURITY Federal Emergency Management Agency [Internal Agency Docket No. FEMA-4286-DR; Docket ID FEMA-2016-0001] South Carolina; Amendment No. 5 to Notice of a Major Disaster Declaration AGENCY:

    Federal Emergency Management Agency, DHS.

    ACTION:

    Notice.

    SUMMARY:

    This notice amends the notice of a major disaster declaration for the State of South Carolina (FEMA-4286-DR), dated October 11, 2016, and related determinations.

    DATES:

    Effective October 25, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Dean Webster, Office of Response and Recovery, Federal Emergency Management Agency, 500 C Street SW., Washington, DC 20472, (202) 646-2833.

    SUPPLEMENTARY INFORMATION:

    The notice of a major disaster declaration for the State of South Carolina is hereby amended to include the following areas among those areas determined to have been adversely affected by the event declared a major disaster by the President in his declaration of October 11, 2016.

    Berkeley County for Individual Assistance (already designated for Public Assistance).

Charleston County for Individual Assistance (already designated for assistance for debris removal and emergency protective measures [Categories A and B], including direct federal assistance, under the Public Assistance program).

    Chesterfield County for Individual Assistance.

The following Catalog of Federal Domestic Assistance Numbers (CFDA) are to be used for reporting and drawing funds: 97.030, Community Disaster Loans; 97.031, Cora Brown Fund; 97.032, Crisis Counseling; 97.033, Disaster Legal Services; 97.034, Disaster Unemployment Assistance (DUA); 97.046, Fire Management Assistance Grant; 97.048, Disaster Housing Assistance to Individuals and Households In Presidentially Declared Disaster Areas; 97.049, Presidentially Declared Disaster Assistance—Disaster Housing Operations for Individuals and Households; 97.050, Presidentially Declared Disaster Assistance to Individuals and Households—Other Needs; 97.036, Disaster Grants—Public Assistance (Presidentially Declared Disasters); 97.039, Hazard Mitigation Grant.

    W. Craig Fugate, Administrator, Federal Emergency Management Agency.
    [FR Doc. 2016-26725 Filed 11-3-16; 8:45 am] BILLING CODE 9111-23-P
    DEPARTMENT OF HOMELAND SECURITY Federal Emergency Management Agency [Internal Agency Docket No. FEMA-4288-DR; Docket ID FEMA-2016-0001] Wisconsin; Major Disaster and Related Determinations AGENCY:

    Federal Emergency Management Agency, DHS.

    ACTION:

    Notice.

    SUMMARY:

    This is a notice of the Presidential declaration of a major disaster for the State of Wisconsin (FEMA-4288-DR), dated October 20, 2016, and related determinations.

    DATES:

    Effective October 20, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Dean Webster, Office of Response and Recovery, Federal Emergency Management Agency, 500 C Street SW., Washington, DC 20472, (202) 646-2833.

    SUPPLEMENTARY INFORMATION:

    Notice is hereby given that, in a letter dated October 20, 2016, the President issued a major disaster declaration under the authority of the Robert T. Stafford Disaster Relief and Emergency Assistance Act, 42 U.S.C. 5121 et seq. (the “Stafford Act”), as follows:

    I have determined that the damage in certain areas of the State of Wisconsin resulting from severe storms, flooding, and mudslides during the period of September 21-22, 2016, is of sufficient severity and magnitude to warrant a major disaster declaration under the Robert T. Stafford Disaster Relief and Emergency Assistance Act, 42 U.S.C. 5121 et seq. (the “Stafford Act”). Therefore, I declare that such a major disaster exists in the State of Wisconsin.

    In order to provide Federal assistance, you are hereby authorized to allocate from funds available for these purposes such amounts as you find necessary for Federal disaster assistance and administrative expenses.

    You are authorized to provide Public Assistance in the designated areas and Hazard Mitigation throughout the State. Consistent with the requirement that Federal assistance be supplemental, any Federal funds provided under the Stafford Act for Hazard Mitigation will be limited to 75 percent of the total eligible costs. Federal funds provided under the Stafford Act for Public Assistance also will be limited to 75 percent of the total eligible costs, with the exception of projects that meet the eligibility criteria for a higher Federal cost-sharing percentage under the Public Assistance Alternative Procedures Pilot Program for Debris Removal implemented pursuant to section 428 of the Stafford Act.

    Further, you are authorized to make changes to this declaration for the approved assistance to the extent allowable under the Stafford Act.

    The Federal Emergency Management Agency (FEMA) hereby gives notice that pursuant to the authority vested in the Administrator, under Executive Order 12148, as amended, Benigno Bern Ruiz, of FEMA is appointed to act as the Federal Coordinating Officer for this major disaster.

    The following areas of the State of Wisconsin have been designated as adversely affected by this major disaster:

    Adams, Chippewa, Clark, Crawford, Jackson, Juneau, La Crosse, Monroe, Richland, and Vernon Counties for Public Assistance.

    All areas within the State of Wisconsin are eligible for assistance under the Hazard Mitigation Grant Program.

    The following Catalog of Federal Domestic Assistance Numbers (CFDA) are to be used for reporting and drawing funds: 97.030, Community Disaster Loans; 97.031, Cora Brown Fund; 97.032, Crisis Counseling; 97.033, Disaster Legal Services; 97.034, Disaster Unemployment Assistance (DUA); 97.046, Fire Management Assistance Grant; 97.048, Disaster Housing Assistance to Individuals and Households In Presidentially Declared Disaster Areas; 97.049, Presidentially Declared Disaster Assistance—Disaster Housing Operations for Individuals and Households; 97.050, Presidentially Declared Disaster Assistance to Individuals and Households—Other Needs; 97.036, Disaster Grants—Public Assistance (Presidentially Declared Disasters); 97.039, Hazard Mitigation Grant.

    W. Craig Fugate, Administrator, Federal Emergency Management Agency.
    [FR Doc. 2016-26727 Filed 11-3-16; 8:45 am] BILLING CODE 9111-23-P
    DEPARTMENT OF HOMELAND SECURITY Federal Emergency Management Agency [Internal Agency Docket No. FEMA-3379-EM]; [Docket ID FEMA-2016-0001] Georgia; Amendment No. 1 to Notice of an Emergency Declaration AGENCY:

    Federal Emergency Management Agency, DHS.

    ACTION:

    Notice.

    SUMMARY:

    This notice amends the notice of an emergency declaration for the State of Georgia (FEMA-3379-EM), dated October 6, 2016, and related determinations.

    DATES:

    Effective October 15, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Dean Webster, Office of Response and Recovery, Federal Emergency Management Agency, 500 C Street SW., Washington, DC 20472, (202) 646-2833.

    SUPPLEMENTARY INFORMATION:

    Notice is hereby given that the incident period for this emergency is closed effective October 15, 2016.

    The following Catalog of Federal Domestic Assistance Numbers (CFDA) are to be used for reporting and drawing funds: 97.030, Community Disaster Loans; 97.031, Cora Brown Fund; 97.032, Crisis Counseling; 97.033, Disaster Legal Services; 97.034, Disaster Unemployment Assistance (DUA); 97.046, Fire Management Assistance Grant; 97.048, Disaster Housing Assistance to Individuals and Households In Presidentially Declared Disaster Areas; 97.049, Presidentially Declared Disaster Assistance—Disaster Housing Operations for Individuals and Households; 97.050, Presidentially Declared Disaster Assistance to Individuals and Households—Other Needs; 97.036, Disaster Grants—Public Assistance (Presidentially Declared Disasters); 97.039, Hazard Mitigation Grant.

    W. Craig Fugate, Administrator, Federal Emergency Management Agency.
    [FR Doc. 2016-26730 Filed 11-3-16; 8:45 am] BILLING CODE 9111-23-P
    DEPARTMENT OF HOMELAND SECURITY Federal Emergency Management Agency [Internal Agency Docket No. FEMA-4280-DR]; [Docket ID FEMA-2016-0001] Florida; Amendment No. 2 to Notice of a Major Disaster Declaration AGENCY:

    Federal Emergency Management Agency, DHS.

    ACTION:

    Notice.

    SUMMARY:

    This notice amends the notice of a major disaster declaration for the State of Florida (FEMA-4280-DR), dated September 28, 2016, and related determinations.

    DATES:

    Effective Date: October 21, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Dean Webster, Office of Response and Recovery, Federal Emergency Management Agency, 500 C Street SW., Washington, DC 20472, (202) 646-2833.

    SUPPLEMENTARY INFORMATION:

    The notice of a major disaster declaration for the State of Florida is hereby amended to include the following areas among those areas determined to have been adversely affected by the event declared a major disaster by the President in his declaration of September 28, 2016.

    Columbia and Gadsden Counties for Public Assistance. Hernando County for Public Assistance (already designated for Individual Assistance).

The following Catalog of Federal Domestic Assistance Numbers (CFDA) are to be used for reporting and drawing funds: 97.030, Community Disaster Loans; 97.031, Cora Brown Fund; 97.032, Crisis Counseling; 97.033, Disaster Legal Services; 97.034, Disaster Unemployment Assistance (DUA); 97.046, Fire Management Assistance Grant; 97.048, Disaster Housing Assistance to Individuals and Households In Presidentially Declared Disaster Areas; 97.049, Presidentially Declared Disaster Assistance—Disaster Housing Operations for Individuals and Households; 97.050, Presidentially Declared Disaster Assistance to Individuals and Households—Other Needs; 97.036, Disaster Grants—Public Assistance (Presidentially Declared Disasters); 97.039, Hazard Mitigation Grant.

    W. Craig Fugate, Administrator, Federal Emergency Management Agency.
    [FR Doc. 2016-26728 Filed 11-3-16; 8:45 am] BILLING CODE 9111-23-P
    DEPARTMENT OF HOMELAND SECURITY Federal Emergency Management Agency [Internal Agency Docket No. FEMA-4286-DR]; [Docket ID FEMA-2016-0001] South Carolina; Amendment No. 6 to Notice of a Major Disaster Declaration AGENCY:

    Federal Emergency Management Agency, DHS.

    ACTION:

    Notice.

    SUMMARY:

    This notice amends the notice of a major disaster declaration for the State of South Carolina (FEMA-4286-DR), dated October 11, 2016, and related determinations.

    DATES:

    Effective Date: October 25, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Dean Webster, Office of Response and Recovery, Federal Emergency Management Agency, 500 C Street SW., Washington, DC 20472, (202) 646-2833.

    SUPPLEMENTARY INFORMATION:

    The notice of a major disaster declaration for the State of South Carolina is hereby amended to include the following areas among those areas determined to have been adversely affected by the event declared a major disaster by the President in his declaration of October 11, 2016.

    Chesterfield County for Public Assistance (already designated for Individual Assistance).

    Kershaw and Richland Counties for Public Assistance.

    Calhoun, Charleston, Clarendon, Darlington, and Marlboro Counties for Public Assistance [Categories C-G] (already designated for Individual Assistance and assistance for debris removal and emergency protective measures [Categories A and B], including direct federal assistance, under the Public Assistance program).

The following Catalog of Federal Domestic Assistance Numbers (CFDA) are to be used for reporting and drawing funds: 97.030, Community Disaster Loans; 97.031, Cora Brown Fund; 97.032, Crisis Counseling; 97.033, Disaster Legal Services; 97.034, Disaster Unemployment Assistance (DUA); 97.046, Fire Management Assistance Grant; 97.048, Disaster Housing Assistance to Individuals and Households In Presidentially Declared Disaster Areas; 97.049, Presidentially Declared Disaster Assistance—Disaster Housing Operations for Individuals and Households; 97.050, Presidentially Declared Disaster Assistance to Individuals and Households—Other Needs; 97.036, Disaster Grants—Public Assistance (Presidentially Declared Disasters); 97.039, Hazard Mitigation Grant.

    W. Craig Fugate, Administrator, Federal Emergency Management Agency.
    [FR Doc. 2016-26722 Filed 11-3-16; 8:45 am] BILLING CODE 9111-23-P
    DEPARTMENT OF HOMELAND SECURITY Federal Emergency Management Agency [Internal Agency Docket No. FEMA-4286-DR]; [Docket ID FEMA-2016-0001] South Carolina; Amendment No. 4 to Notice of a Major Disaster Declaration AGENCY:

    Federal Emergency Management Agency, DHS.

    ACTION:

    Notice.

    SUMMARY:

    This notice amends the notice of a major disaster declaration for the State of South Carolina (FEMA-4286-DR), dated October 11, 2016, and related determinations.

    DATES:

    Effective October 19, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Dean Webster, Office of Response and Recovery, Federal Emergency Management Agency, 500 C Street SW., Washington, DC 20472, (202) 646-2833.

    SUPPLEMENTARY INFORMATION:

    The notice of a major disaster declaration for the State of South Carolina is hereby amended to include the following areas among those areas determined to have been adversely affected by the event declared a major disaster by the President in his declaration of October 11, 2016.

    Calhoun, Clarendon, and Marlboro Counties for Individual Assistance and assistance for debris removal and emergency protective measures (Categories A and B), including direct federal assistance, under the Public Assistance program.

Horry County for Individual Assistance (already designated for Public Assistance).

The following Catalog of Federal Domestic Assistance Numbers (CFDA) are to be used for reporting and drawing funds: 97.030, Community Disaster Loans; 97.031, Cora Brown Fund; 97.032, Crisis Counseling; 97.033, Disaster Legal Services; 97.034, Disaster Unemployment Assistance (DUA); 97.046, Fire Management Assistance Grant; 97.048, Disaster Housing Assistance to Individuals and Households In Presidentially Declared Disaster Areas; 97.049, Presidentially Declared Disaster Assistance—Disaster Housing Operations for Individuals and Households; 97.050, Presidentially Declared Disaster Assistance to Individuals and Households—Other Needs; 97.036, Disaster Grants—Public Assistance (Presidentially Declared Disasters); 97.039, Hazard Mitigation Grant.

    W. Craig Fugate, Administrator, Federal Emergency Management Agency.
    [FR Doc. 2016-26726 Filed 11-3-16; 8:45 am] BILLING CODE 9111-23-P
    DEPARTMENT OF HOMELAND SECURITY Federal Emergency Management Agency [Internal Agency Docket No. FEMA-4286-DR]; [Docket ID FEMA-2016-0001] South Carolina; Amendment No. 2 to Notice of a Major Disaster Declaration AGENCY:

    Federal Emergency Management Agency, DHS.

    ACTION:

    Notice.

    SUMMARY:

    This notice amends the notice of a major disaster declaration for the State of South Carolina (FEMA-4286-DR), dated October 11, 2016, and related determinations.

    DATES:

    Effective October 17, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Dean Webster, Office of Response and Recovery, Federal Emergency Management Agency, 500 C Street SW., Washington, DC 20472, (202) 646-2833.

    SUPPLEMENTARY INFORMATION:

    The notice of a major disaster declaration for the State of South Carolina is hereby amended to include the following areas among those areas determined to have been adversely affected by the event declared a major disaster by the President in his declaration of October 11, 2016.

    Allendale, Bamberg, Barnwell, Hampton, Lee, and Sumter Counties for Individual Assistance and assistance for debris removal and emergency protective measures (Categories A and B), including direct federal assistance, under the Public Assistance program.

    Beaufort, Colleton, Darlington, Dillon, Dorchester, Florence, Georgetown, Jasper, and Williamsburg Counties for Individual Assistance (already designated for assistance for debris removal and emergency protective measures [Categories A and B], including direct federal assistance, under the Public Assistance program).

    The following Catalog of Federal Domestic Assistance Numbers (CFDA) are to be used for reporting and drawing funds: 97.030, Community Disaster Loans; 97.031, Cora Brown Fund; 97.032, Crisis Counseling; 97.033, Disaster Legal Services; 97.034, Disaster Unemployment Assistance (DUA); 97.046, Fire Management Assistance Grant; 97.048, Disaster Housing Assistance to Individuals and Households In Presidentially Declared Disaster Areas; 97.049, Presidentially Declared Disaster Assistance—Disaster Housing Operations for Individuals and Households; 97.050, Presidentially Declared Disaster Assistance to Individuals and Households—Other Needs; 97.036, Disaster Grants—Public Assistance (Presidentially Declared Disasters); 97.039, Hazard Mitigation Grant.

    W. Craig Fugate, Administrator, Federal Emergency Management Agency.
    [FR Doc. 2016-26719 Filed 11-3-16; 8:45 am] BILLING CODE 9111-23-P
    DEPARTMENT OF HOMELAND SECURITY Federal Emergency Management Agency [Internal Agency Docket No. FEMA-4286-DR]; [Docket ID FEMA-2016-0001] South Carolina; Amendment No. 3 to Notice of a Major Disaster Declaration AGENCY:

    Federal Emergency Management Agency, DHS.

    ACTION:

    Notice.

    SUMMARY:

    This notice amends the notice of a major disaster declaration for the State of South Carolina (FEMA-4286-DR), dated October 11, 2016, and related determinations.

    DATES:

    Effective Date: October 18, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Dean Webster, Office of Response and Recovery, Federal Emergency Management Agency, 500 C Street SW., Washington, DC 20472, (202) 646-2833.

    SUPPLEMENTARY INFORMATION:

    The notice of a major disaster declaration for the State of South Carolina is hereby amended to include the following areas among those areas determined to have been adversely affected by the event declared a major disaster by the President in his declaration of October 11, 2016.

    Allendale, Bamberg, Barnwell, Beaufort, Colleton, Dillon, Dorchester, Florence, Georgetown, Hampton, Jasper, Lee, Marion, Sumter, and Williamsburg Counties for Public Assistance [Categories C-G] (already designated for Individual Assistance and assistance for debris removal and emergency protective measures [Categories A and B], including direct federal assistance, under the Public Assistance program).

    Berkeley and Horry Counties for Public Assistance [Categories C-G] (already designated for assistance for debris removal and emergency protective measures [Categories A and B], including direct federal assistance, under the Public Assistance program).

    Orangeburg County for Public Assistance (already designated for Individual Assistance).

    The following Catalog of Federal Domestic Assistance Numbers (CFDA) are to be used for reporting and drawing funds: 97.030, Community Disaster Loans; 97.031, Cora Brown Fund; 97.032, Crisis Counseling; 97.033, Disaster Legal Services; 97.034, Disaster Unemployment Assistance (DUA); 97.046, Fire Management Assistance Grant; 97.048, Disaster Housing Assistance to Individuals and Households In Presidentially Declared Disaster Areas; 97.049, Presidentially Declared Disaster Assistance—Disaster Housing Operations for Individuals and Households; 97.050, Presidentially Declared Disaster Assistance to Individuals and Households—Other Needs; 97.036, Disaster Grants—Public Assistance (Presidentially Declared Disasters); 97.039, Hazard Mitigation Grant.

    W. Craig Fugate, Administrator, Federal Emergency Management Agency.
    [FR Doc. 2016-26721 Filed 11-3-16; 8:45 am] BILLING CODE 9111-23-P
    DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT [Docket No. FR-5915-N-13] 60-Day Notice of Proposed Information Collection: Improving the Speed of Housing Recovery Program Launch After Severe Disasters AGENCY:

    Office of Policy Development and Research, HUD.

    ACTION:

    Notice.

    SUMMARY:

    HUD is seeking approval from the Office of Management and Budget (OMB) for the information collection described below. In accordance with the Paperwork Reduction Act, HUD is requesting comment from all interested parties on the proposed collection of information. The purpose of this notice is to allow for 60 days of public comment.

    DATES:

    Comments Due Date: January 3, 2017.

    ADDRESSES:

    Interested persons are invited to submit comments regarding this proposal. Comments should refer to the proposal by name and/or OMB Control Number and should be sent to: Anna P. Guido, Reports Management Officer, QDAM, Department of Housing and Urban Development, 451 7th Street SW., Room 4176, Washington, DC 20410-5000; telephone 202-402-5534 (this is not a toll-free number) or email at [email protected] for a copy of the proposed forms or other available information. Persons with hearing or speech impairments may access this number through TTY by calling the toll-free Federal Relay Service at (800) 877-8339.

    FOR FURTHER INFORMATION CONTACT:

    Anna P. Guido, Reports Management Officer, QDAM, Department of Housing and Urban Development, 451 7th Street SW., Washington, DC 20410; email Anna P. Guido at [email protected] or telephone 202-402-5535. This is not a toll-free number. Persons with hearing or speech impairments may access this number through TTY by calling the toll-free Federal Relay Service at (800) 877-8339.

    Copies of available documents submitted to OMB may be obtained from Ms. Guido.

    SUPPLEMENTARY INFORMATION:

    This notice informs the public that HUD is seeking approval from OMB for the information collection described in Section A.

    A. Overview of Information Collection

    Title of Information Collection: Improving the Speed of Housing Recovery Program Launch after Severe Disaster.

    OMB Approval Number: Pending.

    Type of Request: New.

    Form Number: No forms.

    Description of the need for the information and proposed use: Since 1992, Congress has appropriated over $44 billion through HUD's Community Development Block Grant—Disaster Recovery (CDBG-DR) program to support long-term recovery in communities affected by Presidentially-declared disasters. This has included $19.7 billion for recovery from Hurricanes Katrina, Rita and Wilma in 2005, as well as $13 billion for recovery from Hurricane Sandy in 2012. These funds can be used for a wide variety of activities related to long-term recovery, including: Buyouts of homes in high-risk areas; relocation or other compensation of affected households; rehabilitation/reconstruction of damaged homes; infrastructure and public improvements; demolition and debris removal; and economic development.

    CDBG-DR funds are appropriated to HUD and then allocated to affected states and local governments. At that point, the grantees will be eager to move quickly, to develop programs to provide support to individuals and organizations that need it, and to begin recovery in earnest. But launching a disaster recovery program can be an enormous challenge. Some grantees have minimal previous experience with the base CDBG program. Even the more experienced grantees struggle with the scale of the challenge—both the level of need in the community and the amount of funds suddenly available for deployment. And there are, of course, many challenges unique to disaster recovery that grantees may never have had to deal with before. All of these factors, and more, combine to hinder the recovery of disaster-affected communities. The purpose of this project is to examine factors that contribute to delays in launching housing recovery programs in the wake of severe disasters, and to produce a guidebook that will help to accelerate that process.

    Conducting this research will require the research team (The Urban Institute, under HUD grant H-21670CA) to interview a variety of individuals with experience with disaster recovery, and the CDBG-DR program in particular.

    Respondents (i.e., affected public): This information collection will affect approximately 60 individuals who have been involved in the design and management of CDBG-DR programs, particularly those related to housing recovery. Respondents are expected to be current or former employees of state and local governments that have received CDBG-DR funding, or current or former employees of private-sector entities that have supported those grantees. The study will focus on a purposive sample of CDBG-DR grantees, selected based on the characteristics of the disaster and the grantee. This sample is expected to cover approximately 17 grantees: 12 grantees affected by 3 major disasters (4 grantees per disaster) and 5 grantees affected by smaller disasters (1 grantee per disaster). Once those grantees are selected, the research team will seek to interview an average of 4 individuals per major disaster grantee and 2 individuals per small disaster grantee (for a total of 58 respondents). Interview targets will include CDBG-DR program directors, CDBG-DR housing program managers, and other staff as needed. Interviews will be structured and will focus on important aspects of the period between the occurrence of the disaster and the completion of recovery activities, such as: program design decisions; hiring and training of staff; selection of contractors; and partnership with HUD and other recovery agencies. Interviews are expected to last an average of an hour and a half. The research team will conduct some interviews in person during site visits. The other interviews will be conducted by telephone.

    All interviews will be confidential and not attributed to individuals by name or association. Interview results will be coded for analytical purposes and used to inform the study's two key deliverables: A retrospective report on factors that contribute to rapid disaster recovery and a guidebook to help disaster-affected communities recover more quickly.

    The estimated total burden to the public for the proposed information collection, assuming an hourly cost per response based on the GS-15, step 1 hourly wage rate, is as follows:

    Information collection: Interviews with Disaster Recovery staff
    Number of respondents: 58
    Frequency of response: One time
    Responses per annum: 1
    Burden hours per response: 1.5
    Annual burden hours: 87
    Hourly cost per response: $50
    Annual cost: $4,350
    Total: 58 respondents; 87 annual burden hours; $4,350 annual cost
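
    For readers who want to trace the figures, the respondent count and burden totals above follow directly from the sampling design and the stated hourly assumptions. The following is a minimal, illustrative sketch of that arithmetic in Python, using only the numbers given in the notice (1.5 hours per interview, $50 per hour based on the GS-15, step 1 wage rate); it is not part of the official estimate.

```python
# Illustrative only: reproduces the sample-size and burden arithmetic stated in the notice.
major_disasters = 3          # major disasters in the purposive sample
grantees_per_major = 4       # grantees selected per major disaster
small_disasters = 5          # smaller disasters in the sample
grantees_per_small = 1       # grantees selected per smaller disaster

major_grantees = major_disasters * grantees_per_major     # 12
small_grantees = small_disasters * grantees_per_small     # 5
total_grantees = major_grantees + small_grantees          # 17

# 4 interviews per major-disaster grantee, 2 per small-disaster grantee
respondents = major_grantees * 4 + small_grantees * 2     # 58

hours_per_response = 1.5     # stated interview length
hourly_cost = 50             # stated GS-15, step 1 hourly wage assumption

annual_burden_hours = respondents * hours_per_response    # 87.0
annual_cost = annual_burden_hours * hourly_cost           # 4350.0

print(total_grantees, respondents, annual_burden_hours, annual_cost)
# 17 58 87.0 4350.0
```
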
    B. Solicitation of Public Comment

    This notice is soliciting comments from members of the public and affected parties concerning the collection of information described in Section A on the following:

    (1) Whether the proposed collection of information is necessary for the proper performance of the functions of the agency, including whether the information will have practical utility;

    (2) The accuracy of the agency's estimate of the burden of the proposed collection of information;

    (3) Ways to enhance the quality, utility, and clarity of the information to be collected; and

    (4) Ways to minimize the burden of the collection of information on those who are to respond, including the use of appropriate automated collection techniques or other forms of information technology, e.g., permitting electronic submission of responses. HUD encourages interested parties to submit comments in response to these questions.

    Authority:

    Section 3507 of the Paperwork Reduction Act of 1995, 44 U.S.C. Chapter 35.

    Dated: October 25, 2016. Katherine M. O'Regan, Assistant Secretary for Policy Development and Research.
    [FR Doc. 2016-26742 Filed 11-3-16; 8:45 am] BILLING CODE 4210-67-P
    DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT [Docket No. FR-5907-N-45] Federal Property Suitable as Facilities To Assist the Homeless AGENCY:

    Office of the Assistant Secretary for Community Planning and Development, HUD.

    ACTION:

    Notice.

    SUMMARY:

    This Notice identifies unutilized, underutilized, excess, and surplus Federal property reviewed by HUD for suitability for possible use to assist the homeless.

    FOR FURTHER INFORMATION CONTACT:

    Juanita Perry, Department of Housing and Urban Development, 451 Seventh Street SW., Room 7266, Washington, DC 20410; telephone (202) 402-3970; TTY number for the hearing- and speech-impaired (202) 708-2565 (these telephone numbers are not toll-free). You may also call the toll-free Title V information line at 800-927-7588 or send an email to [email protected].

    SUPPLEMENTARY INFORMATION:

    In accordance with the December 12, 1988 court order in National Coalition for the Homeless v. Veterans Administration, No. 88-2503-OG (D.D.C.), HUD publishes a Notice, on a weekly basis, identifying unutilized, underutilized, excess and surplus Federal buildings and real property that HUD has reviewed for suitability for use to assist the homeless. Today's Notice is for the purpose of announcing that no additional properties have been determined suitable or unsuitable this week.

    Dated: October 27, 2016. Brian P. Fitzmaurice, Director, Division of Community Assistance, Office of Special Needs Assistance Programs.
    [FR Doc. 2016-26468 Filed 11-3-16; 8:45 am] BILLING CODE 4210-67-P
    DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [Docket No. FWS-HQ-IA-2016-0134; FXIA16710900000-178-FF09A30000] Endangered Species; Receipt of Applications for Permit AGENCY:

    Fish and Wildlife Service, Interior.

    ACTION:

    Notice of receipt of applications for permit.

    SUMMARY:

    We, the U.S. Fish and Wildlife Service, invite the public to comment on the following applications to conduct certain activities with endangered species. With some exceptions, the Endangered Species Act (ESA) prohibits activities with listed species unless Federal authorization is acquired that allows such activities.

    DATES:

    We must receive comments or requests for documents on or before December 5, 2016.

    ADDRESSES:

    Submitting Comments: You may submit comments by one of the following methods:

    Federal eRulemaking Portal: http://www.regulations.gov. Follow the instructions for submitting comments on Docket No. FWS-HQ-IA-2016-0134.

    U.S. mail or hand-delivery: Public Comments Processing, Attn: Docket No. FWS-HQ-IA-2016-0134; U.S. Fish and Wildlife Service Headquarters, MS: BPHC; 5275 Leesburg Pike, Falls Church, VA 22041-3803.

    When submitting comments, please indicate the name of the applicant and the PRT# you are commenting on. We will post all comments on http://www.regulations.gov. This generally means that we will post any personal information you provide us (see the Public Comments section below for more information).

    Viewing Comments: Comments and materials we receive will be available for public inspection on http://www.regulations.gov, or by appointment, between 8 a.m. and 4 p.m., Monday through Friday, except Federal holidays, at the U.S. Fish and Wildlife Service, Division of Management Authority, 5275 Leesburg Pike, Falls Church, VA 22041-3803; telephone 703-358-2095.

    FOR FURTHER INFORMATION CONTACT:

    Brenda Tapia, (703) 358-2104 (telephone); (703) 358-2281 (fax); [email protected] (email).

    SUPPLEMENTARY INFORMATION:

    I. Public Comment Procedures

    A. How do I request copies of applications or comment on submitted applications?

    Send your request for copies of applications or comments and materials concerning any of the applications to the contact listed under ADDRESSES. Please include the Federal Register notice publication date, the PRT-number, and the name of the applicant in your request or submission. We will not consider requests or comments sent to an email or address not listed under ADDRESSES. If you provide an email address in your request for copies of applications, we will attempt to respond to your request electronically.

    Please make your requests or comments as specific as possible. Please confine your comments to issues for which we seek comments in this notice, and explain the basis for your comments. Include sufficient information with your comments to allow us to authenticate any scientific or commercial data you include.

    The comments and recommendations that will be most useful and likely to influence agency decisions are: (1) Those supported by quantitative information or studies; and (2) Those that include citations to, and analyses of, the applicable laws and regulations. We will not consider or include in our administrative record comments we receive after the close of the comment period (see DATES) or comments delivered to an address other than those listed above (see ADDRESSES).

    B. May I review comments submitted by others?

    Comments, including names and street addresses of respondents, will be available for public review at the street address listed under ADDRESSES. The public may review documents and other information applicants have sent in support of the application unless our allowing viewing would violate the Privacy Act or Freedom of Information Act. Before including your address, phone number, email address, or other personal identifying information in your comment, you should be aware that your entire comment—including your personal identifying information—may be made publicly available at any time. While you can ask us in your comment to withhold your personal identifying information from public review, we cannot guarantee that we will be able to do so.

    II. Background

    To help us carry out our conservation responsibilities for affected species, and in consideration of section 10(a)(1)(A) of the Endangered Species Act of 1973, as amended (16 U.S.C. 1531 et seq.), along with Executive Order 13576, “Delivering an Efficient, Effective, and Accountable Government,” and the President's Memorandum for the Heads of Executive Departments and Agencies of January 21, 2009—Transparency and Open Government (74 FR 4685; January 26, 2009), which call on all Federal agencies to promote openness and transparency in Government by disclosing information to the public, we invite public comment on these permit applications before final action is taken.

    III. Permit Applications

    Endangered Species

    Applicant: U.S. Geological Survey, National Wildlife Health Center, Madison, WI; PRT-06408C

    The applicant requests a permit to import biological samples for all wildlife species, both of wild-origin and captive-held or captive-bred for the purpose of scientific research. This notification covers activities to be conducted by the applicant over a 5-year period.

    Applicant: Cheyenne Mountain Zoological Park, Colorado Springs, CO; PRT-06157C

    The applicant requests a permit to import one live male captive-born Amur leopard (Panthera pardus orientalis) from JCS Livestock, Berks, United Kingdom, for the purpose of enhancement of the survival of the species. This notification covers activities to be conducted by the applicant over a 5-year period.

    Applicant: Tiger World Inc., Rockwell, NC; PRT-97961A

    On June 3, 2016, we published a Federal Register notice inviting the public to comment on an application for a permit to conduct certain activities with endangered species (81 FR 35792). We are now reopening the comment period to allow the public the opportunity to review additional information submitted to amend the applicant's captive-bred wildlife registration under 50 CFR 17.21(g) for the following species to enhance species propagation or survival: African lion (Panthera leo), black-and-white ruffed lemur (Varecia variegata), ring-tailed lemur (Lemur catta), red ruffed lemur (Varecia rubra), mandrill (Mandrillus sphinx), lar gibbon (Hylobates lar), clouded leopard (Neofelis nebulosa), leopard (Panthera pardus), snow leopard (Uncia uncia), Galapagos tortoise (Chelonoidis nigra), and radiated tortoise (Astrochelys radiata). This notification covers activities to be conducted by the applicant over a 5-year period.

    Multiple Applicants

    The following applicants each request a permit to import the sport-hunted trophy of one male bontebok (Damaliscus pygargus pygargus) culled from a captive herd maintained under the management program of the Republic of South Africa, for the purpose of enhancement of the survival of the species.

    Applicant: John Ferguson, Colorado Springs, CO; PRT-04220C

    Applicant: Carl Leukefeld, Lexington, KY; PRT-08238C

    Applicant: Todd Timm, Clifton, VA; PRT-08151C

    Brenda Tapia, Program Analyst/Data Administrator, Branch of Permits, Division of Management Authority.
    [FR Doc. 2016-26626 Filed 11-3-16; 8:45 am] BILLING CODE 4333-15-P
    DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs [178A2100DD/AAKC001030/A0A501010.999900 253G] Indian Gaming; Tribal-State Class III Gaming Compact Taking Effect in the State of California AGENCY:

    Bureau of Indian Affairs, Interior.

    ACTION:

    Notice.

    SUMMARY:

    The State of California and the Pala Band of Mission Indians entered into a Tribal-State compact governing Class III gaming. This notice announces that the compact is taking effect.

    DATES:

    The effective date of the compact is November 4, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Ms. Paula L. Hart, Director, Office of Indian Gaming, Office of the Assistant Secretary—Indian Affairs, Washington, DC 20240, (202) 219-4066.

    SUPPLEMENTARY INFORMATION:

    Section 11 of the Indian Gaming Regulatory Act (IGRA) requires the Secretary of the Interior to publish in the Federal Register notice of approved Tribal-State compacts that are for the purpose of engaging in Class III gaming activities on Indian lands. See Public Law 100-497, 25 U.S.C. 2701 et seq. All Tribal-State Class III compacts are subject to review and approval by the Secretary under 25 CFR 293.4. The Secretary took no action on the compact within 45 days of its submission. Therefore, the compact is considered to have been approved, but only to the extent the compact is consistent with IGRA. See 25 U.S.C. 2710(d)(8)(C).

    Dated: October 28, 2016. Lawrence S. Roberts, Principal Deputy Assistant Secretary—Indian Affairs.
    [FR Doc. 2016-26670 Filed 11-3-16; 8:45 am] BILLING CODE 4337-15-P
    DEPARTMENT OF THE INTERIOR National Park Service [NPS-MWR-KNRI-21917; 16XP103905-PPWODESCP1-PMP00UP05.YP0000-PX.PD171326E.00.1] Notice of Availability of the Draft Archeological Resources Management Plan, Environmental Impact Statement, Knife River Indian Villages National Historic Site, North Dakota AGENCY:

    National Park Service, Interior.

    ACTION:

    Notice of availability.

    SUMMARY:

    The National Park Service (NPS) announces the availability of the Draft Archeological Resources Management Plan/Environmental Impact Statement (EIS), Knife River Indian Villages National Historic Site (Park), North Dakota.

    DATES:

    All comments must be postmarked or transmitted not later than January 3, 2017.

    ADDRESSES:

    A limited number of hard-copies of the Draft EIS may be picked up in-person or may be obtained by making a request in writing to Knife River Indian Villages National Historic Site, P.O. Box 9, Stanton, North Dakota 58571. The document is also available on the internet at the NPS Planning, Environment, and Public Comment Web site at: https://Parkplanning.nps.gov/projectHome.cfm?projectID=34314

    FOR FURTHER INFORMATION CONTACT:

    Superintendent Craig Hansen can be reached at the address above, by telephone at (701) 745-3741 (ext. 209), or via email at [email protected].

    SUPPLEMENTARY INFORMATION:

    This process has been conducted pursuant to the National Environmental Policy Act of 1969 (42 U.S.C. 4321 et seq.) and the regulations of the Department of the Interior (43 CFR part 46). The purpose of the plan is to provide a management framework for proactive, sustainable archeological resource protection at the Park for the next 30 years. The NPS has identified four major threats to archeological resources. While riverbank erosion is the most visible and documented threat to archeological resources, additional impacts occur from pocket gopher activity, vegetation encroachment, and location of Park infrastructure.

    Riverbank erosion has been an ongoing problem since the Park was created, and this ongoing impact has the greatest adverse effect on archeological resources. Over the past few decades, village remnants and archeological sites adjacent to the Knife River have experienced measurable erosion. In addition, Northern pocket gophers affect archeological sites by displacing soil and artifacts from chronologically stratified deposits. The encroachment of woody and overgrown vegetation also causes multiple issues for archeological sites: root growth displaces chronological layers, similar to the effect of pocket gophers.

    The maintenance facility for the Park is a visual intrusion in the cultural landscape, particularly for the Big Hidatsa site, a designated National Historic Landmark. The North Dakota State Historic Preservation Office (SHPO) and the Mandan, Hidatsa, and Arikara Nation (MHA Nation) Tribal Historic Preservation Office have recommended that the facility be relocated to remove this visual impact from the site. In addition, the maintenance facility is located near burial sites and areas considered sacred by the tribes traditionally associated with the resources present in the Park.

    Finally, the Museum Collection Storage Facility, located in the basement of the Visitor's Center, has had water infiltration issues. A final goal of this plan is to develop a remedy for this problem; otherwise, the storage facility will need to be replaced.

    Range of Alternatives Considered: The alternatives analyzed in the Draft EIS are summarized below.

    Alternative 1: No-Action Alternative: Under the no-action alternative, management of archeological resources at the Park would continue as currently implemented.

    Management would respond to archeological resource threats but without the benefit of site prioritization and a proactive adaptive management framework. Under the no-action alternative, existing Park infrastructure would remain in place. Repairs to the existing visitor center to address water infiltration issues would occur. Ongoing riverbank erosion, pocket gopher control, and vegetation encroachment management activities would continue.

    Elements Common to All Action Alternatives: Under both action alternatives, archeological resources management at the Park would be executed within an adaptive management framework. This framework would be used to address riverbank erosion, gopher control, and woody vegetation encroachment. The project team developed a process to prioritize archeological sites based on the importance of the resource and the level of risk of loss of the resource to inform management decisions.

    The NPS has developed indicators and standards for managing the archeological resources based on the Park's purpose, significance, objectives, and desired conditions. These indicators and standards will serve as a tool to monitor and evaluate the adaptive management actions.

    Alternative 2: Relocate Facilities in the Park: Under alternative 2, archeological resources would be managed under the adaptive management framework described above. Under this alternative, the maintenance facility would be moved to another location in the Park and the existing maintenance buildings would be removed.

    Additionally, the museum collection would be moved if the project to stop water infiltration in the visitor center building is unsuccessful or if the Park identifies funding or partnership opportunities to relocate the museum collection out of the basement of the Visitor's Center to a more suitable location.

    Alternative 3: Locate Facilities Off-Site: Under alternative 3, archeological resources would be managed under the adaptive management framework described above. Under this alternative, the Park would relocate the maintenance facility outside the Park boundary and remove the existing maintenance buildings from the Park landscape. Similar to alternative 2, the museum collection would be moved if the project to stop water infiltration in the visitor center building is unsuccessful or if the Park identifies funding or partnership opportunities to relocate the museum collection out of the basement to a more suitable location.

    NPS Preferred Alternative: The preferred alternative is likely to be a combination of alternatives 2 and 3. The NPS would prefer to remove the maintenance facility from Park property, and stop water infiltration at the visitor center so the museum collection can remain in place. While moving the maintenance facility off-site is preferred to best protect Park resources, the ability to relocate is dependent on the availability of suitable property at a reasonable price. If suitable sites are not available when the Park is ready to relocate, the Park will construct the facilities within the Park.

    Comments on this plan may be transmitted electronically through the project Web site (address above). If preferred, you may mail written comments directly to the Superintendent at the address above.

    Before including your address, phone number, email address, or other personal identifying information in your comment, you should be aware that your entire comment—including your personal identifying information—may be made publicly available at any time. While you can ask us in your comment to withhold your personal identifying information from public review, we cannot guarantee that we will be able to do so.

    Dated: September 9, 2016. Patricia S. Trap, Deputy Regional Director, Midwest Region.
    [FR Doc. 2016-26690 Filed 11-3-16; 8:45 am] BILLING CODE 4312-52-P
    DEPARTMENT OF THE INTERIOR National Park Service Record of Decision for Non-Federal Oil and Gas Regulation Revision Environmental Impact Statement (EIS) AGENCY:

    National Park Service, Interior.

    ACTION:

    Notice of availability; record of decision.

    SUMMARY:

    The National Park Service (NPS) has prepared and approved a Record of Decision (ROD) for the Nonfederal Oil and Gas Regulations (36 CFR part 9, subpart B) Revisions. Approval of this Record of Decision completes the National Environmental Policy Act process.

    DATES:

    November 4, 2016.

    ADDRESSES:

    Copies of the ROD are available for public review at http://parkplanning.nps.gov/ROD_9B.

    FOR FURTHER INFORMATION CONTACT:

    David Steensen, Chief, Geologic Resources Division, National Park Service, PO Box 25287, Denver, CO 80225; phone (303) 969-2014. The responsible official for this ROD is Jonathan Jarvis, Director, National Park Service, 1849 C Street NW., Washington, DC 20240.

    SUPPLEMENTARY INFORMATION:

    This process was conducted in accordance with the requirements of the National Environmental Policy Act of 1969, as amended (NEPA) (42 U.S.C. 4321 et seq.), its implementing regulations (40 CFR parts 1500-1508), the Department of the Interior NEPA regulations (43 CFR part 46), and NPS Director's Order 12, Conservation Planning, Environmental Impact Analysis and Decision-Making and accompanying handbook. The original Notice of Intent (NOI) initiating the NEPA process was published in the Federal Register on December 30, 2010 (75 FR 82362). The NOI specifically solicited public comment on draft purpose and need statements, objectives, and issues and concerns related to revisions of the NPS regulations governing non-federal oil and gas development on units of the national park system. The NOI also requested public comment on possible alternatives the NPS should consider in revising the regulations. On October 23, 2015, the NPS released for public review the draft EIS for the Proposed Revision of 9B Regulations Governing Nonfederal Oil and Gas Activities through the publication of a Notice of Availability in the Federal Register (80 FR 64445). The Environmental Protection Agency also issued a Notice of Availability for the draft EIS that was published in the Federal Register on October 30, 2015 (80 FR 66898). On September 2, 2016, the Environmental Protection Agency issued a Notice of Availability for the plan/Final Environmental Impact Statement (FEIS) that was published in the Federal Register (81 FR 60697); NPS also released the FEIS for public review on September 2, 2016, and published its own NPS Notice of Availability in the Federal Register on September 7, 2016 (81 FR 61715).

    The FEIS evaluated the environmental consequences of three alternatives, Alternative A (no action), Alternative B (preferred and environmentally preferable alternative), and Alternative C.

    Alternative B includes the following alternative elements:

    • Elimination of two regulatory provisions that exempt 60% of the oil and gas operations in System units. All operators in System units would be required to comply with the 9B regulations.

    • Elimination of the financial assurance (bonding) cap. Financial assurance would be equal to the reasonable estimated cost of site reclamation.

    • Improving enforcement authority by incorporating existing NPS penalty provisions. Law enforcement staff would have authority to write citations for noncompliance with the regulations.

    • Authorizing compensation to the federal government for new access on federal lands and waters outside the boundary of an operator's mineral right.

    • Reformatting the regulations to make it easier to identify an operator's information requirements and operating standards that apply to each type of operation.

    Alternative C includes all the proposed changes in Alternative B, except:

    Directional drilling operations: Alternative C would expand the scope of the regulations to encompass surface and subsurface directional drilling operations outside the boundary of a System unit.

    Proposed Operations Located Wholly on Non-Federally Owned Land Within the Boundary of a System Unit: This provision would allow for an exemption to the operations permit requirement for those operations located wholly on non-federally owned land within a System unit, if the operator could demonstrate that the proposed operation would have no effect to NPS administered resources or values.

    Joint and Several Liability: This provision would hold mineral owners and their lessees jointly and severally liable for all obligations to comply with the terms and conditions of an approved permit and any other applicable provision under these regulations.

    The NPS consulted with traditionally associated American Indian tribes and groups, State Historic Preservation Officers, United States Fish and Wildlife Service, United States Environmental Protection Agency, state oil and gas regulatory commissions, and the state of Alaska.

    The ROD includes a summary of the purpose and need for action, synopses of alternatives considered and analyzed in detail, a description of the selected alternative, including measures that are included in the rule to minimize environmental harm, the basis for the decision, a description of the environmentally preferable alternative, and findings on impairment of park resources. The ROD is not the final agency action for those elements of the EIS that require promulgation of regulations to be effective. Promulgation of such regulations will constitute the final agency action for such elements, and will be published in a separate Federal Register document.

    Dated: October 23, 2016. Jonathan B. Jarvis, Director, National Park Service.
    [FR Doc. 2016-26492 Filed 11-3-16; 8:45 am] BILLING CODE 4312-52-P
    DEPARTMENT OF THE INTERIOR Bureau of Ocean Energy Management [MMAA104000] Notice on Outer Continental Shelf Oil and Gas Lease Sales AGENCY:

    Bureau of Ocean Energy Management (BOEM), Interior.

    ACTION:

    List of Restricted Joint Bidders.

    SUMMARY:

    Pursuant to the joint bidding provisions of 30 CFR 556.511—556.515, the Director of the Bureau of Ocean Energy Management is publishing a List of Restricted Joint Bidders. Each entity within one of the following groups is restricted from bidding with any entity in any of the other following groups at Outer Continental Shelf oil and gas lease sales to be held during the bidding period November 1, 2016, through April 30, 2017. This List of Restricted Joint Bidders will cover the period November 1, 2016, through April 30, 2017, and replace the prior list published on May 17, 2016, which covered the period of May 1, 2016, through October 31, 2016.

    Group I: BP America Production Company; BP Exploration & Production Inc.; BP Exploration (Alaska) Inc.
    Group II: Chevron Corporation; Chevron U.S.A. Inc.; Chevron Midcontinent, L.P.; Unocal Corporation; Union Oil Company of California; Pure Partners, L.P.
    Group III: Eni Petroleum Co. Inc.; Eni Petroleum US LLC; Eni Oil US LLC; Eni Marketing Inc.; Eni BB Petroleum Inc.; Eni US Operating Co. Inc.; Eni BB Pipeline LLC
    Group IV: Exxon Mobil Corporation; ExxonMobil Exploration Company
    Group V: Petroleo Brasileiro S.A.; Petrobras America Inc.
    Group VI: Shell Oil Company; Shell Offshore Inc.; SWEPI LP; Shell Frontier Oil & Gas Inc.; SOI Finance Inc.; Shell Gulf of Mexico Inc.
    Group VII: Statoil ASA; Statoil Gulf of Mexico LLC; Statoil USA E&P Inc.; Statoil Gulf Properties Inc.
    Group VIII: Total E&P USA, Inc.
    Abigail Ross Hopper, Director, Bureau of Ocean Energy Management.
    [FR Doc. 2016-26737 Filed 11-3-16; 8:45 am] BILLING CODE 4310-MR-P
    DEPARTMENT OF THE INTERIOR Bureau of Reclamation [RR01041000, 17XR0680G3, RX.16786921.2000100] Notice of Additional Scoping Meeting for the Columbia River System Operations Environmental Impact Statement AGENCIES:

    Bureau of Reclamation, Interior.

    ACTION:

    Notice.

    SUMMARY:

    The Bureau of Reclamation, along with the U.S. Army Corps of Engineers and the Bonneville Power Administration as joint lead agencies, are adding one public scoping meeting to invite the public to comment on the scope of the Columbia River System Operations Environmental Impact Statement.

    DATES:

    The additional scoping meeting will be held on Monday, November 21, 2016, 4 p.m. to 7 p.m., in Pasco, Washington.

    ADDRESSES:

    The meeting will be held at the Holiday Inn Express & Suites Pasco-Tri Cities, 4525 Convention Place, Pasco, Washington 99301.

    FOR FURTHER INFORMATION CONTACT:

    Call the toll-free telephone number 1-(800) 290-5033 or email [email protected]. Additional information can be found at the project Web site: www.crso.info.

    SUPPLEMENTARY INFORMATION:

    One scoping meeting is being added to the schedule. All other scoping meetings for the Columbia River System Operations Environmental Impact Statement were previously announced in a notice that was published in the Federal Register on September 30, 2016 (81 FR 67382). As the project evolves, there may be additional scoping meetings. All additional scoping meetings for this project will be announced on the project Web site at www.crso.info.

    Dated: October 26, 2016. Lorri J. Lee, Regional Director—Pacific Northwest Region, Bureau of Reclamation.
    [FR Doc. 2016-26740 Filed 11-3-16; 8:45 am] BILLING CODE 4332-90-P
    INTERNATIONAL TRADE COMMISSION Notice of Receipt of Complaint; Solicitation of Comments Relating to the Public Interest AGENCY:

    U.S. International Trade Commission.

    ACTION:

    Notice.

    SUMMARY:

    Notice is hereby given that the U.S. International Trade Commission has received a complaint entitled Certain UV Curable Coatings for Optical Fibers, Coated Optical Fibers, and Products Containing Same, DN 3181; the Commission is soliciting comments on any public interest issues raised by the complaint or complainant's filing under the Commission's Rules of Practice and Procedure.

    FOR FURTHER INFORMATION CONTACT:

    Lisa R. Barton, Secretary to the Commission, U.S. International Trade Commission, 500 E Street SW., Washington, DC 20436, telephone (202) 205-2000. The public version of the complaint can be accessed on the Commission's Electronic Document Information System (EDIS) at https://edis.usitc.gov, and will be available for inspection during official business hours (8:45 a.m. to 5:15 p.m.) in the Office of the Secretary, U.S. International Trade Commission, 500 E Street SW., Washington, DC 20436, telephone (202) 205-2000.

    General information concerning the Commission may also be obtained by accessing its Internet server at United States International Trade Commission (USITC) at https://www.usitc.gov. The public record for this investigation may be viewed on the Commission's Electronic Document Information System (EDIS) at https://edis.usitc.gov. Hearing-impaired persons are advised that information on this matter can be obtained by contacting the Commission's TDD terminal on (202) 205-1810.

    SUPPLEMENTARY INFORMATION:

    The Commission has received a complaint and a submission pursuant to § 210.8(b) of the Commission's Rules of Practice and Procedure filed on behalf of DSM Desotech, Inc. and DSM IP Assets B.V. on October 31, 2016. The complaint alleges violations of section 337 of the Tariff Act of 1930 (19 U.S.C. 1337) in the importation into the United States, the sale for importation, and the sale within the United States after importation of certain UV curable coatings for optical fibers, coated optical fibers, and products containing same. The complaint names as respondents Momentive UV Coatings (Shanghai) Co., Ltd. of China; and OFS Fitel, LLC of Norcross, GA. The complainant requests that the Commission issue a limited exclusion order and cease and desist orders, and impose a bond upon respondents' alleged infringing articles during the 60-day Presidential review period pursuant to 19 U.S.C. 1337(j).

    Proposed respondents, other interested parties, and members of the public are invited to file comments, not to exceed five (5) pages in length, inclusive of attachments, on any public interest issues raised by the complaint or § 210.8(b) filing. Comments should address whether issuance of the relief specifically requested by the complainant in this investigation would affect the public health and welfare in the United States, competitive conditions in the United States economy, the production of like or directly competitive articles in the United States, or United States consumers.

    In particular, the Commission is interested in comments that:

    (i) Explain how the articles potentially subject to the requested remedial orders are used in the United States;

    (ii) identify any public health, safety, or welfare concerns in the United States relating to the requested remedial orders;

    (iii) identify like or directly competitive articles that complainant, its licensees, or third parties make in the United States which could replace the subject articles if they were to be excluded;

    (iv) indicate whether complainant, complainant's licensees, and/or third party suppliers have the capacity to replace the volume of articles potentially subject to the requested exclusion order and/or a cease and desist order within a commercially reasonable time; and

    (v) explain how the requested remedial orders would impact United States consumers.

    Written submissions must be filed no later than close of business, eight calendar days after the date of publication of this notice in the Federal Register. There will be further opportunities for comment on the public interest after the issuance of any final initial determination in this investigation.

    Persons filing written submissions must file the original document electronically on or before the deadlines stated above and submit 8 true paper copies to the Office of the Secretary by noon the next day pursuant to § 210.4(f) of the Commission's Rules of Practice and Procedure (19 CFR 210.4(f)). Submissions should refer to the docket number (“Docket No. 3181”) in a prominent place on the cover page and/or the first page. (See Handbook for Electronic Filing Procedures, Electronic Filing Procedures).1 Persons with questions regarding filing should contact the Secretary (202-205-2000).

    1 Handbook for Electronic Filing Procedures: https://www.usitc.gov/documents/handbook_on_filing_procedures.pdf.

    Any person desiring to submit a document to the Commission in confidence must request confidential treatment. All such requests should be directed to the Secretary to the Commission and must include a full statement of the reasons why the Commission should grant such treatment. See 19 CFR 201.6. Documents for which confidential treatment by the Commission is properly sought will be treated accordingly. All information, including confidential business information and documents for which confidential treatment is properly sought, submitted to the Commission for purposes of this Investigation may be disclosed to and used: (i) By the Commission, its employees and Offices, and contract personnel (a) for developing or maintaining the records of this or a related proceeding, or (b) in internal investigations, audits, reviews, and evaluations relating to the programs, personnel, and operations of the Commission including under 5 U.S.C. Appendix 3; or (ii) by U.S. government employees and contract personnel,2 solely for cybersecurity purposes. All nonconfidential written submissions will be available for public inspection at the Office of the Secretary and on EDIS.3

    2 All contract personnel will sign appropriate nondisclosure agreements.

    3 Electronic Document Information System (EDIS): http://edis.usitc.gov.

    This action is taken under the authority of section 337 of the Tariff Act of 1930, as amended (19 U.S.C. 1337), and of §§ 201.10 and 210.8(c) of the Commission's Rules of Practice and Procedure (19 CFR 201.10, 210.8(c)).

    By order of the Commission.

    Issued: October 31, 2016. Lisa R. Barton, Secretary to the Commission.
    [FR Doc. 2016-26649 Filed 11-3-16; 8:45 am] BILLING CODE 7020-02-P
    DEPARTMENT OF JUSTICE Bureau of Alcohol, Tobacco, Firearms and Explosives [OMB Number 1140-0008] Agency Information Collection Activities; Proposed eCollection eComments Requested; Application and Permit for Permanent Exportation of Firearms (National Firearms Act) ATF F 9 (5320.9) AGENCY:

    Bureau of Alcohol, Tobacco, Firearms and Explosives, Department of Justice.

    ACTION:

    30-Day notice.

    SUMMARY:

    The Department of Justice (DOJ), Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), will submit the following information collection request to the Office of Management and Budget (OMB) for review and approval in accordance with the Paperwork Reduction Act of 1995. The proposed information collection was previously published in the Federal Register at 81 FR 60023, on August 31, 2016, allowing for a 60-day comment period.

    DATES:

    Comments are encouraged and will be accepted for an additional 30 days until December 5, 2016.

    FOR FURTHER INFORMATION CONTACT:

    If you have additional comments, particularly with respect to the estimated public burden or associated response time, have suggestions, need a copy of the proposed information collection instrument with instructions, or desire any other additional information, please contact Kenneth Mason, Firearms and Explosives Services Specialist, National Firearms Act Branch, either by mail at 244 Needy Road, Martinsburg, WV 25405, or by email at [email protected]. Written comments and/or suggestions can also be directed to the Office of Management and Budget, Office of Information and Regulatory Affairs, Attention Department of Justice Desk Officer, Washington, DC 20503 or sent to [email protected].

    SUPPLEMENTARY INFORMATION:

    Written comments and suggestions from the public and affected agencies concerning the proposed collection of information are encouraged. Your comments should address one or more of the following four points:

    • Evaluate whether the proposed collection of information is necessary for the proper performance of the functions of the agency, including whether the information will have practical utility;

    • Evaluate the accuracy of the agency's estimate of the burden of the proposed collection of information, including the validity of the methodology and assumptions used;

    • Evaluate whether and if so how the quality, utility, and clarity of the information to be collected can be enhanced; and

    • Minimize the burden of the collection of information on those who are to respond, including through the use of appropriate automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses.

    Overview of This Information Collection

    1. Type of Information Collection: Revision of a currently approved collection.

    2. The Title of the Form/Collection: Application and Permit for Permanent Exportation of Firearms (National Firearms Act).

    3. The agency form number, if any, and the applicable component of the Department sponsoring the collection:

    Form number: ATF F 9 (5320.9).

    Component: Bureau of Alcohol, Tobacco, Firearms and Explosives, U.S. Department of Justice.

    4. Affected public who will be asked or required to respond, as well as a brief abstract:

    Primary: Business or other for-profit.

    Other: Individuals or households.

    Abstract: ATF Form 9 (5320.9) is typically used by a Federal firearms licensee who has paid the special (occupational) tax to deal in, manufacture, or import NFA firearms. The form must be filed (in quadruplicate) for approval to permanently export NFA firearms registered in the National Firearms Registration and Transfer Record. Once authorization has been granted, one copy is retained by ATF and the remaining copies are returned to the exporter to establish that the exportation took place and to claim relief from liability for the transfer tax.

    5. An estimate of the total number of respondents and the amount of time estimated for an average respondent to respond: An estimated 1,339 respondents will utilize the form, and it will take each respondent 18 minutes to complete the form.

    6. An estimate of the total public burden (in hours) associated with the collection: The estimated annual public burden associated with this collection is 401 hours.
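
    As a cross-check, the annual burden follows from the respondent count and per-response time stated above. A minimal, illustrative sketch of that arithmetic follows; the published total of 401 hours appears to drop the fractional hour rather than round it.

```python
# Illustrative only: burden arithmetic for ATF F 9 (5320.9) as stated in the notice.
respondents = 1339           # estimated respondents per year
minutes_per_response = 18    # estimated time to complete the form

total_minutes = respondents * minutes_per_response  # 24,102 minutes
total_hours = total_minutes / 60                     # 401.7 hours

print(total_hours)  # 401.7 -> reported in the notice as 401 hours
```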

    If additional information is required contact: Jerri Murray, Department Clearance Officer, United States Department of Justice, Justice Management Division, Policy and Planning Staff, Two Constitution Square, 145 N Street NE., Room 3E-405B, Washington, DC 20530.

    Dated: November 1, 2016. Jerri Murray, Department Clearance Officer for PRA, U.S. Department of Justice.
    [FR Doc. 2016-26704 Filed 11-3-16; 8:45 am] BILLING CODE 4410-FY-P
    DEPARTMENT OF JUSTICE [OMB Number 1117-0052] Agency Information Collection Activities; Proposed eCollection eComments Requested; Extension of a Currently Approved Collection: National Drug Threat Survey AGENCY:

    Drug Enforcement Administration, Department of Justice.

    ACTION:

    30-Day notice.

    SUMMARY:

    The Department of Justice (DOJ), Drug Enforcement Administration, will be submitting the following information collection request to the Office of Management and Budget (OMB) for review and approval in accordance with the Paperwork Reduction Act of 1995. This collection was previously published in the Federal Register at 81 FR 59656, on August 30, 2016, allowing for a 60 day comment period.

    DATES:

    Comments are encouraged and will be accepted for an additional 30 days until December 5, 2016.

    FOR FURTHER INFORMATION CONTACT:

    If you have additional comments especially on the estimated public burden or associated response time, suggestions, or need a copy of the proposed information collection instrument with instructions or additional information, please contact Kirsten Waters, Unit Chief, Domestic Strategic Intelligence Unit, Office of Strategic Intelligence and Programs, Drug Enforcement Administration, 8701 Morrissette Drive, Springfield, VA 22152. Written comments and/or suggestions can also be directed to the Office of Management and Budget, Office of Information and Regulatory Affairs, Attention Department of Justice Desk Officer, Washington, DC 20530 or sent to [email protected].

    SUPPLEMENTARY INFORMATION:

    Written comments and suggestions from the public and affected agencies concerning the proposed collection of information are encouraged. Your comments should address one or more of the following four points:

    —Evaluate whether the proposed collection of information is necessary for the proper performance of the functions of the agency, including whether the information will have practical utility;

    —Evaluate the accuracy of the agency's estimate of the burden of the proposed collection of information, including the validity of the methodology and assumptions used;

    —Evaluate whether and if so how the quality, utility, and clarity of the information to be collected can be enhanced; and

    —Minimize the burden of the collection of information on those who are to respond, including through the use of appropriate automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses.

    Overview of This Information Collection

    1. Type of Information Collection: Extension of a currently approved collection.

    2. The Title of the Form/Collection: National Drug Threat Survey.

    3. The agency form number, if any, and the applicable component of the Department sponsoring the collection: None.

    4. Affected public who will be asked or required to respond, as well as a brief abstract: The affected public includes state, local and tribal law enforcement agencies. Combined with other Federal, state, and local information, the survey is used to present an accurate picture of the national drug threat.

    5. An estimate of the total number of respondents and the amount of time estimated for an average respondent to respond: It is estimated that approximately 12,782 respondents will complete the survey within approximately 33 minutes.

    6. An estimate of the total public burden (in hours) associated with the collection: The estimated public burden associated with this collection is 4,218 hours. This figure was derived by multiplying the number of respondents (12,782) × frequency of response (1) × hours (0.33). The estimated time for response is a conservative estimate. The technology available to the respondent will further reduce response time.
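
    The calculation above can be traced with the figures as printed. Note that the notice estimates the survey itself at approximately 33 minutes but applies 0.33 hours (roughly 20 minutes) per response in the burden formula; the minimal, illustrative sketch below follows the formula as printed.

```python
# Illustrative only: National Drug Threat Survey burden formula as printed in the notice.
respondents = 12782        # estimated number of responding agencies
frequency = 1              # responses per respondent per year
hours_per_response = 0.33  # figure used in the published calculation

burden_hours = respondents * frequency * hours_per_response
print(round(burden_hours))  # 4218
```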

    If additional information is required contact: Jerri Murray, Department Clearance Officer, United States Department of Justice, Justice Management Division, Policy and Planning Staff, Two Constitution Square, 145 N Street NE., 3E.405B, Washington, DC 20530.

    Dated: November 1, 2016. Jerri Murray, Department Clearance Officer for PRA, U.S. Department of Justice.
    [FR Doc. 2016-26703 Filed 11-3-16; 8:45 am] BILLING CODE 4410-09-P
    DEPARTMENT OF JUSTICE [OMB Number 1121-0030] Agency Information Collection Activities; Proposed eCollection eComments Requested; Extension of a Currently Approved Collection: Capital Punishment Report of Inmates Under Sentence of Death AGENCY:

    Bureau of Justice Statistics, Department of Justice.

    ACTION:

    30-Day notice.

    SUMMARY:

    The Department of Justice (DOJ), Office of Justice Programs, Bureau of Justice Statistics, will be submitting the following information collection request to the Office of Management and Budget (OMB) for review and approval in accordance with the Paperwork Reduction Act of 1995. This proposed information collection was previously published in the Federal Register at 81 FR 41352-41353, on June 24, 2016, allowing for a 60-day comment period.

    DATES:

    Comments are encouraged and will be accepted for 30 days until December 5, 2016.

    FOR FURTHER INFORMATION CONTACT:

    If you have additional comments especially on the estimated public burden or associated response time, suggestions, or need a copy of the proposed information collection instrument with instructions or additional information, please contact Tracy L. Snell, Statistician, Bureau of Justice Statistics, 810 Seventh Street NW., Washington, DC 20531 (email: [email protected]; telephone: 202-616-3288). Written comments and/or suggestions can also be directed to the Office of Management and Budget, Office of Information and Regulatory Affairs, Attention Department of Justice Desk Officer, Washington, DC 20530 or sent to [email protected].

    SUPPLEMENTARY INFORMATION:

    Written comments and suggestions from the public and affected agencies concerning the proposed collection of information are encouraged. Your comments should address one or more of the following four points:

    —Evaluate whether the proposed collection of information is necessary for the proper performance of the functions of the Bureau of Justice Statistics, including whether the information will have practical utility;
    —Evaluate the accuracy of the agency's estimate of the burden of the proposed collection of information, including the validity of the methodology and assumptions used;
    —Evaluate whether, and if so how, the quality, utility, and clarity of the information to be collected can be enhanced; and
    —Minimize the burden of the collection of information on those who are to respond, including through the use of appropriate automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses.

    Overview of this information collection:

    (1) Type of Information Collection: Extension of a currently approved collection.

    (2) The Title of the Form/Collection: Capital Punishment Report of Inmates under Sentence of Death.

    (3) The agency form number, if any, and the applicable component of the Department sponsoring the collection: The form numbers for the questionnaires are: NPS-8, Report of Inmates under Sentence of Death; NPS-8A Update Report of Inmate under Sentence of Death; NPS-8B Status of Death Penalty—No Statute in Force; and NPS-8C Status of Death Penalty—Statute in Force. The applicable component within the Department of Justice is the Bureau of Justice Statistics, in the Office of Justice Programs.

    (4) Affected public who will be asked or required to respond, as well as a brief abstract: Respondents will be staff from state departments of correction, state Attorneys General, and the Federal Bureau of Prisons. Staff responsible for keeping records on inmates under sentence of death in their jurisdiction and in their custody are asked to provide information for the following categories: Condemned inmates' demographic characteristics, legal status at the time of capital offense, capital offense for which imprisoned, number of death sentences imposed, criminal history information, reason for removal and current status if no longer under sentence of death, method of execution, and cause of death by means other than execution. BJS plans to publish this information in reports and reference it when responding to queries from the U.S. Congress, Executive Office of the President, the U.S. Supreme Court, state officials, international organizations, researchers, students, the media, and others interested in criminal justice statistics.

    (5) An estimate of the total number of respondents and the amount of time estimated for an average respondent to respond: 117 responses at 30 minutes each for the NPS-8; 3,215 responses at 30 minutes each for the NPS-8A; and 52 responses at 15 minutes each for the NPS-8B or NPS-8C.

    (6) An estimate of the total public burden (in hours) associated with the collection: There are an estimated 1,539.5 annual total burden hours associated with the collection.

    If additional information is required contact: Jerri Murray, Department Clearance Officer, United States Department of Justice, Justice Management Division, Policy and Planning Staff, Two Constitution Square, 145 N Street NE., 3E.405B, Washington, DC 20530.

    Dated: November 1, 2016. Jerri Murray, Department Clearance Officer for PRA, U.S. Department of Justice.
    [FR Doc. 2016-26705 Filed 11-3-16; 8:45 am] BILLING CODE 4410-18-P
    DEPARTMENT OF JUSTICE Notice of Lodging of Proposed Consent Decree Under the Comprehensive Environmental Response, Compensation, and Liability Act

    On October 31, 2016, the Department of Justice lodged a proposed Consent Decree with the United States District Court for the District of Connecticut in the lawsuit entitled United States and State of Connecticut v. Eastgate Plaza, LLC, Civil Action No. 3:16-cv-01796.

    In the Complaint, the United States, on behalf of the U.S. Environmental Protection Agency (EPA), and the State of Connecticut, on behalf of the Connecticut Department of Energy and Environmental Protection (CT DEEP), allege that the defendant, Eastgate Plaza, LLC, is liable under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), 42 U.S.C. 9601, et seq., in connection with the Scovill Industrial Landfill Superfund Site in Waterbury, Connecticut. The consent decree is based in part on Eastgate's limited financial circumstances. The proposed Consent Decree requires Eastgate to, among other things, pay $100,000 toward EPA's and CT DEEP's past response costs; provide access to its property to allow for remedial actions to take place; institute institutional controls to restrict development and excavation of that property; and consent to a judgment in the amount of $13.7 million, which will be placed as a lien against the property.

    The publication of this notice opens a period for public comment on the Consent Decree. Comments should be addressed to the Assistant Attorney General, Environment and Natural Resources Division, and should refer to United States and State of Connecticut v. Eastgate Plaza, LLC, D.J. Ref. No. 90-11-3-11297. All comments must be submitted no later than thirty (30) days after the publication date of this notice. Comments may be submitted either by email or by mail:

    To submit comments, send them to:

    By email: [email protected].

    By mail: Assistant Attorney General, U.S. DOJ—ENRD, P.O. Box 7611, Washington, DC 20044-7611.

    During the public comment period, the Consent Decree may be examined and downloaded at this Justice Department Web site: https://www.justice.gov/enrd/consent-decrees. We will provide a paper copy of the Consent Decree upon written request and payment of reproduction costs. Please mail your request and payment to: Consent Decree Library, U.S. DOJ—ENRD, P.O. Box 7611, Washington, DC 20044-7611.

    Please enclose a check or money order for $10.75 (25 cents per page reproduction cost, including Appendices), payable to the United States Treasury.

    Robert E. Maher Jr., Assistant Chief, Environmental Enforcement Section, Environment & Natural Resources Division.
    [FR Doc. 2016-26694 Filed 11-3-16; 8:45 am] BILLING CODE 4410-15-P
    DEPARTMENT OF LABOR Employment and Training Administration Federal-State Unemployment Compensation Program: Certifications for 2016 Under the Federal Unemployment Tax Act AGENCY:

    Employment and Training Administration, Labor.

    ACTION:

    Notice.

    SUMMARY:

    The Secretary of Labor signed the annual certifications under the Federal Unemployment Tax Act, 26 U.S.C. 3301 et seq., thereby enabling employers who make contributions to state unemployment funds to obtain certain credits against their liability for the federal unemployment tax. By letter, the certifications were transmitted to the Secretary of the Treasury. The letter and certifications are printed below.

    Signed in Washington, DC, October 31, 2016. Portia Wu, Assistant Secretary, Employment and Training Administration.

    October 31, 2016

    The Honorable Jacob J. Lew, Secretary of the Treasury, Department of the Treasury, 1500 Pennsylvania Avenue, N.W., Washington, DC 20220

    Dear Secretary Lew: Transmitted herewith are an original and one copy of the certifications of the states and their unemployment compensation laws for the 12-month period ending on October 31, 2016. One is required with respect to the normal federal unemployment tax credit by Section 3304 of the Internal Revenue Code of 1986 (IRC), and the other is required with respect to the additional tax credit by Section 3303 of the IRC. Both certifications list all 53 jurisdictions.

    Sincerely,
    THOMAS E. PEREZ
    Enclosures

    UNITED STATES DEPARTMENT OF LABOR
    OFFICE OF THE SECRETARY
    WASHINGTON, D.C.
    CERTIFICATION OF STATES TO THE SECRETARY OF THE TREASURY PURSUANT TO SECTION 3304(c) OF THE INTERNAL REVENUE CODE OF 1986

    In accordance with the provisions of Section 3304(c) of the Internal Revenue Code of 1986 (26 U.S.C. 3304(c)), I hereby certify the following named states to the Secretary of the Treasury for the 12-month period ending on October 31, 2016, in regard to the unemployment compensation laws of those states, which heretofore have been approved under the Federal Unemployment Tax Act:

    Alabama Alaska Arizona Arkansas California Colorado Connecticut Delaware District of Columbia Florida Georgia Hawaii Idaho Illinois Indiana Iowa Kansas Kentucky Louisiana Maine Maryland Massachusetts Michigan Minnesota Mississippi Missouri Montana Nebraska Nevada New Hampshire New Jersey New Mexico New York North Carolina North Dakota Ohio Oklahoma Oregon Pennsylvania Puerto Rico Rhode Island South Carolina South Dakota Tennessee Texas Utah Vermont Virginia Virgin Islands Washington West Virginia Wisconsin Wyoming

    This certification is for the maximum normal credit allowable under Section 3302(a) of the Code.

    Signed at Washington, D.C., on October 31, 2016. THOMAS E. PEREZ

    UNITED STATES DEPARTMENT OF LABOR
    OFFICE OF THE SECRETARY
    WASHINGTON, D.C.
    CERTIFICATION OF STATE UNEMPLOYMENT COMPENSATION LAWS TO THE SECRETARY OF THE TREASURY PURSUANT TO SECTION 3303(b)(1) OF THE INTERNAL REVENUE CODE OF 1986

    In accordance with the provisions of paragraph (1) of Section 3303(b) of the Internal Revenue Code of 1986 (26 U.S.C. 3303(b)(1)), I hereby certify the unemployment compensation laws of the following named states, which heretofore have been certified pursuant to paragraph (3) of Section 3303(b) of the Code, to the Secretary of the Treasury for the 12-month period ending on October 31, 2016:

    Alabama Alaska Arizona Arkansas California Colorado Connecticut Delaware District of Columbia Florida Georgia Hawaii Idaho Illinois Indiana Iowa Kansas Kentucky Louisiana Maine Maryland Massachusetts Michigan Minnesota Mississippi Missouri Montana Nebraska Nevada New Hampshire New Jersey New Mexico New York North Carolina North Dakota Ohio Oklahoma Oregon Pennsylvania Puerto Rico Rhode Island South Carolina South Dakota Tennessee Texas Utah Vermont Virginia Virgin Islands Washington West Virginia Wisconsin Wyoming

    This certification is for the maximum additional credit allowable under Section 3302(b) of the Code, subject to the limitations of Section 3302(c) of the Code.

    Signed at Washington, D.C., on October 31, 2016. Thomas E. Perez
    [FR Doc. 2016-26691 Filed 11-3-16; 8:45 am] BILLING CODE 4510-30-P
    DEPARTMENT OF LABOR Office of the Secretary Agency Information Collection Activities; Submission for OMB Review; Comment Request; American Time Use Survey ACTION:

    Notice.

    SUMMARY:

    The Department of Labor (DOL) is submitting the Bureau of Labor Statistics (BLS) sponsored information collection request (ICR) titled, “American Time Use Survey,” to the Office of Management and Budget (OMB) for review and approval for continued use, without change, in accordance with the Paperwork Reduction Act of 1995 (PRA). Public comments on the ICR are invited.

    DATES:

    The OMB will consider all written comments that the agency receives on or before December 5, 2016.

    ADDRESSES:

    A copy of this ICR with applicable supporting documentation, including a description of the likely respondents, proposed frequency of response, and estimated total burden, may be obtained free of charge from the RegInfo.gov Web site at http://www.reginfo.gov/public/do/PRAViewICR?ref_nbr=201607-1220-003 (this link will only become active on the day following publication of this notice) or by contacting Michel Smyth by telephone at 202-693-4129, TTY 202-693-8064 (these are not toll-free numbers), or by email at [email protected].

    Submit comments about this request by mail or courier to the Office of Information and Regulatory Affairs, Attn: OMB Desk Officer for DOL-BLS, Office of Management and Budget, Room 10235, 725 17th Street NW., Washington, DC 20503; by Fax: 202-395-5806 (this is not a toll-free number); or by email: [email protected]. Commenters are encouraged, but not required, to send a courtesy copy of any comments by mail or courier to the U.S. Department of Labor-OASAM, Office of the Chief Information Officer, Attn: Departmental Information Compliance Management Program, Room N1301, 200 Constitution Avenue NW., Washington, DC 20210; or by email: [email protected].

    FOR FURTHER INFORMATION CONTACT:

    Michel Smyth by telephone at 202-693-4129, TTY 202-693-8064, (these are not toll-free numbers) or by email at [email protected].

    SUPPLEMENTARY INFORMATION:

    This ICR seeks to extend PRA authority for the American Time Use Survey (ATUS) information collection. The ATUS is the first Federally administered continuous national survey on time use in the U.S. The ATUS measures, for example, time spent with children, working, sleeping, or doing leisure activities. Several existing Federal surveys in the U.S. collect income and wage data for individuals and families, and analysts often use such measures of material prosperity as proxies for quality of life. Time-use data substantially augment these quality-of-life measures. The data also can be used in conjunction with wage data to evaluate the contribution of non-market work to national economies. This enables comparisons of production between nations that have different mixes of market and non-market activities. The BLS Authorizing Statute authorizes this information collection. See 29 U.S.C. 1.

    This information collection is subject to the PRA. A Federal agency generally cannot conduct or sponsor a collection of information, and the public is generally not required to respond to an information collection, unless it is approved by the OMB under the PRA and displays a currently valid OMB Control Number. In addition, notwithstanding any other provisions of law, no person shall generally be subject to penalty for failing to comply with a collection of information that does not display a valid Control Number. See 5 CFR 1320.5(a) and 1320.6. The DOL obtains OMB approval for this information collection under Control Number 1220-0175.

    OMB authorization for an ICR cannot be for more than three (3) years without renewal, and the current approval for this collection is scheduled to expire on December 31, 2016. The DOL seeks to extend PRA authorization for this information collection for three (3) more years, without any change to existing requirements. The DOL notes that existing information collection requirements submitted to the OMB receive a month-to-month extension while they undergo review. For additional substantive information about this ICR, see the related notice published in the Federal Register on July 26, 2016 (81 FR 48849).

    Interested parties are encouraged to send comments to the OMB, Office of Information and Regulatory Affairs at the address shown in the ADDRESSES section within thirty (30) days of publication of this notice in the Federal Register. In order to help ensure appropriate consideration, comments should mention OMB Control Number 1220-0175. The OMB is particularly interested in comments that:

    • Evaluate whether the proposed collection of information is necessary for the proper performance of the functions of the agency, including whether the information will have practical utility;

    • Evaluate the accuracy of the agency's estimate of the burden of the proposed collection of information, including the validity of the methodology and assumptions used;

    • Enhance the quality, utility, and clarity of the information to be collected; and

    • Minimize the burden of the collection of information on those who are to respond, including through the use of appropriate automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses.

    Agency: DOL-BLS.

    Title of Collection: American Time Use Survey.

    OMB Control Number: 1220-0175.

    Affected Public: Individuals or Households.

    Total Estimated Number of Respondents: 11,800.

    Total Estimated Number of Responses: 11,800.

    Total Estimated Annual Time Burden: 3,442 hours.

    Total Estimated Annual Other Costs Burden: $0.

    Authority:

    44 U.S.C. 3507(a)(1)(D).

    Dated: October 28, 2016. Michel Smyth, Departmental Clearance Officer.
    [FR Doc. 2016-26692 Filed 11-3-16; 8:45 am] BILLING CODE 4510-24-P
    DEPARTMENT OF LABOR Mine Safety and Health Administration [OMB Control No. 1219-0124] Proposed Extension of Information Collection; Health Standards for Diesel Particulate Matter Exposure (Underground Coal Mines) AGENCY:

    Mine Safety and Health Administration, Labor.

    ACTION:

    Request for public comments.

    SUMMARY:

    The Department of Labor, as part of its continuing effort to reduce paperwork and respondent burden, conducts a pre-clearance consultation program to provide the general public and Federal agencies with an opportunity to comment on proposed collections of information in accordance with the Paperwork Reduction Act of 1995. This program helps to assure that requested data can be provided in the desired format, reporting burden (time and financial resources) is minimized, collection instruments are clearly understood, and the impact of collection requirements on respondents can be properly assessed. Currently, the Mine Safety and Health Administration (MSHA) is soliciting comments on the information collection for Health Standards for Diesel Particulate Matter Exposure (Underground Coal Mines).

    DATES:

    All comments must be received on or before January 3, 2017.

    ADDRESSES:

    Comments concerning the information collection requirements of this notice may be sent by any of the methods listed below.

    Federal E-Rulemaking Portal: http://www.regulations.gov. Follow the on-line instructions for submitting comments for docket number MSHA-2016-0031.

    Regular Mail: Send comments to USDOL-MSHA, Office of Standards, Regulations, and Variances, 201 12th Street South, Suite 4E401, Arlington, VA 22202-5452.

    Hand Delivery: USDOL-Mine Safety and Health Administration, 201 12th Street South, Suite 4E401, Arlington, VA 22202-5452. Sign in at the receptionist's desk on the 4th floor via the East elevator.

    FOR FURTHER INFORMATION CONTACT:

    Sheila McConnell, Director, Office of Standards, Regulations, and Variances, MSHA, at [email protected] (email); 202-693-9440 (voice); or 202-693-9441 (facsimile).

    SUPPLEMENTARY INFORMATION:

    I. Background

    Section 103(h) of the Federal Mine Safety and Health Act of 1977 (Mine Act), 30 U.S.C. 813(h), authorizes the Mine Safety and Health Administration (MSHA) to collect information necessary to carry out its duty in protecting the safety and health of miners. Further, Section 101(a) of the Mine Act, 30 U.S.C. 811 authorizes the Secretary to develop, promulgate, and revise as may be appropriate, improved mandatory health or safety standards for the protection of life and prevention of injuries in coal or other mines.

    MSHA established standards and regulations for diesel-powered equipment in underground coal mines that provide additional important protection for coal miners who work on and around diesel-powered equipment. The standards were designed to reduce the risks to underground coal miners of serious health hazards that are associated with exposure to high concentrations of diesel particulate matter. The standards contain information collection requirements for underground coal mine operators in sections 72.510(a) & (b), and 72.520(a) & (b).

    Section 72.510(a) requires underground coal mine operators to provide annual training to all miners who may be exposed to diesel emissions. The training must include health risks associated with exposure to diesel particulate matter; methods used in the mine to control diesel particulate concentrations; identification of the personnel responsible for maintaining those controls; and actions miners must take to ensure controls operate as intended.

    Section 72.510(b) requires underground coal mine operators to keep a record of the training for one year.

    Section 72.520(a) and (b) requires underground coal mine operators to maintain an inventory of diesel-powered equipment units together with a list of information about any unit's emission control or filtration system. The list must be updated within 7 calendar days of any change.

    II. Desired Focus of Comments

    MSHA is soliciting comments concerning the proposed information collection related to Health Standards for Diesel Particulate Matter Exposure (Underground Coal Mines). MSHA is particularly interested in comments that:

    • Evaluate whether the collection of information is necessary for the proper performance of the functions of the agency, including whether the information has practical utility;

    • Evaluate the accuracy of MSHA's estimate of the burden of the collection of information, including the validity of the methodology and assumptions used;

    • Suggest methods to enhance the quality, utility, and clarity of the information to be collected; and

    • Minimize the burden of the collection of information on those who are to respond, including through the use of appropriate automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses.

    The information collection request will be available on http://www.regulations.gov. MSHA cautions the commenter against providing any information in the submission that should not be publicly disclosed. Full comments, including personal information provided, will be made available on www.regulations.gov and www.reginfo.gov.

    The public may also examine publicly available documents at USDOL-Mine Safety and Health Administration, 201 12th Street South, Suite 4E401, Arlington, VA 22202-5452. Sign in at the receptionist's desk on the 4th floor via the East elevator.

    Questions about the information collection requirements may be directed to the person listed in the FOR FURTHER INFORMATION CONTACT section of this notice.

    III. Current Actions

    This request for collection of information contains provisions for Health Standards for Diesel Particulate Matter Exposure (Underground Coal Mines). MSHA has updated the data with respect to the number of respondents, responses, burden hours, and burden costs supporting this information collection request.

    Type of Review: Extension of a currently approved collection.

    Agency: Mine Safety and Health Administration.

    OMB Number: 1219-0124.

    Affected Public: Business or other for-profit.

    Number of Respondents: 220.

    Frequency: On occasion.

    Number of Responses: 74,282.

    Annual Burden Hours: 936 hours.

    Annual Respondent or Recordkeeper Cost: $13.

    MSHA Forms: None.

    Comments submitted in response to this notice will be summarized and included in the request for Office of Management and Budget approval of the information collection request; they will also become a matter of public record.

    Sheila McConnell, Certifying Officer.
    [FR Doc. 2016-26733 Filed 11-3-16; 8:45 am] BILLING CODE 4510-43-P
    LEGAL SERVICES CORPORATION Notice of Intent To Award—Grant Awards for the Provision of Civil Legal Services to Eligible Low-Income Clients Beginning January 1, 2017 AGENCY:

    Legal Services Corporation.

    ACTION:

    Announcement of intention to make FY 2017 Grant Awards.

    SUMMARY:

    The Legal Services Corporation (LSC) hereby announces its intention to award grants to provide economical and effective delivery of high quality civil legal services to eligible low-income clients, beginning January 1, 2017.

    DATES:

    All comments and recommendations must be received on or before the close of business on December 5, 2016.

    ADDRESSES:

    Legal Services Corporation—Grants Awards, Legal Services Corporation; 3333 K Street NW., Third Floor, Washington, DC 20007.

    FOR FURTHER INFORMATION CONTACT:

    Reginald Haley, Office of Program Performance, at (202) 295-1545, or [email protected].

    SUPPLEMENTARY INFORMATION:

    Pursuant to LSC's announcement of funding availability on March 24, 2016, 81 FR 15754, and Grant Renewal applications due beginning June 1, 2016, LSC intends to award funds to provide civil legal services in the indicated service areas. Applicants for each service area are listed below. The amounts below are estimates based on the 2016 grant awards to each service area. The estimates incorporate the adjustments for the agricultural worker population as described at http://www.lsc.gov/ag-worker-data. The funding estimates may change based on the final FY2017 appropriation.

    LSC will post all updates and/or changes to this notice at http://www.grants.lsc.gov/grants-grantee-resources. Interested parties are asked to visit http://www.grants.lsc.gov/grants-grantee-resources regularly for updates on the LSC grants process.

    Name of applicant organization; State; Service area; Estimated annualized 2017 funding:
  • Alaska Legal Services Corporation AK AK-1 $741,073 Alaska Legal Services Corporation AK NAK-1 556,121 Legal Services Alabama AL AL-4 6,123,393 Legal Aid of Arkansas AR AR-6 1,469,531 Center for Arkansas Legal Services AR AR-7 2,134,386 American Samoa Legal Aid AS AS-1 216,951 Community Legal Services AZ MAZ 178,985 Community Legal Services AZ AZ-3 5,431,956 Southern Arizona Legal Aid AZ AZ-5 2,141,137 DNA-Peoples Legal Services AZ AZ-2 426,023 Southern Arizona Legal Aid AZ NAZ-6 655,456 DNA-Peoples Legal Services AZ NAZ-5 2,683,310 California Rural Legal Assistance CA MCA 2,617,000 California Indian Legal Services CA CA-1 20,117 Greater Bakersfield Legal Assistance CA CA-2 1,144,116 Central California Legal Services CA CA-26 3,228,962 Legal Aid Foundation of Los Angeles CA CA-29 6,155,682 Neighborhood Legal Services of Los Angeles County CA CA-30 4,383,963 Inland Counties Legal Services CA CA-12 5,277,785 Legal Services of Northern California CA CA-27 3,876,782 Legal Aid Society of San Diego CA CA-14 2,989,977 California Rural Legal Assistance CA CA-31 5,018,808 Bay Area Legal Aid CA CA-28 4,122,146 Legal Aid Society of Orange County CA CA-19 3,854,358 California Indian Legal Services CA NCA-1 908,493 Colorado Legal Services CO MCO 180,774 Colorado Legal Services CO CO-6 4,121,449 Colorado Legal Services CO NCO-1 98,754 Statewide Legal Services of Connecticut CT CT-1 2,519,312 Pine Tree Legal Assistance CT NCT-1 16,099 Neighborhood Legal Services Program of DC DC DC-1 754,782 Legal Services Corporation of Delaware DE DE-1 754,969 Legal Aid Bureau DE MDE 19,218 Florida Rural Legal Services FL MFL 730,538 Community Legal Services of Mid-Florida FL FL-15 4,640,897 Florida Rural Legal Services FL FL-17 3,902,016 Legal Services of Greater Miami FL FL-5 3,569,324 Legal Services of North Florida FL FL-13 1,435,752 Bay Area Legal Services FL FL-16 3,398,986 Three Rivers Legal Services FL FL-14 2,137,273 Coast to Coast Legal Aid of South Florida FL FL-18 2,089,796 Georgia Legal Services Program GA MGA 335,199 Atlanta Legal Aid Society GA GA-1 3,796,481 Georgia Legal Services Program GA GA-2 8,131,242 Guam Legal Services Corporation GU GU-1 244,499 Legal Aid Society of Hawaii HI HI-1 1,284,668 Legal Aid Society of Hawaii HI NHI-1 235,552 Iowa Legal Aid IA MIA 181,450 Iowa Legal Aid IA IA-3 2,327,206 Idaho Legal Aid Services ID MID 220,047 Idaho Legal Aid Services ID ID-1 1,403,078 Idaho Legal Aid Services ID NID-1 66,807 Legal Assistance Foundation IL MIL 252,971 Legal Assistance Foundation IL IL-6 5,866,002 Land of Lincoln Legal Assistance Foundation IL IL-3 2,547,340 Prairie State Legal Services IL IL-7 3,641,385 Indiana Legal Services IN MIN 150,120 Indiana Legal Services IN IN-5 6,494,476 Kansas Legal Services KS KS-1 2,610,245 Legal Aid of the Bluegrass KY KY-10 1,459,451 Legal Aid Society KY KY-2 1,271,594 Appalachian Research and Defense Fund of Kentucky KY KY-5 1,613,022 Kentucky Legal Aid KY KY-9 1,118,558 Acadiana Legal Service Corporation LA LA-10 1,474,467 Acadiana Legal Service Corporation LA LA-11 1,551,192 Legal Services of North Louisiana LA LA-11 1,551,192 Southeast Louisiana Legal Services Corporation LA LA-13 3,000,372 Pine Region Legal Aid LA LA-11 1,551,192 Volunteer Lawyers Project of the Boston Bar Assoc. 
MA MA-11 2,013,002 South Coastal Counties Legal Services MA MA-12 841,595 Northeast Legal Aid MA MA-4 803,653 Community Legal Aid MA MA-10 1,469,369 Legal Aid Bureau MD MD-1 3,951,576 Legal Aid Bureau MD MMD 71,248 Pine Tree Legal Assistance ME MMX-1 190,835 Pine Tree Legal Assistance ME ME-1 1,170,642 Pine Tree Legal Assistance ME NME-1 66,279 Michigan Advocacy Program MI MMI 467,389 Michigan Advocacy Program MI MI-12 1,512,391 Legal Services of Eastern Michigan MI MI-14 1,552,347 Lakeshore Legal Aid MI MI-13 4,196,162 Legal Services of Northern Michigan MI MI-9 785,785 Legal Aid of Western Michigan MI MI-15 2,186,083 Michigan Indian Legal Services MI NMI-1 169,276 Southern Minnesota Regional Legal Services MN MMN 242,661 Legal Aid Service of Northeastern Minnesota MN MN-1 441,546 Central Minnesota Legal Services MN MN-6 1,623,247 Legal Services of Northwest Minnesota Corporation MN MN-4 317,581 Southern Minnesota Regional Legal Services MN MN-5 1,544,668 Anishinabe Legal Services MN NMN-1 245,745 Legal Aid of Western Missouri MO MMO 138,747 Legal Aid of Western Missouri MO MO-3 1,931,134 Legal Services of Eastern Missouri MO MO-4 1,911,602 Mid-Missouri Legal Services Corporation MO MO-5 447,967 Legal Services of Southern Missouri MO MO-7 1,767,761 Micronesian Legal Services MP MP-1 1,226,169 North Mississippi Rural Legal Services MS MS-9 1,605,360 Mississippi Center for Legal Services MS MS-10 2,548,651 Mississippi Center for Legal Services MS NMS-1 85,478 Montana Legal Services Association MT MMT 80,800 Montana Legal Services Association MT MT-1 969,239 Montana Legal Services Association MT NMT-1 163,734 Legal Aid of North Carolina NC MNC 463,965 Legal Aid of North Carolina NC NC-5 10,917,178 Legal Aid of North Carolina NC NNC-1 224,422 Legal Services of North Dakota ND MND 118,864 Legal Services of North Dakota ND ND-3 442,219 Legal Services of North Dakota ND NND-3 276,997 Southern Minnesota Regional Legal Services ND MND 118,864 Legal Aid of Nebraska NE MNE 132,695 Legal Aid of Nebraska NE NE-4 1,417,656 Legal Aid of Nebraska NE NNE-1 33,990 Legal Advice & Referral Center NH NH-1 787,447 South Jersey Legal Services NJ MNJ 96,706 Legal Services of Northwest Jersey NJ NJ-15 403,334 South Jersey Legal Services NJ NJ-16 1,419,547 South Jersey Legal Services NJ NJ-12 814,019 Northeast New Jersey Legal Services Corporation NJ NJ-18 1,889,964 Essex-Newark Legal Services Project NJ NJ-8 875,601 Central Jersey Legal Services NJ NJ-17 1,136,456 New Mexico Legal Aid NM MNM 92,653 DNA-Peoples Legal Services NM NM-1 176,958 New Mexico Legal Aid NM NM-5 2,705,152 DNA-Peoples Legal Services NM NNM-2 23,363 New Mexico Legal Aid NM NNM-4 477,790 Nevada Legal Services NV NV-1 2,910,481 Nevada Legal Services NV NNV-1 136,737 Legal Aid Society of Mid-New York NY MNY 263,649 Legal Aid Society of Northeastern New York NY NY-21 1,273,393 Neighborhood Legal Services NY NY-24 1,221,550 Nassau/Suffolk Law Services Committee NY NY-7 1,320,389 Legal Services NYC NY NY-9 11,755,163 Legal Assistance of Western New York NY NY-23 1,665,332 Legal Aid Society of Mid-New York NY NY-22 1,640,207 Legal Services of the Hudson Valley NY NY-20 1,750,874 Legal Aid of Western Ohio OH MOH 176,957 Community Legal Aid Services OH OH-20 1,787,044 Legal Aid Society of Greater Cincinnati OH OH-18 1,626,720 The Legal Aid Society of Cleveland OH OH-21 2,224,913 Ohio State Legal Services OH OH-24 3,372,394 Legal Aid of Western Ohio OH OH-23 2,991,786 Legal Aid Services of Oklahoma OK MOK 101,305 Legal Aid Services of Oklahoma OK OK-3 
4,153,550 Oklahoma Indian Legal Services OK NOK-1 841,963 Legal Aid Services of Oregon OR MOR 507,357 Legal Aid Services of Oregon OR OR-6 3,888,067 Legal Aid Services of Oregon OR NOR-1 189,825 Philadelphia Legal Assistance Center PA MPA 173,957 Philadelphia Legal Assistance Center PA PA-1 2,650,840 Laurel Legal Services PA PA-5 591,589 MidPenn Legal Services PA PA-25 2,433,673 Neighborhood Legal Services Association PA PA-8 1,369,708 North Penn Legal Services PA PA-24 1,880,615 Southwestern Pennsylvania Legal Services PA PA-11 414,954 Northwestern Legal Services PA PA-26 651,714 Legal Aid of Southeastern Pennsylvania PA PA-23 1,306,338 Puerto Rico Legal Services PR MPR 175,940 Puerto Rico Legal Services PR PR-1 10,663,785 Community Law Office PR PR-2 239,716 Rhode Island Legal Services RI RI-1 989,001 South Carolina Legal Services SC MSC 165,865 South Carolina Legal Services SC SC-8 5,589,620 East River Legal Services SD SD-2 396,301 Dakota Plains Legal Services SD SD-4 400,598 Dakota Plains Legal Services SD NSD-1 960,128 Legal Aid of East Tennessee TN TN-9 2,508,380 Memphis Area Legal Services TN TN-4 1,559,629 Legal Aid Society of Middle TN and the Cumberlands TN TN-10 3,121,680 West Tennessee Legal Services TN TN-7 700,533 Texas RioGrande Legal Aid TX MSX-2 1,672,296 Legal Aid of NorthWest Texas TX TX-14 8,923,293 Lone Star Legal Aid TX TX-13 10,278,664 Texas RioGrande Legal Aid TX TX-15 10,565,783 Texas RioGrande Legal Aid TX NTX-1 32,183 Utah Legal Services UT MUT 73,289 Utah Legal Services UT UT-1 2,244,974 Utah Legal Services UT NUT-1 84,598 Central Virginia Legal Aid Society VA MVA 158,585 Legal Services of Northern Virginia VA VA-20 1,467,087 Southwest Virginia Legal Aid Society VA VA-15 711,526 Legal Aid Society of Eastern Virginia VA VA-16 1,291,796 Central Virginia Legal Aid Society VA VA-18 1,186,352 Virginia Legal Aid Society VA VA-17 895,898 Blue Ridge Legal Services VA VA-19 791,317 Legal Services of the Virgin Islands VI VI-1 161,119 Legal Services Law Line of Vermont VT VT-1 479,249 Northwest Justice Project WA MWA 667,471 Northwest Justice Project WA WA-1 5,563,807 Northwest Justice Project WA NWA-1 292,929 Legal Action of Wisconsin WI MWI 212,421 Legal Action of Wisconsin WI WI-5 3,904,788 Wisconsin Judicare WI WI-2 918,107 Wisconsin Judicare WI NWI-1 159,512 Legal Aid of West Virginia WV WV-5 2,235,497 Legal Aid of Wyoming WY WY-4 434,973 Legal Aid of Wyoming WY NWY-1 177,694

    These grants will be awarded under the authority conferred on LSC by section 1006(a)(1) of the Legal Services Corporation Act, 42 U.S.C. 2996e(a)(1). Awards will be made so that each service area is served, although no listed organization is guaranteed an award. Grants will become effective and grant funds will be distributed on or about January 1, 2017.

    This notice is issued pursuant to 42 U.S.C. 2996f(f). Comments and recommendations concerning potential grantees are invited, and should be delivered to LSC within 30 days from the date of publication of this notice.

    Dated: November 1, 2016. Katherine Ward, Executive Assistant to the General Counsel and Vice President for Legal Affairs.
    [FR Doc. 2016-26675 Filed 11-3-16; 8:45 am] BILLING CODE 7050-01-P
    NATIONAL ARCHIVES AND RECORDS ADMINISTRATION [NARA-2017-002] Privacy Act of 1974, as Amended; System of Records Notice AGENCY:

    National Archives and Records Administration (NARA).

    ACTION:

    Notice revising the Privacy Act system of records notice (SORN) for NARA 39.

    SUMMARY:

    The National Archives and Records Administration (NARA) proposes to revise its system of records on visitor services in its existing inventory of systems subject to the Privacy Act of 1974, as amended (“Privacy Act”). In this notice, NARA publishes the proposed revised NARA 39, Visitor Service System (VSS) Files (formerly Visitor Ticketing Application (VISTA) Files). NARA is revising SORN 39 to reflect a move to a cloud-hosted environment, which affects the system name, location information, and safeguards.

    DATES:

    This revised system of records, NARA 39, will become effective December 14, 2016 without further notice unless we receive comments by December 5, 2016 that cause us to revise it. NARA will publish a new notice if we must delay the effective date to review comments or make changes.

    ADDRESSES:

    You may submit comments, identified by “SORN NARA 39,” by one of the following methods:

    Federal eRulemaking Portal: http://www.regulations.gov. Follow the instructions for submitting comments.

    Email: [email protected]. Include SORN NARA 39 in the subject line of the message.

    Mail (for paper, disk, or CD-ROM submissions; include SORN NARA 39 on the submission): Regulations Comment Desk, Strategy and Performance Division (SP); Suite 4100; National Archives and Records Administration; 8601 Adelphi Road; College Park, MD 20740-6001.

    Hand delivery or courier: Deliver comments to the front desk at the address above.

    Instructions: All submissions must include NARA's name and SORN NARA 39. We may publish any comments we receive without changes, including any personal information you include.

    FOR FURTHER INFORMATION CONTACT:

    Kimberly Keravuori, External Policy Program Manager, by email at [email protected], or by telephone at 301-837-3151.

    SUPPLEMENTARY INFORMATION:

    The notice for this system of records states the record system's name and location, authority for and manner of operation, categories of individuals it covers, types of records it contains, sources of information in the records, and the “routine uses” from Appendix A for which the agency may use the information. Appendix B includes the business address of the NARA official you may contact to find out how you may access and correct records pertaining to yourself. You may find Appendix A and Appendix B on NARA's Web site at https://www.archives.gov/privacy/inventory.html.

    The Privacy Act provides certain safeguards for an individual against an invasion of personal privacy. It requires Federal agencies that disseminate any record of personally identifiable information to do so in a manner that assures the action is for a necessary and lawful purpose, the information is current and accurate for its intended use, and the agency provides adequate safeguards to prevent misuse of such information. NARA intends to follow these principles when transferring information to another agency or individual as a “routine use,” including assuring that the information is relevant for the purposes for which it is transferred.

    David S. Ferriero, Archivist of the United States.

    NARA 39

    SYSTEM NAME:

    Visitor Service System (VSS) Files.

    SYSTEM LOCATION:

    The system data is in a cloud-hosted environment, located in the continental United States.

    CATEGORIES OF INDIVIDUALS COVERED BY THE SYSTEM:

    Individuals covered by this system include people who purchase tickets to Presidential libraries, serve as points of contact for groups visiting the Presidential libraries, and are invited guests to special events at the libraries.

    CATEGORIES OF RECORDS IN THE SYSTEM:

    VSS files may include the following information on an individual: Name, mailing address, telephone number, email address, and credit card information.

    AUTHORITY FOR MAINTENANCE OF THE SYSTEM:

    44 U.S.C. 2108, 2111 note, 2112, and 2203(f)(1).

    ROUTINE USES OF RECORDS MAINTAINED IN THE SYSTEM, INCLUDING CATEGORIES OF USERS AND THE PURPOSES OF SUCH USES:

    NARA maintains the VSS files on individuals to: Store information on groups that interact with the library; conduct outreach with the points of contact in these groups to maintain visitor levels and improve service; study visitor data over time; communicate confirmation letters to visitors, and store information on those attending special events. Libraries may disclose the information to support their Presidential library foundations, and where a library is co-located with a National Park, to the National Park Service. The routine use statements A, C, E, F, G, and H, described in Appendix A, also apply to this system of records.

    POLICIES AND PRACTICES FOR STORING, RETRIEVING, ACCESSING, RETAINING, AND DISPOSING OF RECORDS IN THE SYSTEM:

    STORAGE:

    Electronic records.

    RETRIEVABILITY:

    Staff may retrieve information in the records by the individual's name or any of the other categories of information in the database.

    SAFEGUARDS:

    Staff access the electronic records via password-protected workstations located in attended offices or through a secure, remote-access network. After business hours, buildings have security guards and/or secured doors, and electronic surveillance equipment monitors all entrances.

    RETENTION AND DISPOSAL:

    NARA retains and disposes of the records in accordance with the NARA Records Control Schedule and the General Records Schedules approved by the National Archives and Records Administration.

    SYSTEM MANAGER(S) AND ADDRESS:

    The system manager is the Director, Office of Presidential Libraries. The business address for the system manager is listed in Appendix B.

    NOTIFICATION PROCEDURE:

    People inquiring about their records should notify the NARA Privacy Act Officer at the address listed in Appendix B.

    RECORD ACCESS PROCEDURES:

    People who wish to access their records should submit a request in writing to the NARA Privacy Act Officer at the address listed in Appendix B.

    CONTESTING RECORD PROCEDURES:

    NARA's rules for contesting the contents of a person's records and appealing initial determinations are in 36 CFR part 1202.

    RECORD SOURCE CATEGORIES:

    NARA obtains information in the VSS files from visitors and from NARA employees who maintain the files.

    [FR Doc. 2016-26696 Filed 11-3-16; 8:45 am] BILLING CODE 7515-01-P
    NATIONAL SCIENCE FOUNDATION Business and Operations Advisory Committee; Notice of Meeting

    In accordance with the Federal Advisory Committee Act (Pub. L. 92-463, as amended), the National Science Foundation (NSF) announces the following meeting:

    Name: Business and Operations Advisory Committee (9556)

    Date/Time: November 29, 2016; 1:00 p.m. to 5:30 p.m. (EST)

    November 30, 2016; 8:00 a.m. to 12:00 p.m. (EST)

    Place: National Science Foundation, 4201 Wilson Boulevard, Arlington, Virginia 22230; Stafford I, Room 1235.

    Type of Meeting: Open.

    Contact Person: Joan Miller, National Science Foundation, 4201 Wilson Boulevard, Arlington, VA 22230; (703) 292-8200.

    Purpose of Meeting: To provide advice concerning issues related to the oversight, integrity, development and enhancement of NSF's business operations.

    Agenda Tuesday, November 29, 2016; 1:00 p.m.-5:30 p.m.

    Welcome/Introductions; BFA/OIRM/OLPA Updates; NSF Strategic Plan; BOAC and Operations with its Subcommittees; Update from Subcommittee on National Academy of Public Administration (NAPA); Application of Lessons Learned from Other Lessons-Learned Programs.

    Wednesday, November 30, 2016; 8:00 a.m.-12:00 p.m.

    Results from the 2016 Federal Employees Viewpoint Survey (FEVS); Discussion with Director and Chief Operating Officer; Update: Committee on Equal Opportunities in Science and Engineering (CEOSE); Meeting Wrap-Up.

    Dated: November 1, 2016. Crystal Robinson, Committee Management Officer.
    [FR Doc. 2016-26666 Filed 11-3-16; 8:45 am] BILLING CODE 7555-01-P
    NATIONAL SCIENCE FOUNDATION Notice of Permit Modification Received Under the Antarctic Conservation Act of 1978 AGENCY:

    National Science Foundation.

    ACTION:

    Notice of permit modification request received and permit issued under the Antarctic Conservation Act of 1978.

    SUMMARY:

    The National Science Foundation (NSF) is required to publish a notice of requests to modify permits issued to conduct activities regulated under the Antarctic Conservation Act of 1978, as well as a notice of permits issued under the Act. NSF has published regulations under the Antarctic Conservation Act at Title 45 Part 671 of the Code of Federal Regulations. This is the required notice of a requested permit modification and of the permit issued.

    FOR FURTHER INFORMATION CONTACT:

    Nature McGinn, ACA Permit Officer, Division of Polar Programs, Rm. 755, National Science Foundation, 4201 Wilson Boulevard, Arlington, VA 22230. Or by email: [email protected]

    SUPPLEMENTARY INFORMATION:

    The Foundation issued a permit (ACA 2016-008) to David Rootes, Environmental Manager, Antarctic Logistics and Expeditions, LLC, on October 23, 2015. The issued permit allows the applicant to operate a remote camp at Union Glacier, Antarctica, and provide logistical support services for scientific and other expeditions, film crews, and tourists. These activities include aircraft support, cache positioning, camp and field support, resupply, search and rescue, medevac, medical support and logistic support for some National Operators.

    Now the applicant proposes a permit modification to continue permitted activities, including minimization, mitigation, and monitoring of waste, for the 2016-2017 Antarctic season. The Environmental Officer has reviewed the modification request and has determined that the amendment is not a material change to the permit, and it will have less than a minor or transitory impact.

    DATES: October 23, 2015 to February 28, 2020.

    The permit modification was issued on October 31, 2016.

    Nadene G. Kennedy, Polar Coordination Specialist, Division of Polar Programs.
    [FR Doc. 2016-26622 Filed 11-3-16; 8:45 am] BILLING CODE 7555-01-P
    NATIONAL SCIENCE FOUNDATION Advisory Committee for Education and Human Resources; Notice of Meeting

    In accordance with the Federal Advisory Committee Act (Pub. L. 92-463, as amended), the National Science Foundation (NSF) announces the following meeting:

    Name: Advisory Committee for Education and Human Resources (#1119).

    Date/Time: November 30, 2016; 8:00 a.m.-5:00 p.m.

    December 1, 2016; 8:00 a.m.-1:00 p.m.

    Place: National Science Foundation, 4201 Wilson Boulevard, Room 375, Arlington, VA 22230.

    Operator-assisted teleconference is available for this meeting. Call 888-658-9757 with password EHRAC and you will be connected to the audio portion of the meeting.

    To attend the meeting in person, all visitors must contact the Directorate for Education and Human Resources ([email protected]) at least 24 hours prior to the teleconference to arrange for a visitor's badge. All visitors must report to the NSF visitor desk located in the lobby at the 9th and N. Stuart Streets entrance at 4201 Wilson Blvd. on the day of the teleconference to receive a visitor's badge.

    Meeting materials and minutes will also be available on the EHR Advisory Committee Web site at http://www.nsf.gov/ehr/advisory.jsp.

    Type of Meeting: Open, Teleconference.

    Contact Person: Keaven M. Stevenson, National Science Foundation, 4201 Wilson Boulevard, Room 805, Arlington, VA 22230; (703) 292-8600; [email protected].

    Purpose of Meeting: To provide advice with respect to the Foundation's science, technology, engineering, and mathematics (STEM) education and human resources programming.

    Agenda Wednesday, November 30, 2016; 8:00 a.m.-5:00 p.m.

    Remarks by the Committee Chair and NSF Assistant Director for Education and Human Resources (EHR); Discussion of Selected NSF Big Ideas Related to EHR Goals; Committee of Visitor Report on Education Core Research; Discussion with France Córdova, NSF Director.

    Thursday, December 1, 2016; 8:00 a.m.-1:00 p.m.

    Discussion of INCLUDES and Selected NSF Big Ideas; Committee of Visitor Reports; Other Business; Adjournment.

    Dated: November 1, 2016. Crystal Robinson, Committee Management Officer.
    [FR Doc. 2016-26664 Filed 11-3-16; 8:45 am] BILLING CODE 7555-01-P
    NUCLEAR REGULATORY COMMISSION [NRC-2016-0001] Sunshine Act Meeting Notice DATES:

    November 7, 14, 21, 28, December 5, 12, 2016.

    PLACE:

    Commissioners' Conference Room, 11555 Rockville Pike, Rockville, Maryland.

    STATUS:

    Public and Closed.

    Week of November 7, 2016

    There are no meetings scheduled for the week of November 7, 2016.

    Week of November 14, 2016—Tentative

    There are no meetings scheduled for the week of November 14, 2016.

    Week of November 21, 2016—Tentative

    There are no meetings scheduled for the week of November 21, 2016.

    Week of November 28, 2016—Tentative

    Tuesday, November 29, 2016

    9:00 A.M. Briefing on Uranium Recovery (Public Meeting) (Contact: Samantha Crane: 301-415-6380)

    This meeting will be webcast live at the Web address—http://www.nrc.gov/.

    Week of December 5, 2016—Tentative

    There are no meetings scheduled for the week of December 5, 2016.

    Week of December 12, 2016—Tentative

    Thursday, December 15, 2016

    9:30 A.M. Briefing on Equal Employment Opportunity, Affirmative Employment, and Small Business (Public Meeting) (Contact: Larniece Moore McKoy: 301-415-1942)

    This meeting will be webcast live at the Web address—http://www.nrc.gov/.

    The schedule for Commission meetings is subject to change on short notice. For more information or to verify the status of meetings, contact Glenn Ellmers at 301-415-0442 or via email at [email protected].

    The NRC Commission Meeting Schedule can be found on the Internet at: http://www.nrc.gov/public-involve/public-meetings/schedule.html.

    The NRC provides reasonable accommodation to individuals with disabilities where appropriate. If you need a reasonable accommodation to participate in these public meetings, or need this meeting notice or the transcript or other information from the public meetings in another format (e.g. braille, large print), please notify Kimberly Meyer, NRC Disability Program Manager, at 301-287-0739, by videophone at 240-428-3217, or by email at [email protected]. Determinations on requests for reasonable accommodation will be made on a case-by-case basis.

    Members of the public may request to receive this information electronically. If you would like to be added to the distribution, please contact the Nuclear Regulatory Commission, Office of the Secretary, Washington, DC 20555 (301-415-1969), or email [email protected] or [email protected].

    Dated: November 2, 2016. Glenn Ellmers, Policy Coordinator, Office of the Secretary.
    [FR Doc. 2016-26827 Filed 11-2-16; 4:15 pm] BILLING CODE 7590-01-P
    SECURITIES AND EXCHANGE COMMISSION [Release No. 34-79200] Order Granting a Limited Exemption From Rule 102 of Regulation M Concerning NASDAQ Stock Market LLC's New Product Support Incentives Pursuant to Regulation M Rule 102(e) October 31, 2016.

    On September 23, 2016, NASDAQ Stock Market LLC (“Exchange” or “NASDAQ”) filed with the Securities and Exchange Commission (“Commission”) a proposal to amend NASDAQ Rule 7014(f) to, among other things, change its Lead Market Maker Program (now renamed the “Designated Liquidity Provider (“DLP”) Program”) to include a new rebate, the New Product Support Incentive (“NPSI”).1 Under the NPSI, the Exchange will pay a higher rebate to market makers that act as DLPs in newly launched exchange-traded products (“ETPs”) that meet certain conditions.2 The proposal became effective upon filing pursuant to Section 19(b)(3)(A)(ii) of the Securities Exchange Act of 1934, as amended (“Exchange Act”).3

    1Notice of Filing and Immediate Effectiveness of Proposed Rule Change to Amend Nasdaq's Fees at Rule 7014(f), Exchange Act Release No. 78912 (Sep. 23, 2016); 81 FR 67019 (Sep. 29, 2016) (“NPSI Release”).

    2 ETPs eligible to be qualified securities for the DLP Program are exchange-traded funds or index-linked securities listed on NASDAQ pursuant to NASDAQ Rules 5705 (Exchange Traded Funds: Portfolio Depository Receipts and Index Fund Shares), 5710 (Securities Linked to the Performance of Indexes and Commodities, Including Currencies), 5720 (Trust Issued Receipts), 5735 (Managed Fund Shares), or 5745 (NextShares). In addition, the ETPs must have at least one DLP. Further, to qualify for the NPSI, the DLP must be at the national best bid or offer at least 20% of the time on average in the assigned ETP, the ETP must have a three-month ADV of less than 500,000, and the ETP must be less than 36 months old. See NASDAQ Rule 7014(f)(1) and (4). Collectively, securities for which rebates under the NPSI are made are referred to in this order as “NPSI Securities.”

    3 15 U.S.C. 78s(b)(3)(A)(ii). See also NPSI Release.

    Specifically, the Exchange will pay an NPSI rebate to a DLP of $0.0070 per executed share in the first year from the ETP's launch, on a decreasing scale until the NPSI is phased out as the ETP ages, terminating three years from the ETP's launch.4 In contrast, the largest rebate that a DLP can collect under the DLP Program's “Basic Rebate” for a non-NPSI ETP is $0.0047 per executed share.5 NASDAQ represents that the NPSI is designed for the purpose of incentivizing DLPs to support trading in newly launched ETPs.6

    4 NASDAQ Rule 7014(f)(5)(B). The rebate decreases to $0.0065 per executed share in the second year and $0.0055 per executed share in the third. After the third year, no rebate is paid under the NPSI. These rebates collectively are referred to in this order as “NPSI Rebates.”

    5See NASDAQ Rule 7014(f)(4)-(5)(A). In addition to the Basic Rebate and NPSI, a DLP in qualifying ETPs can also receive the “Additional Tape C ETP Incentive,” which provides $0.0003 to $0.0005 per executed share, depending on how many ETPs the DLP is assigned to and whether other conditions are met. See NASDAQ Rule 7014(f)(5)(C).

    6 NPSI Release.
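    To make the size of the incentive concrete, the per-share figures described in the order and its footnotes can be compared directly. The Python sketch below is only an illustration of that arithmetic; the executed-share volume it uses is a hypothetical assumption, not a figure drawn from the order.

# Illustrative comparison of the NPSI Rebate schedule with the maximum Basic Rebate.
npsi_rebate_by_year = {1: 0.0070, 2: 0.0065, 3: 0.0055}  # dollars per executed share, by year since launch
basic_rebate = 0.0047                                    # maximum Basic Rebate, dollars per executed share
executed_shares = 1_000_000                              # hypothetical executed-share volume (assumption)
for year, rate in npsi_rebate_by_year.items():
    npsi_total = rate * executed_shares
    basic_total = basic_rebate * executed_shares
    print(f"Year {year}: NPSI ${npsi_total:,.0f} vs. Basic ${basic_total:,.0f} "
          f"(difference ${npsi_total - basic_total:,.0f})")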

    With the implementation of the NPSI, issuers of newly launched ETPs that choose to list on NASDAQ are automatically enrolled in the NPSI and would indirectly benefit from this liquidity support. The NPSI is intended to incentivize market makers to engage in more quotation and trading activity than they might otherwise undertake absent the payments, in order to help facilitate the distribution of newly launched ETPs. As such, the Commission believes that participating in the NPSI could constitute an indirect attempt by the issuer to induce a bid for or purchase of a covered security during a restricted period, potentially in violation of Rule 102 of Regulation M.7 NASDAQ represents that the NPSI may incentivize DLPs to support trading in newly launched ETPs.8

    7 17 CFR 242.102. The Commission notes in this regard the focus of the NPSI on newly launched ETPs. Cf. Order Instituting Proceedings to Determine Whether to Approve or Disapprove Proposed Rule Changes Relating to Market Maker Incentive Programs for Certain Exchange-Traded Products, Exchange Act Release No. 67411 (Jul. 11, 2012), 77 FR 42052 (Jul. 17, 2012) (regarding the similar NASDAQ Market Quality Program (“MQP”), stating that “[t]he Commission believes that issuer payments made under the SRO Proposals would constitute an indirect attempt by the issuer of a covered security to induce a purchase or bid in a covered security during a restricted period in violation of Rule 102” and noting that “under the NASDAQ Proposal, the issuer payments would `be used for the purpose of incentivizing one or more Market Makers in the MQP Security,' which could induce bids or purchases for the issuer's security during a restricted period”).

    8 NPSI Release.

    The Commission has provided limited, conditional exemptions from Rule 102 for issuers to participate in a number of similar programs, such as the NASDAQ MQP, which also involved an indirect attempt by the issuer to induce a bid for or a purchase of a covered security during a restricted period.9 Like the NPSI, these programs are designed to incentivize market makers to make markets in specific securities. The Commission's exemptions for these programs are intended to ensure that investors purchasing ETPs that are being quoted or traded as a result of incentive payments are notified in advance of the potential consequences of such payments on the prices and liquidity of such ETPs. The Commission believes that it is appropriate to exempt issuers from Rule 102 to permit participation in the NPSI with similar disclosure to investors.

    9 See Order Granting a Limited Exemption from Rule 102 of Regulation M Concerning the NASDAQ Stock Market LLC Market Quality Program Pilot Pursuant to Regulation M Rule 102(e), Exchange Act Rel. No. 69196 (Mar. 20, 2013), 78 FR 18410 (Mar. 26, 2013); Order Granting a Limited Exemption from Rule 102 of Regulation M Concerning the NYSE Arca, Inc.'s Exchange Traded Product Incentive Program Pilot Pursuant to Regulation M Rule 102(e), Exchange Act Rel. No. 69707 (Jun. 6, 2013), 78 FR 35330 (Jun. 12, 2013); Order Granting a Limited Exemption from Rule 102 of Regulation M Concerning the NYSE Arca, Inc.'s Crowd Participant Program Pilot, Exchange Act Rel. No. 71805 (Mar. 26, 2014), 79 FR 18365 (Apr. 1, 2014); and Order Granting a Limited Exemption from Rule 102 of Regulation M Concerning the BATS Exchange, Inc.'s Pilot Supplemental Competitive Liquidity Provider Program, Exchange Act Rel. No. 72693 (Jul. 28, 2014), 79 FR 44875 (Aug. 1, 2014).

    The Commission believes that potential investors in NPSI Securities should be provided with sufficient information regarding the potential impact of the NPSI on the price and liquidity of the ETPs, particularly given the temporary and limited nature of each ETP's enrollment in the program. Accordingly, the Commission is granting a limited exemption from Rule 102 of Regulation M solely to permit issuers to participate indirectly in the NPSI Rebates, subject to certain conditions described below.

    Rule 102 of Regulation M

    Rule 102 of Regulation M prohibits issuers, selling security holders, or any affiliated purchaser of such persons, directly or indirectly, from bidding for, purchasing, or attempting to induce any person to bid for or purchase a covered security 10 during the applicable restricted period in connection with a distribution of securities effected by or on behalf of an issuer or selling security holder, except as specifically permitted in the rule.11 As mentioned above, the Commission believes that issuers participating in the NPSI could constitute an indirect attempt to induce a bid for or purchase of a covered security during the applicable restricted period. Accordingly, absent exemptive relief, issuers of NPSI Securities (“NPSI Issuers”) that list on NASDAQ while the NPSI is in effect may violate Rule 102.

    10 Covered security is defined as any security that is the subject of a distribution, or any reference security. 17 CFR 242.100(b).

    11 17 CFR 242.102(a).

    On the basis of the conditions set out below, which in general are designed to help inform potential investors of NPSI Securities about the potential impact of the NPSI, the Commission finds that it is appropriate in the public interest, and is consistent with the protection of investors, to grant a limited exemption from Rule 102 of Regulation M solely to permit NPSI Issuers to list NPSI Securities on NASDAQ while the NPSI is in effect and, thus, to participate indirectly in the payment of the NPSI Rebates to DLPs.12

    12 Rule 102(e) allows the Commission to grant an exemption from the provision of Rule 102, either unconditionally or on specified terms and conditions, to any transaction or class of transactions, or to any security or class of securities.

    This limited exemption is conditioned on the NPSI Issuer, or sponsor if applicable, making specific disclosures, as set forth below. The disclosures are designed to alert potential investors that the trading market for NPSI Securities may be affected by these payments. Specifically, these disclosures are designed to inform potential investors about the potential impact of the NPSI on the natural market forces of supply and demand prior to making an investment decision in these newly launched securities products. These disclosures are expected to promote greater investor protection by helping to ensure that investors are adequately informed as to this potential impact. We also note that, to the extent that information about the NPSI is material, disclosure of this kind may already be required by the federal securities laws.

    Conclusion

    It is therefore ordered, pursuant to Rule 102(e) of Regulation M, that NPSI Issuers are hereby exempted from Rule 102 of Regulation M solely to permit NPSI Issuers to participate in the NPSI as set forth in NASDAQ Rule 7014(f), subject to the condition that the NPSI Issuer (or the sponsor, if applicable) shall make the following disclosures in a press release, as well as prominently and continuously on its Web site, for each specific ETP that it intends to list, or has listed, on NASDAQ:

    (1) At the beginning of the restricted period, as defined in Rule 100 of Regulation M,13 for the NPSI Security, the following disclosure shall be continuously provided until the disclosure in (2) below is required: “[Specific ETP name] intends to list on NASDAQ on or around [anticipated date]. Once listed, [Specific ETP] is automatically eligible for NASDAQ's New Product Support Incentives Rebate (“NPSI Rebate”), which is a payment made to certain market makers depending on how actively they quote and trade [Specific ETP]. Market makers quoting and trading [Specific ETP] on NASDAQ will receive such payments for up to three years from the launch date for [Specific ETP] if they meet the requirements for such payments.”;

    13 17 CFR 242.100(b).

    (2) Immediately after launch, or immediately at the beginning of the period in which a market maker's trading activity can qualify for an NPSI Rebate in an NPSI Security, as applicable, the following disclosure shall be continuously maintained and updated until termination of the NPSI Rebate, and shall, as necessary, be supplemented with the disclosure in (3) below: “The [Specific ETP name] is listed on NASDAQ. As such, it is enrolled in NASDAQ's New Product Support Incentives Rebate (“NPSI Rebate”), which is a payment made to certain market makers depending on how actively they quote and trade [Specific ETP]. The [Specific ETP] has participated in the NPSI Rebate since [date], and will no longer be eligible to participate in the program on [date], which is three years from the launch date (unless the program is terminated or modified before then or if [Specific ETP] becomes too liquid to participate in the NPSI before then). Certain market makers quoting and trading [Specific ETP] on NASDAQ will be eligible to receive NPSI Rebates until that date, unless, again, the program is terminated or modified before then or if [Specific ETP] becomes too liquid to participate in the NPSI before then. The payment of the NPSI Rebates is intended to help provide liquidity support for newly launched exchange-traded products by generating more quotes and trading than might otherwise exist absent these payments. Investors should be aware that when these payments cease, there may be an adverse impact on the price and liquidity of [Specific ETP], which could adversely impact a purchaser's subsequent sale of the security.”; and

    (3) No less than 30 days before the expected termination date, or as soon as practicable after the NPSI Issuer becomes aware or should become aware that the NPSI Security will no longer be eligible to participate in the NPSI and before the end of such eligibility, the following disclosure shall be added to the disclosure required in (2) above: “UPDATE: [Specific ETP] is expected to no longer qualify for the NPSI rebates on [or around] [date]. This may impact the price or liquidity of [Specific ETP], which could adversely impact a purchaser's subsequent sale of the security.”

    This exemptive relief shall terminate in the event of any material change to the NPSI, including a change to the types of securities permitted to participate in the program or to the terms or amount of the payments made pursuant to the NPSI.14 Further, this exemptive relief is subject to modification or revocation at any time the Commission determines that such action is necessary or appropriate in furtherance of the purposes of the Exchange Act. This exemptive relief is limited solely to the issuer's indirect participation in the payment of the NPSI Rebates as set forth in NASDAQ Rule 7014(f)(5)(B) for an NPSI Security, and does not extend to any other activities of the issuer, any other security of the issuer or sponsor, or any other issuers.15 In addition, persons relying on this exemption are directed to the anti-fraud and anti-manipulation provisions of the Exchange Act, particularly Sections 9(a) and 10(b), and Rule 10b-5 thereunder. Responsibility for compliance with these and any other applicable provisions of the federal securities laws must rest with the persons relying on this exemption. This order does not represent Commission views with respect to any other question that the proposed activities may raise, including, but not limited to, the adequacy of the disclosure required by federal securities laws and rules and the applicability of other federal or state laws and rules to the proposed activities.

    14 Accordingly, we expect NASDAQ to contact staff in the Office of Trading Practices in the Division of Trading and Markets before making any material change to the NPSI.

    15 Other activities, such as ETF redemptions, are not covered by this exemptive relief.

    For the Commission, by the Division of Trading and Markets, pursuant to delegated authority.16

    16 17 CFR 200.30-3(a)(6).

    Brent J. Fields, Secretary.
    [FR Doc. 2016-26646 Filed 11-3-16; 8:45 am] BILLING CODE 8011-01-P
    SECURITIES AND EXCHANGE COMMISSION
    [Release No. 34-79201; File No. SR-NYSEArca-2016-120]
    Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating to the Listing and Trading of Shares of the ForceShares Daily 4X US Market Futures Long Fund and ForceShares Daily 4X US Market Futures Short Fund Under Commentary .02 to NYSE Arca Equities Rule 8.200
    October 31, 2016.

    Pursuant to Section 19(b)(1) 1 of the Securities Exchange Act of 1934 (the “Act”) 2 and Rule 19b-4 thereunder,3 notice is hereby given that, on October 17, 2016, NYSE Arca, Inc. (the “Exchange” or “NYSE Arca”) filed with the Securities and Exchange Commission (the “Commission”) the proposed rule change as described in Items I and II below, which Items have been prepared by the self-regulatory organization. The Commission is publishing this notice to solicit comments on the proposed rule change from interested persons.

    1 15 U.S.C. 78s(b)(1).

    2 15 U.S.C. 78a.

    3 17 CFR 240.19b-4.

    I. Self-Regulatory Organization's Statement of the Terms of Substance of the Proposed Rule Change

    The Exchange proposes to list and trade shares of the following under Commentary .02 to NYSE Arca Equities Rule 8.200 (“Trust Issued Receipts”): ForceShares Daily 4X US Market Futures Long Fund and ForceShares Daily 4X US Market Futures Short Fund. The proposed rule change is available on the Exchange's Web site at www.nyse.com, at the principal office of the Exchange, and at the Commission's Public Reference Room.

    II. Self-Regulatory Organization's Statement of the Purpose of, and Statutory Basis for, the Proposed Rule Change

    In its filing with the Commission, the self-regulatory organization included statements concerning the purpose of, and basis for, the proposed rule change and discussed any comments it received on the proposed rule change. The text of those statements may be examined at the places specified in Item IV below. The Exchange has prepared summaries, set forth in sections A, B, and C below, of the most significant parts of such statements.

    A. Self-Regulatory Organization's Statement of the Purpose of, and the Statutory Basis for, the Proposed Rule Change
    1. Purpose

    The Exchange proposes to list and trade shares (“Shares”) of the following under Commentary .02 to NYSE Arca Equities Rule 8.200, which governs the listing and trading of Trust Issued Receipts (“TIRs”): 4 ForceShares Daily 4X US Market Futures Long Fund (“Fund” or “Long Fund”) and ForceShares Daily 4X US Market Futures Short Fund (“Fund” or “Short Fund” and, together with the Long Fund, the “Funds”).5

    4 Commentary .02 to NYSE Arca Equities Rule 8.200 applies to TIRs that invest in “Financial Instruments.” The term “Financial Instruments,” as defined in Commentary .02(b)(4) to NYSE Arca Equities Rule 8.200, means any combination of investments, including cash; securities; options on securities and indices; futures contracts; options on futures contracts; forward contracts; equity caps, collars and floors; and swap agreements.

    5 On July 27, 2015, the Trust submitted to the Commission its draft registration statement on Form S-1 under the Securities Act of 1933 (15 U.S.C. 77a) (“Securities Act”). The Jumpstart Our Business Startups Act, enacted on April 5, 2012, added Section 6(e) to the Securities Act. Section 6(e) of the Securities Act provides that an “emerging growth company” may confidentially submit to the Commission a draft registration statement for confidential, non-public review by the Commission staff prior to public filing, provided that the initial confidential submission and all amendments thereto shall be publicly filed not later than 21 days before the date on which the issuer conducts a road show, as such term is defined in Securities Act Rule 433(h)(4). An emerging growth company is defined in Section 2(a)(19) of the Securities Act as an issuer with less than $1,000,000,000 total annual gross revenues during its most recently completed fiscal year. The Funds meet the definition of an emerging growth company and consequently have filed their Form S-1 registration statement on a confidential basis with the Commission.

    Each of the Funds is a commodity pool that is a series of the ForceShares Trust (“Trust”), a Delaware statutory trust. The Funds' sponsor is ForceShares LLC (the “Sponsor”). ALPS Distributors, Inc. is the marketing agent for the Funds' Shares (“Marketing Agent”). U.S. Bank National Association is the Funds' custodian (“Custodian”), which, in such capacity, holds the Funds' “Cash Equivalents” (as described below) and/or cash pursuant to a custodial agreement. The Custodian is also the registrar and transfer agent for the Funds' Shares.

    The Long Fund's primary investment objective is to seek daily investment results, before fees and expenses, that correspond to approximately four times (400%) the daily performance, and the Short Fund's primary investment objective is to seek daily investment results, before fees and expenses, that correspond to approximately four times the inverse (−400%) of the daily performance, of the closing settlement price 6 for lead month (i.e., the “near month” or next-to-expire) Standard & Poor's 500 Stock Price Index Futures contracts (“Big S&P Contracts”) that are traded on the Chicago Mercantile Exchange (“CME”).7 Except as discussed below, this closing settlement price is referred to herein as the “Benchmark”. The Big S&P Contracts are referred to herein as the “Benchmark Component Futures Contracts”.8 The Funds do not seek to achieve their respective stated primary investment objectives over a period of time greater than a single day.

    6 The CME currently calculates the closing settlement price as the volume-weighted average price of all trades executed in the applicable Big S&P Contract on CME Globex in the last 30 seconds of open outcry trading (typically from 4:14:30 p.m. E.T. to 4:15:00 p.m. E.T.).

    7 Big S&P Contracts are traded on the CME in units of $250 multiplied by the value of the S&P 500 Index.

    8 The Funds' Benchmark is intended to track movements in the closing settlement price of lead month Big S&P Contracts. Big S&P Contracts are based on the value of the S&P 500 Index, a measure of large-cap U.S. stock market performance. The S&P 500 Index is a float-adjusted, market capitalization-weighted index of 500 U.S. operating companies and real estate investment trusts selected through a process that factors in criteria such as liquidity, price, market capitalization and financial viability.

    The Sponsor employs a “neutral” investment strategy intended to track the changes in the Benchmark regardless of whether the Benchmark goes up or goes down. Each Fund's “neutral” investment strategy is designed to permit investors generally to purchase and sell a Fund's Shares with the objective of gaining leveraged exposure to Big S&P Contracts and, therefore, the S&P 500® (“S&P 500 Index”), in a cost-effective manner.

    Each Fund seeks to achieve its primary investment objective under normal market conditions 9 primarily by investing in Big S&P Contracts such that daily changes in a Fund's net asset value (“NAV”) are expected to closely track the changes, in the case of the Long Fund, or the inverse of the changes, in the case of the Short Fund, in the Benchmark on a leveraged basis, as described further below. Each Fund will also invest in E-MiniTM S&P 500® Futures contracts (“E-Minis” and, together with Big S&P Contracts, “Primary S&P Interests”) 10 to seek to achieve its primary investment objective where position limits prevent further purchases of Big S&P Contracts.11 Each Fund may also invest in other contracts, securities and instruments that the Sponsor determines, in its sole discretion, further a Fund's primary investment objective (collectively, “Other S&P Interests,” and together with Primary S&P Interests, “S&P Interests”).12

    9 The term “under normal market conditions” includes, but is not limited to, the absence of adverse market, economic, political or other conditions, including extreme volatility or trading halts in the equities markets or the financial markets generally; operational issues (e.g., systems failure) causing dissemination of inaccurate market information; or force majeure type events such as natural or man-made disaster, act of God, armed conflict, act of terrorism, riot or labor disruption or any similar intervening circumstance.

    10 E-Minis are traded on the CME in units of $50 multiplied by the value of the S&P 500 Index.

    11 Primary S&P Interests traded on the CME expire on a specified day in each calendar quarter: March, June, September and December. For example, in terms of the Benchmark, on May 1st of a given year the lead month Big S&P Contract will expire in June of that year and will be the Benchmark Component Futures Contracts. As another example, on December 31st of a given year, the Benchmark Component Futures Contracts will be the contracts expiring in March of the following year.

    12 The Sponsor does not intend to operate the Funds in a fashion such that their respective per Share NAV equals, in dollar terms, the value of the S&P 500 Index or the price of any particular Primary S&P Interest.

    Permissible Other S&P Interests are the following: Swap agreements (cleared and over-the-counter), over-the-counter forward contracts, and short positions on futures contracts, in each case with respect to and referencing Primary S&P Interests or the S&P 500 Index.

    Each Fund may also acquire options on futures contracts (i.e., the Stop Options described below). In the absence of certain stop measures represented by options on futures contracts obtained by a Fund, if the Benchmark moves 25% or more on a given trading day(s) in a direction adverse to a Fund's holdings, a Fund's investors would lose all of their money. Therefore, the Long Fund would hold “put” options, and the Short Fund would hold “call” options, with respect to all or substantially all of its S&P Interests (as defined above) 13 with strike prices at approximately 75%, in the case of the Long Fund, or 125%, in the case of the Short Fund, of the value of the applicable underlying S&P Interest as of the end of the preceding business day (such Fund's “Stop Options”). The Stop Options will serve primarily to (a) prevent the Fund's NAV from going to zero in the event of a 25% adverse move in the Benchmark, and (b) recoup a small portion of substantial losses of a Fund that may result from large movements in the Benchmark. The Stop Options are not expected to result in significant gains for any Fund, and will generally be considered a transaction cost for each Fund. The Stop Options will not prevent a Fund from losing money, but will permit the Fund to recoup a small percentage of its losses in the event of a large or catastrophic adverse movement in a Fund's Benchmark.

    13 The Stop Options will be comprised of options on Primary S&P Interests (i.e., Big S&P Contracts and E-Minis) providing the desired coverage with respect to both Primary S&P Interests and Other S&P Interests, if any.
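    As a rough numeric sketch of the approximate strike levels described above (assuming a single underlying S&P Interest valued as of the prior close; the function and variable names are editorial illustrations, not terms of the filing):

# Illustrative sketch of the approximate Stop Option strike levels described above.
def stop_option_strike(prior_close: float, long_fund: bool) -> float:
    # Long Fund holds puts struck near 75% of the prior day's value;
    # Short Fund holds calls struck near 125% of that value.
    return prior_close * (0.75 if long_fund else 1.25)

prior_close = 2_000.0
print(stop_option_strike(prior_close, long_fund=True))   # 1500.0 (put strike)
print(stop_option_strike(prior_close, long_fund=False))  # 2500.0 (call strike)
# An adverse move of 25% or more (e.g., the Benchmark falling below 1500.0 for the
# Long Fund) is the scenario the Stop Options are intended to address.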

    Each Fund's positions in S&P Interests will be changed or “rolled” on a regular basis in order to track the changing nature of the Benchmark. For example, quarterly (on the date on which a Big S&P Contract expires), the deferred month (or next-to-expire) Big S&P Contract will become the “Lead” month (or front month) Big S&P Contract and will become the Benchmark Component Futures Contract, and each Fund's investments will have to be changed accordingly. During roll periods, the Benchmark will be composed of a combination of the lead month Big S&P Contract and/or the deferred month Big S&P Contract. The Benchmark is a “rolling index”, which means that the Benchmark does not take physical possession of any commodities. An investor with a rolling futures position is able to avoid delivering (or taking delivery of) underlying physical commodities while maintaining exposure to those commodities. The Benchmark Component Futures Contract is changed from the lead month Big S&P Contract to the deferred month Big S&P Contract over a four-day period. Each quarter, the Benchmark Component Futures Contract changes start at the end of the day on the date two weeks (twelve days) prior to expiration of the lead month Big S&P Contract for that month. During the first three days of the period, the applicable value of the Benchmark is based on a combination of the lead month Big S&P Contract and the deferred month Big S&P Contract as follows:

    • On day 1, the Benchmark consists of 75% of the lead month Big S&P Contract's price plus 25% of the deferred month Big S&P Contract's price;

    • On day 2, the Benchmark consists of 50% of the lead month Big S&P Contract's price plus 50% of the deferred month Big S&P Contract's price;

    • On day 3, the Benchmark consists of 25% of the lead month Big S&P Contract's price plus 75% of the deferred month Big S&P Contract's price; and

    • On day 4, the Benchmark is entirely composed of the prior day's deferred month Big S&P Contract, which now constitutes the lead month Big S&P Contract until the beginning of the following quarter's rolling period.

    On each day during the four-day rolling period, the Sponsor anticipates it will roll S&P Interests positions by closing, or selling, a percentage of positions in S&P Interests and reinvesting the proceeds from closing those positions in new S&P Interests that reflect the change in the Benchmark. The anticipated dates that the quarterly four-day roll period will commence are posted on a Fund's Web site at www.forceshares.com, and are subject to change without notice. By remaining invested as fully as possible in S&P Interests, the Sponsor believes that the daily changes in percentage terms of the NAV will continue to closely track the daily changes in percentage terms in the price of the Benchmark.
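    The blended Benchmark during the four-day roll period described above can be illustrated with hypothetical prices as follows; the weights follow the schedule in the list above, and all figures and names in the sketch are illustrative assumptions rather than part of the filing.

# Illustrative sketch of the Benchmark blend during the four-day roll period.
ROLL_WEIGHTS = {1: 0.75, 2: 0.50, 3: 0.25, 4: 0.00}  # weight on the lead month contract

def blended_benchmark(day: int, lead_price: float, deferred_price: float) -> float:
    w = ROLL_WEIGHTS[day]
    return w * lead_price + (1.0 - w) * deferred_price

# Hypothetical closing settlement prices during a roll period:
lead, deferred = 2_100.0, 2_095.0
for day in (1, 2, 3, 4):
    print(day, blended_benchmark(day, lead, deferred))
# On day 4 the Benchmark equals the deferred month price, and that contract then
# becomes the lead month contract until the next quarterly roll.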

    The composition of a Fund's Stop Options positions may or may not need to be changed during a roll period. The Sponsor will consider whether to sell a Stop Option position based upon that Stop Option's economic viability, which is determined by examining its strike price relative to the existing Benchmark Futures Contract value, time to expiration, market demand and any other applicable considerations. In all circumstances, including during roll period and at the end of the roll period, the Stop Option positions will provide coverage, at an aggregate strike price of approximately 75 percent for the Long Fund or 125 percent for the Short Fund, for all of the S&P Interests held by the Fund. As a result, the Sponsor will purchase new Stop Options when required to meet the referenced coverage threshold.14

    14 A Fund may hold Stop Options that provide coverage for more than 100% of a Fund's S&P Interests at any particular time. This result may occur because the Funds' respective investment strategies require that each Fund increase Stop Option positions to maintain a threshold of not less than 100% coverage of S&P Interests, and that Stop Option positions only be decreased if trading out of such positions will generate a transactional profit to the Fund (although such profits are not anticipated to provide a material impact on a Fund's return). Excess Stop Option positions for which trading is not profitable will be allowed to expire.

    The S&P Interests that each Fund will principally invest in are futures contracts, which are standardized contracts traded on, or subject to the rules of, an exchange that call for the future delivery of a specified quantity and type of asset at a specified time and place or, alternatively, may call for cash settlement. Each Fund expects to invest in S&P Interests to the fullest extent possible without (a) materially exceeding the leverage necessary to implement its primary investment objective or (b) being unable to satisfy its expected current or potential margin or collateral obligations with respect to its investments in S&P Interests. Each Fund will invest in Primary S&P Interests to the extent that it is not in violation of exchange position limits on such Primary S&P Interests.15 Futures contracts (all of those held by a Fund being lead month or deferred month Primary S&P Interests) are expected to comprise approximately ten to twenty-five percent (10-25%) of the Long Fund's portfolio and approximately ten to twenty-five percent (10-25%) of the Short Fund's portfolio.16 In addition, each Fund may, based on its evaluation, also invest in Other S&P Interests that further the investment objective of leveraged exposure to the S&P 500 Index, in an amount up to twenty-five percent (25%) of its net assets. The types of contracts, securities and instruments that qualify as Other S&P Interests are swap agreements (cleared and over-the-counter), over-the-counter forward contracts, and short positions that the Sponsor determines, in its sole discretion, further a Fund's primary investment objective.

    15 The Commodity Futures Trading Commission (“CFTC”) and U.S. designated contract markets such as the CME may establish position limits on the maximum net long or net short futures contracts in commodity interests that any person or group of persons under common trading control (other than as a hedge, which an investment by the Funds is not) may hold, own or control. For example, the current CME-instituted position limit for investments at any one time in Big S&P Contracts is 60,000 contracts (on a net basis) total for all months. For the purpose of this limit, E-Minis are counted as 1/5th the size of Big S&P Contracts. These position limits are fixed ceilings that each Fund would not be able to exceed without specific CFTC authorization. Position limits are calculated at the controller level, meaning positions in the contracts held by the Funds will be aggregated at the level of control by the Sponsor, which is the commodity pool operator for the Funds. Position limits are calculated on a net futures basis, meaning that long exposure Primary S&P Interests held in the Long Fund will be netted against the short exposure Primary S&P Interests held by the Short Fund. Additionally, Stop Options held by a Fund will be netted against the Primary S&P Interests held by such Fund; provided, however, that the weighting of a Stop Option for position limit purposes will be determined through analysis of the “net delta” of the Stop Option (relative to current Benchmark values) using the Standard Portfolio Analysis of Risk (SPAN) system operated by the CME. As a result, the net impact of Stop Options on the position limits applicable to the Funds is difficult to ascertain in advance. Based on the Benchmark as of September 22, 2016, the position limits for Primary S&P Interests would account for a total notional value of $32,524,500,000. As a result, assuming the level of the S&P 500 Index remains the same, the Funds would be unlikely to trigger position limits for Primary S&P Interests unless one Fund's net assets exceeded the other Fund's net assets by approximately $8.1 billion. This calculation assumes that each Fund is successful in achieving its stated investment objective of maintaining 400% or −400% exposure to the Benchmark Futures Contract. If, for example, the Long Fund has $9 billion in net assets and does not invest in Other S&P Interests that are not subject to position limits, it will hold Primary S&P Interests with a total notional exposure of $36 billion (equivalent to 66,411.5 Big S&P Contracts). If the Short Fund has $1 billion in net assets and does not invest in Other S&P Interests that are not subject to position limits, it will hold Primary S&P Interests with a total notional exposure of $4 billion (equivalent to 7,379 Big S&P Contracts). On a net basis, the Funds will hold 59,032.5 contracts for position limit purposes. The calculation does not account for the potential impact of Stop Options on the net exposure of the Funds. Accountability levels differ from position limits in that they do not represent a fixed ceiling, but rather a threshold above which a futures exchange may exercise greater scrutiny and control over an investor's positions. 
If a Fund were to exceed an applicable accountability level for investments in futures contracts, the exchange will monitor a Fund's exposure and may ask for further information on its activities, including the total size of all positions, investment and trading strategy, and the extent of liquidity resources of a Fund. If deemed necessary by the exchange, a Fund could be ordered to reduce its aggregate net position back to the accountability level. Based on the Benchmark as of September 22, 2016, the reportable level that requires enhanced recordkeeping for Primary S&P Interests would account for a total notional value of $54,207,500. As a result, assuming the level of the S&P 500 Index remains the same, the Funds would be expected to trigger accountability level recordkeeping requirements when one Fund's net assets exceeded the other Fund's net assets by approximately $54 million. In addition to position limits and accountability levels, the exchanges set daily price fluctuation limits on futures contracts. The daily price fluctuation limit establishes the maximum amount that the price of futures contracts may vary either up or down from the previous day's settlement price. Once the daily price fluctuation limit has been reached in a particular futures contract, no trades may be made at a price beyond that limit. Neither of the Funds intends to limit the size of the offering and each will attempt to expose substantially all of its proceeds to the S&P 500 Index utilizing S&P Interests. If a Fund encounters position limits, accountability levels, or price fluctuation limits for Primary S&P Interests on the CME, it may then, if permitted under applicable regulatory requirements, purchase Other S&P Interests. In any case, notwithstanding the potential availability of these instruments in certain circumstances, position limits could force a Fund to limit the number of Creation Baskets that it sells. A decline in the S&P 500 Index at certain price levels will trigger market-wide circuit breakers (i.e., price fluctuation limits) causing the Exchange or CME to suspend, halt, or restrict the trading of Primary S&P Interests for a short period of time or the remainder of the applicable trading day. Price fluctuation limits are established by relevant exchanges on which securities or futures contracts are traded. Currently, the Sponsor intends to acquire S&P Interests on the CME, which has established price fluctuation limits for negative movements of 7 percent, 13 percent and 20 percent in the value of the S&P 500 Index. The CME has not adopted price fluctuation limits for positive movement thresholds in the S&P 500 Index.

    16 To the extent that the CME or any applicable authority or counterparty alters margin requirement applicable to the Primary S&P Interests, the approximate percentage of portfolio interests held in Primary S&P Interests, Other S&P Interests, Stop Options and Cash Equivalents (as defined below) may change in accordance therewith.
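    The figures quoted in note 15 can be re-derived with straightforward arithmetic. The sketch below is an editorial check under the stated assumptions (a Big S&P Contract value of approximately $542,075, i.e., a futures level near 2,168.3 times the $250 multiplier, a 60,000-contract net limit, and 400%/−400% target exposure); the exact contract value on September 22, 2016 is an assumption inferred from the quoted notional figures.

# Illustrative re-derivation of the position limit figures in note 15 (assumptions noted above).
contract_value = 542_075.0          # approx. $250 x S&P 500 futures level near 2,168.3
net_limit_contracts = 60_000

limit_notional = net_limit_contracts * contract_value
print(limit_notional)               # 32,524,500,000 -- matches the quoted notional value

# At 4x exposure, the net-asset difference between the Funds that would exhaust
# the net limit is roughly one quarter of the limit notional:
print(limit_notional / 4)           # approx. 8.13e9, i.e., roughly $8.1 billion

long_contracts = 9e9 * 4 / contract_value    # approx. 66,411.5 contracts
short_contracts = 1e9 * 4 / contract_value   # approx. 7,379 contracts
print(long_contracts - short_contracts)      # approx. 59,032.5 contracts on a net basis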

    Each Fund may acquire or dispose of Stop Options (puts or calls) on S&P Interests in pursuing its secondary investment objective of recouping a small amount of losses of a Fund against an extreme, short term negative movement, in the case of the Long Fund, or positive movement, in the case of the Short Fund, in the Benchmark. Each Fund will acquire such number of Stop Options as is required in respect of the number and value of a Fund's S&P Interests, on an aggregated basis. Each Fund is expected to make use of options on Primary S&P Interests solely in connection with its secondary investment objective.

    Stop Options are expected to average less than approximately five percent (5%) of the Long Fund's portfolio and less than approximately five percent (5%) of the Short Fund's portfolio.

    On a day-to-day basis, a Fund will invest the remainder of its assets in money market funds, depository accounts with institutions with high quality credit ratings or short-term debt instruments that have terms-to-maturity of less than 397 days and exhibit high quality credit profiles, including U.S. government securities and repurchase agreements (collectively, “Cash Equivalents”). Cash Equivalents are expected to comprise approximately seventy to eighty-five percent (70-85%) of the Long Fund's portfolio and approximately seventy to eighty-five percent (70-85%) of the Short Fund's portfolio.

    The Sponsor uses a mathematical approach to investing. Using this approach, the Sponsor determines the type, quantity and mix of investment positions that each Fund should hold to achieve, on a daily basis, approximately four times (400%) the daily performance, in the case of the Long Fund, or approximately four times the inverse (−400%) of the daily performance, in the case of the Short Fund, of the Benchmark. The Sponsor does not invest the assets of the Funds in securities or financial instruments based on the Sponsor's view of the investment merit of a particular security, instrument, or company, nor does it conduct conventional investment research or analysis or forecast market movement or trends, in managing the assets of the Funds. Each Fund seeks to remain invested at all times in securities and/or financial instruments that, in combination, provide leveraged exposure to the S&P 500 Index without regard to market conditions, trends or direction.

    Following determination of a Fund's respective NAV each business day, each Fund will seek to position its portfolio so that its exposure to the Benchmark is consistent with a Fund's primary investment objective. The Benchmark's price movement during the day will affect whether a Fund's portfolio needs to be repositioned. For example, if the Benchmark has risen on a given day, the NAV of the Long Fund should rise and the NAV of the Short Fund should fall. As a result, the Long Fund's exposure would need to be increased and the Short Fund's exposure would need to be decreased. Conversely, if the Benchmark has fallen on a given day, the NAV of the Long Fund should fall and the NAV of the Short Fund should rise. As a result, the Long Fund's exposure would need to be decreased and the Short Fund's exposure would need to be increased.

    Because of daily rebalancing of each Fund's portfolio and the compounding of each day's return over time, the return of each Fund for periods longer than a single day will be the result of each day's returns compounded over the period, which will very likely differ from four times (400%) the total performance, in the case of the Long Fund, or four times the inverse (−400%) of the total performance, in the case of the Short Fund, of the Benchmark over the same period. Each Fund will lose money if the level of the Benchmark is flat over time, and it is possible that the Long Fund will lose money over time even if the level of the Benchmark rises, and that the Short Fund will lose money over time even if the level of the Benchmark falls, as a result of daily rebalancing of the applicable Fund, the Benchmark's volatility and the effects of compounding.
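    A hypothetical two-day example illustrates the compounding effect described above; the Benchmark moves are invented for illustration and ignore fees and expenses.

# Hypothetical two-day illustration of daily compounding for the Long Fund.
benchmark_daily_returns = [0.05, -0.05]          # Benchmark up 5%, then down 5%

benchmark_total = 1.0
long_fund_total = 1.0
for r in benchmark_daily_returns:
    benchmark_total *= (1 + r)
    long_fund_total *= (1 + 4 * r)               # Long Fund targets 4x each daily move

print(benchmark_total - 1)                       # approx. -0.0025 (Benchmark down 0.25%)
print(long_fund_total - 1)                       # approx. -0.04   (Long Fund down 4.00%)
print(4 * (benchmark_total - 1))                 # approx. -0.01   (4x the period return, for comparison)
# The Fund's two-day return (about -4.00%) differs from four times the Benchmark's
# two-day return (about -1.00%) because each day's leveraged return is compounded.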

    Each Fund will be rebalanced daily in order to continue to reflect exposure equal to approximately four times (400%) the daily performance, in the case of the Long Fund, or approximately four times the inverse (−400%) of the daily performance, in the case of the Short Fund, of the Benchmark.17 However, each Fund will only rebalance on business days when the Exchange and the CME are open. The Sponsor will determine the type, quantity and combination of S&P Interests it believes will produce daily returns consistent with the applicable Fund's primary investment objective.

    17 The Sponsor anticipates that the rebalancing of a Fund's S&P Interests will principally take place during the period of time prior to the close of trading of Primary S&P Interests on the CME. Currently, trading on the CME takes place between 9:30 a.m. to 4:15 p.m. E.T.
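    The daily repositioning described above can be summarized as targeting a notional exposure of +400% of NAV for the Long Fund, or −400% of NAV for the Short Fund, and trading the difference from the current exposure. The sketch below is an editorial illustration; the function and variable names, and the example figures, are assumptions rather than terms of the filing.

# Illustrative sketch of the daily repositioning toward +/-400% of NAV.
def required_adjustment(nav: float, current_exposure: float, long_fund: bool) -> float:
    target_exposure = 4.0 * nav if long_fund else -4.0 * nav
    return target_exposure - current_exposure  # positive: add exposure; negative: reduce it

# Example: the Benchmark rose, so the Long Fund's NAV increased to $110 million
# while its existing notional exposure is $400 million; exposure must be increased.
print(required_adjustment(nav=110e6, current_exposure=400e6, long_fund=True))  # 40,000,000.0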

    The Sponsor believes that market arbitrage opportunities will cause each Fund's Share price on the Exchange to track a Fund's NAV per Share. The Sponsor believes that the net effect of this expected relationship and the expected relationship between each Fund's NAV per Share and the Benchmark will be that the changes in the price of a Fund's Shares on the Exchange will track approximately four times (400%) the daily performance, in the case of the Long Fund, or four times the inverse (−400%) of the daily performance, in the case of the Short Fund, of the Benchmark. This relationship may be affected by various market factors, including but not limited to, the number of Shares of a Fund outstanding and the liquidity of the underlying holdings. The Sponsor believes that the market for Primary S&P Interests is among the more liquid futures markets and does not anticipate liquidity issues relating to a Fund's underlying holdings, absent extraordinary circumstances or material changes to the marketplace for Primary S&P Interests. While the Benchmark is composed of Big S&P Contracts and is therefore a measure of the future value of the S&P 500 Index, there is nonetheless expected to be a reasonable degree of correlation between the Benchmark and the then-current value of the S&P 500 Index.

    The Sponsor will invest each Fund's assets in S&P Interests, Stop Options, Cash Equivalents and/or cash. The Sponsor will deposit a portion of each Fund's net assets with the futures commission merchant (“FCM”) or other custodians to be used to meet its current or potential margin or collateral requirements in connection with its investment in S&P Interests. Each Fund will use only Cash Equivalents and/or cash to satisfy these requirements.

    The Sponsor intends for such Stop Options to be maintained with an approximate level of coverage such that the Sponsor may put or call, as applicable, the S&P Interests at a strike price of approximately 75%, in the case of the Long Fund, or 125%, in the case of the Short Fund, of the value of the applicable underlying S&P Interests as of the end of the preceding business day. To the extent that the Sponsor is unable (whether through error or limitations in the availability of the required put or call options on futures contracts) to manage the Stop Options to provide coverage for all of a Fund's S&P Interests at the intended target strike price, it is possible that the Stop Options will not prevent a Fund's NAV from going to zero.

    The design of the Funds' Benchmark is such that the Benchmark Component Futures Contracts will change four times per year, and the Funds' investments must be rolled periodically to reflect the changing composition of the Benchmark. For example, when the lead month Big S&P Contract expires, such contract will no longer be the Benchmark Component Futures Contract and the applicable Fund's position in it will no longer be consistent with tracking the Benchmark. In the event of a futures market where near-to-expire contracts trade at a higher price than longer-to-expire contracts, a situation referred to as “backwardation”, then absent the impact of the overall movement in the S&P 500 Index the value of the Benchmark Component Futures Contracts would tend to rise as they approach expiration. As a result the Long Fund may benefit because it would be selling more expensive contracts and buying less expensive ones on an ongoing basis, and the Short Fund may be negatively impacted because it would be selling less expensive contracts and buying more expensive ones on an ongoing basis.

    Conversely, in the event of a futures market where near-to-expire contracts trade at a lower price than longer-to-expire contracts, a situation referred to as “contango,” then absent the impact of the overall movement in the S&P 500 Index the value of the Benchmark Component Futures Contracts would tend to decline as they approach expiration. As a result the Long Fund's total return may be lower than might otherwise be the case because it would be selling less expensive contracts and buying more expensive ones, and the Short Fund's total return may be higher than might otherwise be the case because it would be selling more expensive contracts and buying less expensive ones. The impact of backwardation and contango may lead the total return of a Fund to vary significantly from the total return of other price references, such as the S&P 500 Index. Absent the impact of rising or falling S&P 500 Index values, a prolonged period of contango could have a significant negative impact on the Long Fund's NAV and total return and a prolonged period of backwardation could have a significant negative impact on the Short Fund's NAV and total return.
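    A small hypothetical roll illustrates the contango effect described above for a long position; the prices and contract count below are invented for illustration only.

# Hypothetical illustration of rolling a long position in contango (prices invented).
lead_price, deferred_price = 2_100.0, 2_110.0     # contango: deferred month more expensive
contracts_held = 20
proceeds = contracts_held * lead_price * 250      # sell the expiring lead month contracts
new_contracts = proceeds / (deferred_price * 250) # buy the deferred month contracts
print(new_contracts)                              # approx. 19.91, slightly less exposure
# In backwardation (deferred cheaper than lead), the same roll would buy slightly
# more than 20 contracts, which is why backwardation can benefit the Long Fund.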

    Operation of the Funds

    Each Fund invests in S&P Interests to the fullest extent possible without exceeding the leverage necessary to implement its primary investment objective or being unable to satisfy its expected current or potential margin or collateral obligations with respect to its investments in S&P Interests. After fulfilling such margin and collateral requirements and purchasing Stop Options consistent with its secondary investment objective, each Fund invests the remainder of its proceeds from the sale of baskets in Cash Equivalents and/or holds such assets in cash (generally in interest-bearing accounts). Therefore, the focus of the Sponsor in managing each Fund is investing in S&P Interests, Stop Options, Cash Equivalents and/or cash. Each Fund earns interest income from the Cash Equivalents that it purchases and on the cash it holds through the Custodian.

    The Investment Strategies of the Funds

    In managing each Fund's assets, the Sponsor does not use a technical trading system that automatically issues buy and sell orders. Instead, each time one or more baskets are purchased or redeemed, the Sponsor will purchase or sell S&P Interests, Stop Options and Cash Equivalents as required in respect of the amount of cash received or paid upon the purchase or redemption of the basket(s).

    As an example, assume that a Creation Basket is sold by the Long Fund, and that the Long Fund's closing NAV per Share is $50. In that case, the Long Fund would receive $2,500,000 in proceeds from the sale of the Creation Basket ($50 NAV per Share multiplied by 50,000 Shares, and ignoring the Creation Basket fee in the amount set forth in the applicable Fund's prospectus). If one were to assume further that the Sponsor wants to invest the entire proceeds from the Creation Basket in Big S&P Contracts to obtain an aggregate value of $10,000,000 (i.e., four times exposure relative to NAV) and that the market value of each such Big S&P Contract is $522,500 (i.e., index value of 2,090 multiplied by $250) (or otherwise not a round number), the Long Fund would be unable to buy an exact number of Big S&P Contracts with an aggregate market value equal to $10,000,000. Instead, the Long Fund would be able to purchase 19 Big S&P Contracts with an aggregate notional value of $9,927,500. Assuming a margin requirement equal to 4% of the value of the Big S&P Contracts, the Long Fund would be required to deposit $397,100 in Cash Equivalents and/or cash with the FCM through which the Big S&P Contracts were purchased. The remainder of the proceeds from the sale of the Creation Basket, $2,102,900, would remain invested in Cash Equivalents and/or cash as determined by the Sponsor from time to time based on factors such as potential calls for margin or anticipated redemptions.
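    The arithmetic in the example above can be reproduced step by step as follows; the variable names are editorial, and the figures are those used in the example.

# Reproduction of the arithmetic in the Creation Basket example above.
nav_per_share = 50
shares_per_basket = 50_000
proceeds = nav_per_share * shares_per_basket        # $2,500,000 in Creation Basket proceeds

target_exposure = 4 * proceeds                      # $10,000,000 (400% of NAV)
contract_value = 2_090 * 250                        # $522,500 per Big S&P Contract
contracts = target_exposure // contract_value       # 19 whole contracts
notional = contracts * contract_value               # $9,927,500 aggregate notional value

margin = notional * 4 // 100                        # 4% margin: $397,100 deposited with the FCM
remainder = proceeds - margin                       # $2,102,900 held in Cash Equivalents and/or cash
print(contracts, notional, margin, remainder)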

    The specific S&P Interests purchased depend on various factors, including a judgment by the Sponsor as to the appropriate diversification of each Fund's investments. While the Sponsor anticipates that each Fund will seek to achieve its primary investment objective by investing in Primary S&P Interests, for various reasons, including the ability to enter into the precise amount of exposure to the S&P 500 Index and position limits on Primary S&P Interests, it may also invest in Other S&P Interests, including swaps, in the over-the-counter market to a potentially significant degree. Each Fund will be limited to investing up to twenty percent (20%) of its net assets in Other S&P Interests that may constitute securities for purposes of the Investment Company Act of 1940.

    The Sponsor does not anticipate letting its Primary S&P Interests expire and taking delivery of or having to deliver cash. Instead, the Sponsor closes out existing positions, e.g., in response to ongoing changes in the Benchmark or if it otherwise determines it would be appropriate to do so, and reinvests the proceeds in new S&P Interests. Positions may also be closed out to meet orders for Redemption Baskets, in which case the proceeds from closing the positions will not be reinvested.

    Because the Long Fund seeks to track the Benchmark directly and profit when the value of the S&P 500 Index increases and, as a likely result of an increase in the value of the S&P 500 Index, the price of Primary S&P Interests increases, the Long Fund will generally be long on the S&P 500 Index, and will generally sell Primary S&P Interests only to close out existing long positions. Because the Short Fund seeks to track the Benchmark inversely and profit when the value of the S&P 500 Index decreases and, as a likely result of a decrease in the value of the S&P 500 Index, the price of Primary S&P Interests decreases, the Short Fund will generally be short on the S&P 500 Index, and will generally buy Primary S&P Interests only to close out existing short positions.

    Over-the-Counter Derivatives

    In addition to futures contracts, options on futures contracts and cleared swaps, the Funds may enter into derivative contracts that are tied to various securities outside of public exchanges. The over-the-counter contracts that the Funds may enter into will take the form of either swaps or forward contracts, in each case providing exposure to the S&P 500 Index or to Big S&P Contracts.

    To reduce the credit risk that arises in connection with over-the-counter contracts, each Fund generally will enter into an agreement with each counterparty based on the Master Agreement published by the International Swaps and Derivatives Association, Inc. that provides for the netting of each Fund's overall exposure to its counterparty and for daily payments based on the marked to market value of the contract.

    The creditworthiness of each potential counterparty will be assessed by the Sponsor. The Sponsor assesses or reviews, as appropriate, the creditworthiness of each potential or existing counterparty to an over-the-counter contract pursuant to guidelines approved by the Sponsor. The creditworthiness of existing counterparties will be reviewed periodically by the Sponsor. There is no guarantee that the Sponsor's creditworthiness analysis will be successful and that counterparties selected for Fund transactions will not default on their contractual obligations.

    Net Asset Value

    Each Fund's NAV will be calculated by taking the current market value of a Fund's total assets, subtracting any liabilities, and dividing the balance by the number of a Fund's Shares outstanding. Under each Fund's current operational procedures, each Fund's administrator, U.S. Bancorp Fund Services, LLC (the “Administrator”), will calculate the NAV of a Fund as of the earlier of 4:00 p.m. Eastern time (“E.T.”) or the close of the Exchange each day. The NAV for a particular trading day will be released after 4:15 p.m. E.T. The NAV for the Funds will be calculated by the Administrator once a day and will be disseminated daily to all market participants at the same time.18

    18 For each Fund, the NAV will be calculated by taking the current market value of a Fund's total assets and subtracting any liabilities. Under the Funds' current operational procedures, the Administrator will generally calculate the NAV of the Funds' Shares as of the earlier of 4:00 p.m. E.T. or the close of the Exchange each day. The NAV for a particular trading day will be released after 4:15 p.m. E.T.
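    A minimal sketch of the per-Share NAV calculation described above (the figures and names below are invented for illustration):

# Minimal sketch of the per-Share NAV calculation (illustrative figures).
def nav_per_share(total_assets: float, liabilities: float, shares_outstanding: int) -> float:
    return (total_assets - liabilities) / shares_outstanding

print(nav_per_share(total_assets=252_000_000.0, liabilities=2_000_000.0,
                    shares_outstanding=5_000_000))   # 50.0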

    Each Fund's NAV includes, in part, any unrealized profits or losses on open swap agreements, futures or forward contracts. Under normal circumstances, a Fund's NAV will reflect the quoted closing settlement price of open futures contracts on the date when a Fund's NAV is being calculated. In instances when the quoted settlement price of a futures contract traded on an exchange may not be reflective of fair value based on market conditions, generally due to the operation of daily limits or other rules of the exchange or otherwise, a Fund's NAV may not reflect the fair value of open futures contracts on such date.

    The Sponsor will recalculate each Fund's NAV where necessary to reflect the “fair value” of a futures contract when the futures contract closes at its price fluctuation limit for the day.

    In determining the value of Primary S&P Interests, the Administrator will use the then-current value of Big S&P Contracts and E-Minis (as reflected on the CME), and, at the end of the day, the closing settlement price of each such contract on the CME, except that the “fair value” of a Primary S&P Interest (as described in more detail below) may be used when Primary S&P Interests close at their price fluctuation limit for the day. The Administrator will determine the value of each Fund's other investments as of the earlier of the close of the Exchange or 4:00 p.m. E.T., in accordance with the current Services Agreement between the Administrator and the Trust. The value of over-the-counter S&P Interests is determined based on the value of the security, futures contract or index underlying such S&P Interest, except that a fair value may be determined if the Sponsor believes that a Fund is subject to significant credit risk relating to the counterparty to such S&P Interest. Cash Equivalents held by a Fund will be valued by the Administrator using values received from recognized third-party vendors (such as Reuters) and dealer quotes. NAV includes any unrealized profit or loss on open S&P Interests and any other credit or debit accruing to each Fund but unpaid or not received by a Fund. The fair value of an S&P Interest shall be determined by the Sponsor in good faith and in a manner that assesses the S&P Interest's value based on a consideration of all available facts and all available information on the valuation date.

    Cash Equivalents will normally be valued on the basis of quotes obtained from brokers and dealers or pricing services. Exchange-traded options on futures will generally be valued at the settlement price determined by the applicable exchange.

    With respect to specific derivatives:

    • A total return swap on an index will be valued at the publicly available index price. The index price, in turn, is determined by the applicable index calculation agent, which generally values the securities underlying the index at the last reported sale price.

    • Equity total return swaps will generally be valued using the actual underlying equity at local market closing.

    • Over-the-counter [sic] will generally be valued on the basis of quotes obtained from a quotation reporting system, established market makers, or pricing services.

    • Forwards will generally be valued in the same manner as the underlying securities. Forward settling positions for which market quotes are readily available will generally be valued at market value.

    When a Primary S&P Interest has closed at its price fluctuation limit, the fair value determination attempts to estimate the price at which such Primary S&P Interest would be trading in the absence of the price fluctuation limit (either above such limit when an upward limit has been reached or below such limit when a downward limit has been reached). Typically, this estimate will be made primarily by reference to the price of comparable S&P Interests trading in the over-the-counter market. The fair value of an S&P Interest may not reflect such security's market value or the amount that a Fund might reasonably expect to receive for the S&P Interest upon its current sale.

    Indicative Fund Value

    In addition, in order to provide updated information relating to a Fund for use by investors and market professionals, the Exchange will calculate and disseminate throughout the trading day an updated “indicative fund value” (“IFV”). The IFV will be calculated by using the prior day's closing NAV per Share of a Fund as a base and updating that value throughout the trading day to reflect changes in the value of the underlying holdings. Changes in the value of the underlying holdings will be tracked as follows:

    • Benchmark Component Futures Contracts will be valued using their most recent quoted price during the trading day, for as long as the main pricing mechanism of the CME is open.

    • Primary S&P Interests will be valued using their most recent quoted price during the trading day for as long as the main pricing mechanism of the CME is open.

    • Futures may be valued intraday using the relevant futures exchange data, or another proxy as determined to be appropriate by the third party market data provider. Benchmark Component Futures Contracts will be valued intraday using the main pricing mechanism of the CME or through another proxy if such data is not readily available.

    • Total return swaps may be valued intraday using the underlying asset or index price, or another proxy as determined to be appropriate by the third party market data provider.

    • Exchange-listed options may be valued intraday using the relevant exchange data, or another proxy as determined to be appropriate by the third party market data provider.

    • Over-the-counter options may be valued intraday through option valuation models (e.g., Black-Scholes) or using exchange-traded options as a proxy, or another proxy as determined to be appropriate by the third party market data provider.

    • A third party market data provider's valuation of forwards will be similar to their valuation of the underlying interests, or another proxy as determined to be appropriate by the third party market data provider. The third party market data provider will generally use market quotes if available. Where market quotes are not available, they may fair value securities against proxies (such as swap or yield curves). Each Fund's disclosure of forward positions will include information that market participants can use to value these positions intraday.

    Changes in the value of Cash Equivalents will not be included in the calculation of the IFV. For this and other reasons, the IFV disseminated during Exchange trading hours should not be viewed as an actual real time update of the NAV of a Fund. NAV will be calculated only once at the end of each trading day.
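
    As a minimal sketch of the IFV mechanics described above, assuming the underlying holdings are marked with their most recent quoted prices, the function below starts from the prior day's closing NAV per Share and adjusts it for the intraday change in the value of those holdings; Cash Equivalents are simply left out of the inputs. All names and figures are hypothetical.

```python
def indicative_fund_value(prior_close_nav_per_share: float,
                          shares_outstanding: float,
                          prior_holding_values: dict,
                          current_holding_values: dict) -> float:
    """Update the prior day's closing NAV per Share for the intraday change in
    the value of the underlying holdings (Cash Equivalents are excluded from
    the inputs, since changes in their value are not reflected in the IFV)."""
    change = sum(current_holding_values.values()) - sum(prior_holding_values.values())
    return prior_close_nav_per_share + change / shares_outstanding


# Example with hypothetical numbers: holdings keyed by position identifier.
prior = {"benchmark_futures": 9_800_000.0, "stop_options": 150_000.0}
current = {"benchmark_futures": 9_850_000.0, "stop_options": 148_000.0}
print(indicative_fund_value(25.00, 400_000, prior, current))  # 25.12
```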

    The IFV will be disseminated on a per Share basis every 15 seconds during the Exchange's Core Trading Session. The trading hours for the CME can be found at http://www.cmegroup.com/trading_hours/.

    The Exchange will disseminate the IFV through the facilities of the Consolidated Tape Association (“CTA”) high-speed line. In addition, the IFV will be published on the Exchange's Web site and will be available through on-line information services such as Bloomberg and Reuters.

    Creation and Redemption of Shares

    Each Fund will create and redeem Shares from time to time, but only in one or more “Creation Baskets” or “Redemption Baskets” comprised of 25,000 Shares. The size of Creation Baskets and Redemption Baskets is subject to change. The creation and redemption of baskets will only be made in exchange for delivery to a Fund or the distribution by a Fund of cash in an amount equal to the combined NAV of the number of Shares of the Fund included in the baskets being created or redeemed determined as of 4:00 p.m. E.T. on the day the order to create or redeem baskets is properly received. “Authorized Purchasers” are the only persons that may place orders to create and redeem baskets. Authorized Purchasers must be (1) either registered broker-dealers or other securities market participants, such as banks and other financial institutions, that are not required to register as broker-dealers to engage in securities transactions, and (2) Depository Trust Company (“DTC”) Participants. To become an Authorized Purchaser, a person must enter into an Authorized Purchaser Agreement with the Funds.

    The amount of the purchase payment for a Creation Basket of a Fund will be equal to the aggregate NAV per Share of the Shares in the Creation Basket. The amount of the redemption proceeds for a Redemption Basket will be equal to the aggregate NAV per Share of the Shares in the Redemption Basket. The purchase price for Creation Baskets and the redemption price for Redemption Baskets of a Fund will be based on the actual NAV per Share calculated at the end of the business day when a request for a purchase or redemption is received by the applicable Fund.

    Creation Procedures

    On any business day, an Authorized Purchaser may place an order with the transfer agent to create one or more baskets. For purposes of processing purchase and redemption orders, a “business day” means any day other than a day when either the Exchange or the CME is closed for regular trading. Purchase orders must be placed by 3:00 p.m. E.T. or the close of the Exchange's Core Trading Session (normally, 4:00 p.m. E.T.), whichever is earlier.

    Determination of Required Payment

    The total payment required to create each Creation Basket is an amount in cash equal to the combined NAV of the number of Shares of a Fund included in the baskets being created, determined as of 4:00 p.m. E.T. on the day the order to create baskets is properly received, plus the applicable transaction fee.

    Rejection of Purchase Orders

    The Sponsor acting by itself or through the Marketing Agent or Custodian may reject a purchase order if: (1) It determines that, due to position limits or otherwise (including, without limitation, lock limits or price fluctuation limits that may restrict the availability of S&P Interests), investment alternatives that will enable a Fund to meet its primary investment objective are not available or practicable at that time; (2) the acceptance of the purchase order would, in the opinion of counsel to the Sponsor, be unlawful; or (3) circumstances outside the control of the Sponsor, Marketing Agent or Custodian make it, for all practical purposes, not feasible to process creations of baskets.

    Redemption Procedures

    The procedures by which an Authorized Purchaser can redeem one or more Redemption Baskets mirror the procedures for the creation of baskets. On any business day, an Authorized Purchaser may place an order with the transfer agent to redeem one or more baskets. Redemption orders must be placed by 3:00 p.m. E.T. or the close of the Exchange's Core Trading Session, whichever is earlier. By placing a redemption order, an Authorized Purchaser agrees to deliver the baskets to be redeemed through DTC's book-entry system to a Fund by the end of a later business day, generally not to exceed three business days after the effective date of the redemption order, as agreed to between the Authorized Purchaser and the transfer agent when the redemption order is placed (the “Redemption Settlement Date”). Prior to the delivery of the redemption distribution for a redemption order, the Authorized Purchaser must also have wired to the Sponsor's account at the Custodian the non-refundable transaction fee due for the redemption order. An Authorized Purchaser may not withdraw a redemption order without the prior consent of the Sponsor in its discretion.

    Determination of Redemption Distribution

    The redemption distribution from a Fund will consist of a transfer to the redeeming Authorized Purchaser of an amount in cash equal to the combined NAV of the number of Shares of a Fund included in the baskets being redeemed determined as of 4:00 p.m. E.T. on the day the order to redeem baskets is properly received, less the applicable transaction fee.
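
    The cash amounts described under “Determination of Required Payment” and “Determination of Redemption Distribution” reduce to straightforward arithmetic. The sketch below is illustrative only: the 25,000-Share basket size is taken from the filing (and is subject to change), while the function names and the example NAV and fee figures are hypothetical.

```python
BASKET_SIZE = 25_000  # Shares per Creation or Redemption Basket (subject to change per the filing)


def creation_payment(nav_per_share: float, baskets: int, transaction_fee: float) -> float:
    """Cash due to the Fund: combined NAV of the Shares in the baskets plus the transaction fee."""
    return nav_per_share * BASKET_SIZE * baskets + transaction_fee


def redemption_distribution(nav_per_share: float, baskets: int, transaction_fee: float) -> float:
    """Cash paid by the Fund: combined NAV of the Shares in the baskets less the transaction fee."""
    return nav_per_share * BASKET_SIZE * baskets - transaction_fee


# Example with a hypothetical 4:00 p.m. NAV of $25.00 per Share and a hypothetical $500 fee.
print(creation_payment(25.00, 1, 500.0))         # 625500.0
print(redemption_distribution(25.00, 1, 500.0))  # 624500.0
```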

    Payment of Redemption Distribution

    The redemption distribution due from a Fund will be paid to the Authorized Purchaser on the Redemption Settlement Date if a Fund's DTC account has been credited with the baskets to be redeemed. If a Fund's DTC account has not been credited with all of the baskets to be redeemed by the end of such date, the redemption distribution will be paid to the extent of whole baskets received.

    Suspension or Rejection of Redemption Orders

    The Sponsor may, in its discretion, suspend the right of redemption, or postpone the redemption settlement date with respect to a Fund, (1) for any period during which the Exchange or CME is closed other than customary weekend or holiday closings, or trading on the Exchange or CME is suspended or restricted, (2) for such other period as the Sponsor determines to be necessary for the protection of a Fund's Shareholders, (3) if there is a possibility that the Benchmark Component Futures Contracts of a Fund on the CME from which the NAV of a Fund is calculated will be priced at a daily price limit restriction (e.g., a daily price fluctuation limit halts trading of Big S&P Contracts on the CME), or (4) if, in the sole discretion of the Sponsor, the execution of such an order would not be in the best interest of a Fund or its Shareholders.

    Availability of Information

    Each Fund's total portfolio composition will be disclosed each business day that the Exchange is open for trading on the Funds' Web site at www.forceshares.com. The Web site disclosure of portfolio holdings will include information that market participants can use to value these positions intraday. On a daily basis, the Sponsor will disclose on the Funds' Web site the following information regarding each portfolio holding, as applicable to the type of holding: Ticker symbol, CUSIP number or other identifier, if any; a description of the holding (including the type of holding, such as the type of swap); the identity of the security, index or other asset or instrument underlying the holding, if any; for options, the option strike price; quantity held (as measured by, for example, par value, notional value or number of shares, contracts or units); maturity date, if any; market value of the holding; and the percentage weighting of the holding in a Fund's portfolio. The Web site information will be publicly available at no charge. This Web site disclosure of the portfolio composition of the Funds will occur at the same time as the disclosure by the Sponsor of the portfolio composition to Authorized Purchasers so that all market participants are provided portfolio composition information at the same time. Therefore, the same portfolio information will be provided on the public Web sites as well as in electronic files provided to Authorized Purchasers.

    The Funds' Web site will also include the NAV, the 4 p.m. Bid/Ask Midpoint as reported by the Exchange, the last trade price for each Fund's Shares as reported by the Exchange, the Shares of each Fund outstanding, the Shares of each Fund available for issuance, and the Shares of each Fund created or redeemed on that day. The prospectus, monthly “Statements of Account,” “Quarterly Performance of the Midpoint versus the NAV” (as required by the CFTC), and the “Roll Dates” (i.e., the period during which positions in S&P Interests are changed or “rolled” in order to track the changing nature of the Benchmark), as well as Forms 10-Q, Forms 10-K, and other Commission filings, for each Fund will also be posted on such Web site. The Funds' Web site will be publicly accessible at no charge.

    The Funds' Web site will contain the following information: (a) The current NAV per Share, updated daily, the prior business day's NAV, and the reported closing price; (b) the midpoint of the bid-ask price in relation to the NAV as of the time the NAV is calculated (the “Bid-Ask Price”); (c) the calculation of the premium or discount of such price against such NAV; (d) the bid-ask price of Shares determined using the highest bid and lowest offer as of the time of calculation of the NAV; (e) data in chart form displaying the frequency distribution of discounts and premiums of the Bid-Ask Price against the NAV, within appropriate ranges for each of the four (4) previous calendar quarters; (f) the prospectus; and (g) other applicable quantitative information. The Funds will also disseminate the Funds' holdings on a daily basis on the Funds' Web site.
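
    Items (c) and (e) above call for a premium/discount calculation and a frequency distribution of those values in chart form. The following is a rough sketch of how such figures could be computed; the 0.25-percentage-point bucket width and all names are illustrative assumptions rather than anything specified in the filing.

```python
from collections import Counter
from math import floor


def premium_discount_pct(bid_ask_midpoint: float, nav: float) -> float:
    """Premium (positive) or discount (negative) of the Bid-Ask Price against the NAV, in percent."""
    return (bid_ask_midpoint - nav) / nav * 100.0


def frequency_distribution(daily_premiums_pct, bucket_width: float = 0.25):
    """Group daily premium/discount observations into fixed-width ranges for charting."""
    counts = Counter()
    for value in daily_premiums_pct:
        lower = floor(value / bucket_width) * bucket_width
        counts[(round(lower, 4), round(lower + bucket_width, 4))] += 1
    return dict(counts)


# Example: a handful of hypothetical daily observations for one quarter.
observations = [premium_discount_pct(m, n) for m, n in [(25.03, 25.00), (24.95, 25.00), (25.10, 25.00)]]
print(frequency_distribution(observations))
```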

    Intra-day and closing price information from brokers and dealers or independent pricing services will be available for S&P Interests, Stop Options, and Cash Equivalents.

    The Exchange also will disseminate on a daily basis via the CTA information with respect to the recent NAV and Shares outstanding. The Exchange will also make available on its Web site daily trading volume of each of the Shares, closing prices of such Shares, and the corresponding NAV. The closing settlement prices of Primary S&P Interests are readily available from the CME, automated quotation systems, published or other public sources, or on-line information services such as Bloomberg or Reuters. Prices of Stop Options will be available on the markets on which they trade, automated quotation systems, published or other public sources, or on-line information services (or, for over-the-counter Stop Options, if any, by reference to available data for similar exchange-traded Stop Options). The Benchmark will be disseminated by one or more major market data vendors every 15 seconds during the NYSE Arca Core Trading Session of 9:30 a.m. to 4:00 p.m. E.T. Quotation and last-sale information regarding each Fund's Shares will be disseminated through the facilities of the CTA. In addition, the Funds' Web site will display the intraday and closing Benchmark level, the IFV and NAV of each Fund's Shares.

    Trading Rules

    The Funds will meet the initial and continued listing requirements applicable to TIRs in NYSE Arca Equities Rule 8.200 and Commentary .02 thereto. With respect to application of Rule 10A-3 19 under the Act, the Trust relies on the exception contained in Rule 10A-3(c)(7).20 A minimum of 100,000 Shares for each Fund will be outstanding as of the start of trading on the Exchange.

    19 17 CFR 240.10A-3.

    20 17 CFR 240.10A-3(c)(7).

    The Exchange deems the Shares to be equity securities, thus rendering trading in the Shares subject to the Exchange's existing rules governing the trading of equity securities. Shares will trade on the NYSE Arca Marketplace from 4:00 a.m. to 8:00 p.m. E.T. The Exchange has appropriate rules to facilitate transactions in the Shares during all trading sessions. As provided in NYSE Arca Equities Rule 7.6, the minimum price variation (“MPV”) for quoting and entry of orders in equity securities traded on the NYSE Arca Marketplace is $0.01, with the exception of securities that are priced less than $1.00 for which the MPV for order entry is $0.0001.

    The trading of the Shares will be subject to NYSE Arca Equities Rule 8.200, Commentary .02(e), which sets forth certain restrictions on ETP Holders acting as registered Market Makers in TIRs to facilitate surveillance.

    Trading Halts

    With respect to trading halts, the Exchange may consider all relevant factors in exercising its discretion to halt or suspend trading in the Shares. Trading may be halted because of market conditions or for reasons that, in the view of the Exchange, make trading in the Shares inadvisable. These may include: (1) The extent to which trading is not occurring in the underlying futures contracts, or (2) whether other unusual conditions or circumstances detrimental to the maintenance of a fair and orderly market are present. In addition, trading in Shares will be subject to trading halts caused by extraordinary market volatility pursuant to the Exchange's “circuit breaker” rule 21 or by the halt or suspension of trading of the underlying futures contracts.

    21See NYSE Arca Equities Rule 7.12.

    The Exchange represents that the Exchange may halt trading during the day in which an interruption to the dissemination of the IFV or the value of the underlying futures contracts occurs. If the interruption to the dissemination of the IFV or the value of the underlying futures contracts persists past the trading day in which it occurred, the Exchange will halt trading no later than the beginning of the trading day following the interruption. In addition, if the Exchange becomes aware that the NAV with respect to the Shares is not disseminated to all market participants at the same time, it will halt trading in the Shares until such time as the NAV is available to all market participants.

    Surveillance

    The Exchange represents that trading in the Shares will be subject to the existing trading surveillances administered by the Exchange, as well as cross-market surveillances administered by the Financial Industry Regulatory Authority (“FINRA”) on behalf of the Exchange, which are designed to detect violations of Exchange rules and applicable federal securities laws.22 The Exchange represents that these procedures are adequate to properly monitor Exchange trading of the Shares in all trading sessions and to deter and detect violations of Exchange rules and federal securities laws applicable to trading on the Exchange.

    22 FINRA conducts cross-market surveillances on behalf of the Exchange pursuant to a regulatory services agreement. The Exchange is responsible for FINRA's performance under this regulatory services agreement.

    The surveillances referred to above generally focus on detecting securities trading outside their normal patterns, which could be indicative of manipulative or other violative activity. When such situations are detected, surveillance analysis follows and investigations are opened, where appropriate, to review the behavior of all relevant parties for all relevant trading violations.

    The Exchange or FINRA, on behalf of the Exchange, or both, will communicate as needed regarding trading in the Shares, Primary S&P Interests and options on futures with other markets and other entities that are members of the Intermarket Surveillance Group (“ISG”), and the Exchange or FINRA, on behalf of the Exchange, or both, may obtain trading information regarding trading in such securities and financial instruments from such markets and other entities. In addition, the Exchange may obtain information regarding trading in such securities and financial instruments from markets and other entities that are members of ISG or with which the Exchange has in place a comprehensive surveillance sharing agreement.23

    23 For a list of the current members of ISG, see www.isgportal.org. The Exchange notes that not all components of the Disclosed Portfolio for a Fund may trade on markets that are members of ISG or with which the Exchange has in place a comprehensive surveillance sharing agreement.

    Not more than 10% of the net assets of a Fund in the aggregate invested in futures contracts or exchange-traded options contracts shall consist of futures contracts or exchange-traded options contracts whose principal market is not a member of ISG or is a market with which the Exchange does not have a comprehensive surveillance sharing agreement.
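
    Read as a cap measured against 10% of a Fund's net assets, the representation above amounts to a simple threshold test. The sketch below encodes that reading; the interpretation, the function name, and the example figures are assumptions for illustration only.

```python
def within_isg_limit(fund_net_assets: float,
                     non_isg_futures_and_options_value: float,
                     limit: float = 0.10) -> bool:
    """True if futures/exchange-traded options whose principal market is neither
    an ISG member nor covered by a comprehensive surveillance sharing agreement
    do not exceed the 10%-of-net-assets cap (one reading of the representation)."""
    return non_isg_futures_and_options_value <= limit * fund_net_assets


# Example with hypothetical values: $100 million in net assets, $8 million in such holdings.
print(within_isg_limit(100_000_000.0, 8_000_000.0))  # True
```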

    All statements and representations made in this filing regarding (a) the description of the portfolios, (b) limitations on portfolio holdings or reference assets, or (c) the applicability of Exchange rules and surveillance procedures shall constitute continued listing requirements for listing the Shares of a Fund on the Exchange.

    The issuer has represented to the Exchange that it will advise the Exchange of any failure by a Fund to comply with the continued listing requirements, and, pursuant to its obligations under Section 19(g)(1) of the Act, the Exchange will monitor for compliance with the continued listing requirements. If a Fund is not in compliance with the applicable listing requirements, the Exchange will commence delisting procedures under NYSE Arca Equities Rule 5.5(m).

    In addition, the Exchange has a general policy prohibiting the distribution of material, non-public information by its employees.

    Information Bulletin

    Prior to the commencement of trading, the Exchange will inform its ETP Holders in an Information Bulletin of the special characteristics and risks associated with trading the Shares. Specifically, the Information Bulletin will discuss the following: (1) The risks involved in trading the Shares during the Opening and Late Trading Sessions when an updated IFV will not be calculated or publicly disseminated; (2) the procedures for purchases and redemptions of Shares in Creation Baskets and Redemption Baskets (and that Shares are not individually redeemable); (3) NYSE Arca Equities Rule 9.2(a), which imposes a duty of due diligence on its ETP Holders to learn the essential facts relating to every customer prior to trading the Shares; (4) how information regarding the IFV is disseminated; (5) that a static IFV will be disseminated between the close of trading on the applicable futures exchange and the close of the NYSE Arca Core Trading Session; (6) the requirement that ETP Holders deliver a prospectus to investors purchasing newly issued Shares prior to or concurrently with the confirmation of a transaction; and (7) trading information.

    In addition, the Information Bulletin will advise ETP Holders, prior to the commencement of trading, of the prospectus delivery requirements applicable to the Funds. The Exchange notes that investors purchasing Shares directly from each Fund will receive a prospectus. ETP Holders purchasing Shares from each Fund for resale to investors will deliver a prospectus to such investors. The Information Bulletin will also discuss any exemptive, no-action and interpretive relief granted by the Commission from any rules under the Act.

    In addition, the Information Bulletin will reference that the Funds are subject to various fees and expenses. The Information Bulletin will also reference that the CFTC has regulatory jurisdiction over the trading of futures contracts traded on U.S. markets.

    The Information Bulletin will also disclose the trading hours of the Shares of each Fund and that the NAV for the Shares will be calculated as of the earlier of 4:00 p.m. E.T. or the close of the Exchange each day. The NAV for a particular trading day will be released after 4:15 p.m. E.T. The Bulletin will disclose that information about the Shares of each Fund is publicly available on the Funds' Web site.

    2. Statutory Basis

    The basis under the Act for this proposed rule change is the requirement under Section 6(b)(5) 24 that an exchange have rules that are designed to prevent fraudulent and manipulative acts and practices, to promote just and equitable principles of trade, to remove impediments to, and perfect the mechanism of, a free and open market and, in general, to protect investors and the public interest.

    24 15 U.S.C. 78f(b)(5).

    The Exchange believes that the proposed rule change is designed to prevent fraudulent and manipulative acts and practices in that the Shares will be listed and traded on the Exchange pursuant to the initial and continued listing criteria in NYSE Arca Equities Rule 8.200 and Commentary .02 thereto. The Exchange has in place surveillance procedures that are adequate to properly monitor trading in the Shares in all trading sessions and to deter and detect violations of Exchange rules and applicable federal securities laws. Not more than 10% of the net assets of a Fund in the aggregate invested in futures contracts or exchange-traded options contracts shall consist of futures contracts or exchange-traded options contracts whose principal market is not a member of ISG or is a market with which the Exchange does not have a comprehensive surveillance sharing agreement.

    The closing price and settlement prices of the Primary S&P Interests are readily available from the CME. In addition, such prices are available from automated quotation systems, published or other public sources, or on-line information services. The Benchmark will be disseminated by one or more major market data vendors every 15 seconds during the NYSE Arca Core Trading Session of 9:30 a.m. to 4:00 p.m. E.T. Quotation and last-sale information regarding the Shares will be disseminated through the facilities of the CTA. The IFV will be disseminated on a per Share basis by one or more major market data vendors every 15 seconds during the NYSE Arca Core Trading Session. The Exchange may halt trading during the day in which an interruption to the dissemination of the IFV or the value of the underlying futures contracts occurs. If the interruption to the dissemination of the IFV or the value of the underlying futures contracts persists past the trading day in which it occurred, the Exchange will halt trading no later than the beginning of the trading day following the interruption. In addition, if the Exchange becomes aware that the NAV with respect to the Shares is not disseminated to all market participants at the same time, it will halt trading in the Shares until such time as the NAV is available to all market participants.

    The proposed rule change is designed to promote just and equitable principles of trade and to protect investors and the public interest in that a large amount of information will be publicly available regarding the Funds and the Shares, thereby promoting market transparency. Quotation and last-sale information for the futures contracts is widely disseminated through a variety of major market data vendors worldwide. Complete real-time data for such contracts is available by subscription from Reuters and Bloomberg. The CME also provides delayed futures information on current and past trading sessions and market news free of charge on its Web site. The Benchmark will be disseminated by one or more major market data vendors every 15 seconds during the NYSE Arca Core Trading Session of 9:30 a.m. to 4:00 p.m. E.T. The NAV per Share will be calculated daily and made available to all market participants at the same time. NYSE Arca will calculate and disseminate every 15 seconds throughout the NYSE Arca Core Trading Session an updated IFV.

    The proposed rule change is designed to perfect the mechanism of a free and open market and, in general, to protect investors and the public interest in that it will facilitate the listing and trading of additional types of exchange-traded products that are principally exposed to futures contracts and that will enhance competition among market participants, to the benefit of investors and the marketplace. As noted above, the Exchange has in place surveillance procedures relating to trading in the Shares and may obtain information via ISG from other exchanges that are members of ISG or with which the Exchange has in place a comprehensive surveillance sharing agreement.

    B. Self-Regulatory Organization's Statement on Burden on Competition

    The Exchange does not believe that the proposed rule change will impose any burden on competition that is not necessary or appropriate in furtherance of the purpose of the Act. The Exchange notes that the proposed rule change will facilitate the listing and trading of additional types of actively-managed exchange-traded products that will enhance competition among market participants, to the benefit of investors and the marketplace.

    C. Self-Regulatory Organization's Statement on Comments on the Proposed Rule Change Received From Members, Participants, or Others

    No written comments were solicited or received with respect to the proposed rule change.

    III. Date of Effectiveness of the Proposed Rule Change and Timing for Commission Action

    Within 45 days of the date of publication of this notice in the Federal Register or within such longer period up to 90 days (i) as the Commission may designate if it finds such longer period to be appropriate and publishes its reasons for so finding or (ii) as to which the self-regulatory organization consents, the Commission will:

    (A) by order approve or disapprove the proposed rule change, or

    (B) institute proceedings to determine whether the proposed rule change should be disapproved.

    IV. Solicitation of Comments

    Interested persons are invited to submit written data, views, and arguments concerning the foregoing, including whether the proposed rule change is consistent with the Act. Comments may be submitted by any of the following methods:

    Electronic Comments

    • Use the Commission's Internet comment form (http://www.sec.gov/rules/sro.shtml); or

    • Send an email to rule-comments@sec.gov. Please include File Number SR-NYSEArca-2016-120 on the subject line.

    Paper Comments

    • Send paper comments in triplicate to Secretary, Securities and Exchange Commission, 100 F Street NE., Washington, DC 20549-1090.

    All submissions should refer to File Number SR-NYSEArca-2016-120. This file number should be included on the subject line if email is used. To help the Commission process and review your comments more efficiently, please use only one method. The Commission will post all comments on the Commission's Internet Web site (http://www.sec.gov/rules/sro.shtml). Copies of the submission, all subsequent amendments, all written statements with respect to the proposed rule change that are filed with the Commission, and all written communications relating to the proposed rule change between the Commission and any person, other than those that may be withheld from the public in accordance with the provisions of 5 U.S.C. 552, will be available for Web site viewing and printing in the Commission's Public Reference Room, 100 F Street NE., Washington, DC 20549, on official business days between the hours of 10:00 a.m. and 3:00 p.m. Copies of the filing also will be available for inspection and copying at the principal office of the Exchange. All comments received will be posted without change; the Commission does not edit personal identifying information from submissions. You should submit only information that you wish to make available publicly. All submissions should refer to File Number SR-NYSEArca-2016-120, and should be submitted on or before November 25, 2016.

    For the Commission, by the Division of Trading and Markets, pursuant to delegated authority.25

    25 17 CFR 200.30-3(a)(12).

    Brent J. Fields, Secretary.
    [FR Doc. 2016-26647 Filed 11-3-16; 8:45 am] BILLING CODE 8011-01-P
    SECURITIES AND EXCHANGE COMMISSION
    [Release No. 34-79197; File No. SR-ICC-2016-012]
    Self-Regulatory Organizations; ICE Clear Credit LLC; Order Approving Proposed Rule Change To Provide for the Clearance of Additional Credit Default Swap Contracts
    October 31, 2016.
    I. Introduction

    On August 29, 2016, ICE Clear Credit LLC (“ICC”) filed with the Securities and Exchange Commission (“Commission”), pursuant to Section 19(b)(1) of the Securities Exchange Act (“Act”) 1 and Rule 19b-4 thereunder,2 a proposed rule change to provide for the clearance of additional Standard Emerging Market Sovereign CDS contracts (collectively, “EM Contracts”), 2003 ISDA Definitions of Standard Western European Sovereign CDS contracts (collectively, “SWES Contracts”), and an additional Asia/Pacific Sovereign CDS contract (the “Asia/Pacific Contract”). The proposed rule change was published for comment in the Federal Register on September 16, 2016.3 The Commission did not receive comments on the proposed rule change. For the reasons discussed below, the Commission is approving the proposed rule change.

    1 15 U.S.C. 78s(b)(1).

    2 17 CFR 240.19b-4.

    3 Securities Exchange Act Release No. 34-78818 (Sept. 12, 2016), 81 FR 63831 (Sept. 19, 2016) (SR-ICC-2016-012).

    II. Description of the Proposed Rule Change

    The purpose of the proposed rule change is to adopt rules that will provide the basis for ICC to clear additional credit default swap contracts.

    ICC has proposed amending Subchapter 26D of its Rules to provide for the clearance of additional EM Contracts, specifically the Republic of Panama, Abu Dhabi, Dubai, the State of Israel and the State of Qatar. ICC plans to offer these additional EM Contracts on the 2003 and 2014 ISDA Credit Derivatives Definitions.

    ICC represents that these additional EM Contracts have terms consistent with the other EM Contracts approved for clearing at ICC and governed by Subchapter 26D of the Rules. Minor revisions to Subchapter 26D (Standard Emerging Market Sovereign (“SES”) Single Name) will also be made to provide for clearing the additional EM Contracts. Specifically, in Rule 26D-102 (Definitions), “Eligible SES Reference Entities” will be modified to include the Republic of Panama, Abu Dhabi, Dubai, the State of Israel and the State of Qatar in the list of specific Eligible SES Reference Entities to be cleared by ICC.

    Additionally, ICC has proposed amending Subchapter 26I of its Rules to provide for the clearance of 2003 ISDA Definitions of SWES Contracts. ICC currently clears the 2014 ISDA Definitions of ten SWES Contracts, namely the Republic of Ireland, the Italian Republic, the Portuguese Republic, the Kingdom of Spain, the Kingdom of Belgium, the Republic of Austria, the Kingdom of the Netherlands, the Federal Republic of Germany, the French Republic and the United Kingdom of Great Britain and Northern Ireland. The proposed changes to Subchapter 26I will allow ICC to offer clearing for the 2003 ISDA Definitions of these SWES Contracts.

    Minor revisions to Subchapter 26I (Standard Western European (“SWES”) Single Name) will be made to provide for clearing the 2003 ISDA Definitions of SWES Contracts. Specifically, in Rule 26I-102 (Definitions), the definitions of “Eligible SWES Reference Obligations”, “List of Eligible SWES Reference Entities” and “SWES Contract Reference Obligations” will be updated to distinguish between the 2003- and 2014-Type CDS Contracts, and the corresponding Applicable Credit Derivatives Definitions.4 Rule 26I-309 (Acceptance of SWES Contracts by ICE Clear Credit) will be revised in part (c) to note that a CDS Participant may not submit a Trade for clearance as a SWES contract, and any such Trade shall not be a Confirming Trade, if the acceptance would be at a time when the CDS Participant (or any Non-Participant Party for whom such CDS Participant is acting) is, or is an Affiliate of, the Eligible SWES Reference Entity for such SWES Contract or is subject to an agreement under which it is reasonably likely that the CDS Participant (or any such Non-Participant Party) will become, or will become an Affiliate of, the Eligible SWES Reference Entity for such SWES Contract. Rule 26I-309 will also be revised in part (e) to address and distinguish between relevant successor or other events under both 2003- and 2014-Type CDS Contracts, and the corresponding Applicable Credit Derivatives Definitions.

    4 As defined in Rule 20-102 (Applicable Credit Derivatives Definitions).

    Rule 26I-315 (Terms of the Cleared SWES Contract) will be revised to provide reference to provisions of the proper ISDA Definitions, and corresponding changes to provision numbering will be made as necessary. Rule 26I-315(h) will be revised to refer to the Applicable Credit Derivatives Definitions and eligible Seniority Level, as appropriate.

    Defined terms in Rule 26I-316 (Physical Settlement Matrix Updates) will be updated to refer specifically to SWES contracts. Rule 26I-616 (Contract Modification) will be revised to note that it shall not constitute a Contract Modification if the Board (or its designee) updates the List of Eligible SWES Reference Entities (and modifies the terms and conditions of related SWES Contracts) to give effect to determinations of Succession Events.

    Finally, ICC has proposed amending Subchapter 26L of its rules to provide for the clearance of an additional Asia/Pacific Contract, namely the Kingdom of Thailand. ICC plans to offer this contract on the 2003 and 2014 ISDA Credit Derivatives Definitions.

    ICC represents that the additional Asia/Pacific Contract has terms consistent with the other Asia/Pacific Contracts approved for clearing at ICC and governed by Subchapter 26L of the Rules. Minor revisions to Subchapter 26L (Asia/Pacific Sovereign (“SAS”) Single Name) will be made to provide for clearing the additional Asia/Pacific Contract. Specifically, in Rule 26L-102 (Definitions), “Eligible SAS Reference Entities” will be modified to include the Kingdom of Thailand in the list of specific Eligible SAS Reference Entities to be cleared by ICC.

    III. Discussion and Commission Findings

    Section 19(b)(2)(C) of the Act 5 directs the Commission to approve a proposed rule change of a self-regulatory organization if the Commission finds that the proposed rule change is consistent with the requirements of the Act and the rules and regulations thereunder applicable to such self-regulatory organization. Section 17A(b)(3)(F) of the Act 6 requires, among other things, that the rules of a clearing agency be designed to promote the prompt and accurate clearance and settlement of securities transactions, and to the extent applicable, derivative agreements, contracts and transactions, to assure the safeguarding of securities and funds which are in the custody or control of the clearing agency or for which it is responsible and, in general, to protect investors and the public interest.

    5 15 U.S.C. 78s(b)(2)(C).

    6 15 U.S.C. 78q-1(b)(3)(F).

    ICC has represented that the additional EM Contracts, Asia/Pacific Contract and the 2003 ISDA Definitions of SWES Contracts proposed for clearing are similar to the EM, SWES and Asia/Pacific Contracts that are currently cleared by ICC. ICC also represents that these contracts will be cleared pursuant to ICC's existing clearing arrangements and related financial safeguards, protections and risk management procedures. The Commission therefore finds that the proposed rule change is designed to promote the prompt and accurate clearance and settlement of securities transactions and, to the extent applicable, derivative agreements, contracts, and transactions, and to assure the safeguarding of securities and funds which are in the custody or control of the clearing agency or for which it is responsible and, in general, to protect investors and the public interest.

    IV. Conclusion

    On the basis of the foregoing, the Commission finds that the proposal is consistent with the requirements of the Act 7 and the rules and regulations thereunder.

    7 15 U.S.C. 78q-1.

    It is therefore ordered, pursuant to Section 19(b)(2) of the Act,8 that the proposed rule change (File No. SR-ICC-2016-012) be, and hereby is, approved.9

    8 15 U.S.C. 78s(b)(2).

    9 In approving the proposed rule change, the Commission considered the proposal's impact on efficiency, competition and capital formation. 15 U.S.C. 78c(f).

    10 17 CFR 200.30-3(a)(12).

    For the Commission, by the Division of Trading and Markets, pursuant to delegated authority.10

    Brent J. Fields, Secretary.
    [FR Doc. 2016-26644 Filed 11-3-16; 8:45 am] BILLING CODE 8011-01-P
    SECURITIES AND EXCHANGE COMMISSION
    [Release No. 34-79198; File No. SR-MIAX-2016-37]
    Self-Regulatory Organizations; Miami International Securities Exchange LLC; Notice of Filing and Immediate Effectiveness of a Proposed Rule Change To Amend Its Fee Schedule
    October 31, 2016.

    Pursuant to the provisions of Section 19(b)(1) of the Securities Exchange Act of 1934 (“Act”) 1 and Rule 19b-4 thereunder,2 notice is hereby given that on October 17, 2016, Miami International Securities Exchange LLC (“MIAX” or “Exchange”) filed with the Securities and Exchange Commission (“Commission”) a proposed rule change as described in Items I, II, and III below, which Items have been prepared by the Exchange. The Commission is publishing this notice to solicit comments on the proposed rule change from interested persons.

    1 15 U.S.C. 78s(b)(1).

    2 17 CFR 240.19b-4.

    I. Self-Regulatory Organization's Statement of the Terms of Substance of the Proposed Rule Change

    The Exchange is filing a proposal to amend the MIAX Options Fee Schedule (the “Fee Schedule”). While changes to the Fee Schedule pursuant to this proposal are effective upon filing, the Exchange has designated these changes to be operative on October 17, 2016.

    The text of the proposed rule change is available on the Exchange's Web site at http://www.miaxoptions.com/filter/wotitle/rule_filing, at MIAX's principal office, and at the Commission's Public Reference Room.

    II. Self-Regulatory Organization's Statement of the Purpose of, and Statutory Basis for, the Proposed Rule Change

    In its filing with the Commission, the Exchange included statements concerning the purpose of and basis for the proposed rule change and discussed any comments it received on the proposed rule change. The text of these statements may be examined at the places specified in Item IV below. The Exchange has prepared summaries, set forth in sections A, B, and C below, of the most significant aspects of such statements.

    A. Self-Regulatory Organization's Statement of the Purpose of, and Statutory Basis for, the Proposed Rule Change 1. Purpose

    The Exchange proposes to amend the MIAX Options Fee Schedule (the “Fee Schedule”) to offer two (2) additional Limited Service MIAX Express Interface (“MEI”) Ports to Market Makers.3

    3 The term “Market Makers” refers to Lead Market Makers (“LMMs”), Primary Lead Market Makers (“PLMMs”), and Registered Market Makers (“RMMs”) collectively. See Exchange Rule 100.

    Currently, MIAX assesses monthly MEI Port Fees on Market Makers based upon the number of MIAX matching engines 4 used by the Market Maker. Market Makers are allocated two (2) Full Service MEI Ports 5 and two (2) Limited Service MEI Ports 6 per matching engine to which they connect. The Exchange currently assesses the following MEI Port fees: (i) $5,000 for Market Maker Assignments in up to 5 option classes or up to 10% of option classes by volume; (ii) $10,000 for Market Maker Assignments in up to 10 option classes or up to 20% of option classes by volume; (iii) $14,000 for Market Maker Assignments in up to 40 option classes or up to 35% of option classes by volume; (iv) $17,500 for Market Maker Assignments in up to 100 option classes or up to 50% of option classes by volume; and (v) $20,500 for Market Maker Assignments in over 100 option classes or over 50% of option classes by volume up to all option classes listed on MIAX.7 In each of the foregoing categories, the stated fee applies if the lesser of the two applicable measurements is met. For example, a Market Maker that wishes to make markets in just one symbol would require the two (2) MEI Ports in a single matching engine; a Market Maker wishing to make markets in all symbols traded on MIAX would require the two (2) MEI Ports in each of the Exchange's matching engines. The Exchange also currently charges $50 per month for each additional Limited Service MEI Port per matching engine for Market Makers, over and above the two (2) Limited Service MEI Ports per matching engine that are allocated with the Full Service MEI Ports. The Full Service MEI Ports, Limited Service MEI Ports, and the additional Limited Service MEI Ports all include access to MIAX's primary and secondary data centers and its disaster recovery center.

    4 A “matching engine” is a part of the MIAX electronic system that processes options quotes and trades on a symbol-by-symbol basis. Some matching engines will process option classes with multiple root symbols, and other matching engines will be dedicated to one single option root symbol (for example, options on SPY will be processed by one single matching engine that is dedicated only to SPY). A particular root symbol may only be assigned to a single designated matching engine. A particular root symbol may not be assigned to multiple matching engines.

    5 Full Service MEI Ports provide Market Makers with the ability to send Market Maker quotes, eQuotes, and quote purge messages to the MIAX System. Full Service MEI Ports are also capable of receiving administrative information. Market Makers are limited to two Full Service MEI Ports per matching engine.

    6 Limited Service MEI Ports provide Market Makers with the ability to send eQuotes and quote purge messages only, but not Market Maker Quotes, to the MIAX System. Limited Service MEI Ports are also capable of receiving administrative information. Market Makers initially receive two Limited Service MEI Ports per matching engine.

    7See MIAX Fee Schedule, Section 5)d)ii).
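
    The tiered MEI Port fees and the $50 monthly charge for each additional Limited Service MEI Port described above can be summarized in a short sketch. The tier-selection rule below reflects one reading of the “lesser of the two applicable measurements” language (a tier applies if either the class count or the percentage of classes by volume falls within it), and the per-matching-engine treatment of additional ports, the function names, and the example figures are assumptions.

```python
# Tier schedule from the fee description above: (class-count ceiling, %-of-classes-by-volume ceiling, monthly fee).
MEI_PORT_FEE_TIERS = [
    (5, 0.10, 5_000),
    (10, 0.20, 10_000),
    (40, 0.35, 14_000),
    (100, 0.50, 17_500),
]
TOP_TIER_FEE = 20_500             # over 100 classes or over 50% of classes by volume
ADDITIONAL_LIMITED_PORT_FEE = 50  # per additional Limited Service MEI Port, per matching engine, per month


def monthly_mei_port_fee(num_classes: int, pct_of_classes_by_volume: float,
                         additional_limited_ports_per_engine: int = 0,
                         matching_engines: int = 1) -> int:
    """Assessed fee under one reading of the tiers: a tier applies if either
    measurement (class count or percentage of classes by volume) falls within
    it, i.e., the lesser of the two measurements governs."""
    fee = TOP_TIER_FEE
    for max_classes, max_pct, tier_fee in MEI_PORT_FEE_TIERS:
        if num_classes <= max_classes or pct_of_classes_by_volume <= max_pct:
            fee = tier_fee
            break
    return fee + ADDITIONAL_LIMITED_PORT_FEE * additional_limited_ports_per_engine * matching_engines


# Example: assignments in 50 classes representing 15% of classes by volume,
# with two extra Limited Service MEI Ports on each of three matching engines.
print(monthly_mei_port_fee(50, 0.15, additional_limited_ports_per_engine=2, matching_engines=3))  # 10300
```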

    The Exchange originally added the Limited Service MEI Ports to enhance the MEI Port connectivity made available to Market Makers, and has subsequently made additional Limited Service MEI Ports available to Market Makers.8 Limited Service MEI Ports have been well received by Market Makers since their addition. The Exchange now proposes to offer to Market Makers the ability to purchase an additional two (2) Limited Service MEI Ports per matching engine over and above the current four (4) additional Limited Service MEI Ports per matching engine that are available for purchase by Market Makers. The Exchange proposes to charge the same amount that it currently charges, $50 per month, for each extra Limited Service MEI Port per matching engine. The Exchange proposes making a corresponding change to footnote 31 of the Exchange's Fee Schedule to specify that Market Makers will now be limited to purchasing six (6) additional Limited Service MEI Ports per matching engine, for a total of eight (8) per matching engine. All other fees related to MEI Ports shall remain unchanged.

    8See Securities Exchange Act Release Nos. 70137 (August 8, 2013), 78 FR 49586 (August 14, 2013) (SR-MIAX-2013-39); 70903 (November 20, 2013), 78 FR 228 [sic] (November 26, 2013) (SR-MIAX-2013-52); and 78950 (September 27, 2016), 81 FR 68084 (October 3, 2016) (SR-MIAX-2016-33).

    The purpose of this amendment to the Fee Schedule is to accommodate the Exchange's introduction of complex orders and quotes, as well as to provide Market Makers with access to additional functionality to be introduced in the future, thereby continuing to offer Market Makers greater and improved technical flexibility to connect additional Limited Service MEI Ports to independent servers that host their eQuote and purge functionality. The Exchange believes that the offering of additional ports will help Market Makers mitigate the risk of using the same server for all of their Market Maker simple and complex quoting. By using the additional Limited Service MEI Ports for risk purposes, Market Makers can place purge functionality on a different server than the Market Maker quoting server (via the Limited Service MEI Ports), which provides them a failsafe for getting out of the market in case they have an issue with the quote server. Market Makers can also use the extra Limited Service MEI Ports to submit eQuotes. Since eQuotes are frequently generated by a different algorithm that determines when to respond to an auction message, the Exchange believes that the offering of additional ports will further enable Market Makers to connect to a different server that processes auctions and eQuotes rather than forcing them to use their Market Maker Standard quote server as a gateway for communicating eQuotes to MIAX.

    2. Statutory Basis

    The Exchange believes that its proposal to amend its Fee Schedule is consistent with Section 6(b) of the Act 9 in general, and furthers the objectives of Section 6(b)(4) of the Act 10 in particular, in that it provides for the equitable allocation of reasonable dues, fees and other charges among members and issuers and other persons using any facility or system which the Exchange operates or controls. The Exchange also believes the proposal furthers the objectives of Section 6(b)(5) of the Act 11 in that it is designed to promote just and equitable principles of trade, to remove impediments to and perfect the mechanism of a free and open market and a national market system, and, in general to protect investors and the public interest and is not designed to permit unfair discrimination between customers, issuers, brokers and dealers.

    9 15 U.S.C. 78f(b).

    10 15 U.S.C. 78f(b)(4).

    11 15 U.S.C. 78f(b)(5).

    The Exchange believes that its proposal is consistent with Section 6(b)(4) of the Act because only Market Makers that decide that they need the extra Limited Service MEI Ports will be charged the additional fee. The Exchange further believes that the availability of the additional Limited Service MEI Ports is equitable and not unfairly discriminatory because it further enhances Market Makers' access to the MIAX System and consequently enhances the marketplace by helping Market Makers to better manage risk, thus preserving the integrity of the MIAX markets, all to the benefit of and protection of investors and the public as a whole.

    The Exchange also believes that its proposal is consistent with the objectives of Section 6(b)(5) of the Act 12 because the additional Limited Service MEI Ports are available to all Market Makers and the proposed fees assessable for the additional Limited Service MEI Ports apply equally to all Market Makers regardless of type, and access to the Exchange is offered on terms that are not unfairly discriminatory. The Exchange designed the fee rates in order to provide objective criteria for Market Makers of different sizes and business models to be assessed a MEI Port fee and to have technical connectivity that best matches their quoting activity on the Exchange, and the offering of additional Limited Service MEI Ports comports with this objective.

    12 15 U.S.C. 78f(b)(5).

    B. Self-Regulatory Organization's Statement on Burden on Competition

    The Exchange does not believe that the proposed rule change will result in any burden on competition that is not necessary or appropriate in furtherance of the purposes of the Act. The Exchange believes that the proposal increases both intermarket and intramarket competition by enabling Market Makers to enhance their connectivity to the Exchange in a manner that is designed to allow Market Makers of different sizes and business models to be assessed a MEI Port fee and to have technical connectivity that best matches their quoting activity on the Exchange; the offering of additional Limited Service MEI Ports comports with this objective. The Exchange believes that the proposal will increase competition amongst Market Makers of different sizes and business models by encouraging Market Makers to connect additional Limited Service Ports to independent servers that host their eQuote and purge functionality. The Exchange notes that it operates in a highly competitive market in which market participants can readily favor competing venues if they deem fee levels at a particular venue to be excessive. In such an environment, the Exchange must continually adjust its fees to remain competitive with other exchanges and in order to attract market participants to use its services. The Exchange believes that the proposal reflects this competitive environment because it increases the Exchange's fees in a manner that continues to encourage market participants to register as Market Makers on the Exchange, to provide liquidity, and to attract order flow. To the extent that this purpose is achieved, all the Exchange's market participants should benefit from the improved market liquidity.

    C. Self-Regulatory Organization's Statement on Comments on the Proposed Rule Change Received From Members, Participants, or Others

    Written comments were neither solicited nor received.

    III. Date of Effectiveness of the Proposed Rule Change and Timing for Commission Action

    The foregoing rule change has become effective pursuant to Section 19(b)(3)(A)(ii) of the Act,13 and Rule 19b-4(f)(2) 14 thereunder. At any time within 60 days of the filing of the proposed rule change, the Commission summarily may temporarily suspend such rule change if it appears to the Commission that such action is necessary or appropriate in the public interest, for the protection of investors, or otherwise in furtherance of the purposes of the Act. If the Commission takes such action, the Commission shall institute proceedings to determine whether the proposed rule should be approved or disapproved.

    13 15 U.S.C. 78s(b)(3)(A)(ii).

    14 17 CFR 240.19b-4(f)(2).

    IV. Solicitation of Comments

    Interested persons are invited to submit written data, views, and arguments concerning the foregoing, including whether the proposed rule change is consistent with the Act. Comments may be submitted by any of the following methods:

    Electronic Comments

    • Use the Commission's Internet comment form (http://www.sec.gov/rules/sro.shtml); or

    • Send an email to rule-comments@sec.gov. Please include File Number SR-MIAX-2016-37 on the subject line.

    Paper Comments

    • Send paper comments in triplicate to Secretary, Securities and Exchange Commission, 100 F Street NE., Washington, DC 20549-1090.

    All submissions should refer to File Number SR-MIAX-2016-37. This file number should be included on the subject line if email is used. To help the Commission process and review your comments more efficiently, please use only one method. The Commission will post all comments on the Commission's Internet Web site (http://www.sec.gov/rules/sro.shtml). Copies of the submission, all subsequent amendments, all written statements with respect to the proposed rule change that are filed with the Commission, and all written communications relating to the proposed rule change between the Commission and any person, other than those that may be withheld from the public in accordance with the provisions of 5 U.S.C. 552, will be available for Web site viewing and printing in the Commission's Public Reference Room, 100 F Street NE., Washington, DC 20549, on official business days between the hours of 10:00 a.m. and 3:00 p.m. Copies of the filing also will be available for inspection and copying at the principal office of the Exchange. All comments received will be posted without change; the Commission does not edit personal identifying information from submissions. You should submit only information that you wish to make available publicly. All submissions should refer to File Number SR-MIAX-2016-37 and should be submitted on or before November 25, 2016.

    15 17 CFR 200.30-3(a)(12).

    For the Commission, by the Division of Trading and Markets, pursuant to delegated authority.15

    Brent J. Fields, Secretary.
    [FR Doc. 2016-26645 Filed 11-3-16; 8:45 am] BILLING CODE 8011-01-P
    SMALL BUSINESS ADMINISTRATION
    [Disaster Declaration #14911 and #14912]
    North Carolina Disaster Number NC-00081
    AGENCY:

    U.S. Small Business Administration.

    ACTION:

    Amendment 9.

    SUMMARY:

    This is an amendment of the Presidential declaration of a major disaster for the State of North Carolina (FEMA-4285-DR), dated 10/10/2016.

    Incident: Hurricane Matthew.

    Incident Period: 10/04/2016 and continuing.

    Effective Date: 10/25/2016.

    Physical Loan Application Deadline Date: 12/09/2016.

    EIDL Loan Application Deadline Date: 07/10/2017.

    ADDRESSES:

    Submit completed loan applications to: U.S. Small Business Administration, Processing and Disbursement Center, 14925 Kingsport Road, Fort Worth, TX 76155.

    FOR FURTHER INFORMATION CONTACT:

    A. Escobar, Office of Disaster Assistance, U.S. Small Business Administration, 409 3rd Street SW., Suite 6050, Washington, DC 20416.

    SUPPLEMENTARY INFORMATION:

    The notice of the Presidential disaster declaration for the State of North Carolina, dated 10/10/2016 is hereby amended to include the following areas as adversely affected by the disaster:

    Primary Counties (Physical Damage and Economic Injury Loans): Camden, Chowan, Currituck, Pasquotank.
    Contiguous Counties (Economic Injury Loans Only): Virginia: Chesapeake City, Virginia Beach City.

    All other information in the original declaration remains unchanged.

    (Catalog of Federal Domestic Assistance Number 59008) Lisa Lopez-Suarez, Acting Associate Administrator for Disaster Assistance.
    [FR Doc. 2016-26637 Filed 11-3-16; 8:45 am] BILLING CODE 8025-01-P
    SMALL BUSINESS ADMINISTRATION
    [Disaster Declaration #14927 and #14928]
    South Carolina Disaster Number SC-00041
    AGENCY:

    U.S. Small Business Administration.

    ACTION:

    Amendment 1.

    SUMMARY:

    This is an amendment of the Presidential declaration of a major disaster for Public Assistance Only for the State of South Carolina (FEMA-4286-DR), dated 10/18/2016.

    Incident: Hurricane Matthew.

    Incident Period: 10/04/2016 and continuing.

    Effective Date: 10/25/2016.

    Physical Loan Application Deadline Date: 12/19/2016.

    Economic Injury (EIDL) Loan Application Deadline Date: 07/18/2017.

    ADDRESSES:

    Submit completed loan applications to: U.S. Small Business Administration, Processing and Disbursement Center, 14925 Kingsport Road, Fort Worth, TX 76155.

    FOR FURTHER INFORMATION CONTACT:

    A. Escobar, Office of Disaster Assistance, U.S. Small Business Administration, 409 3rd Street SW., Suite 6050, Washington, DC 20416.

    SUPPLEMENTARY INFORMATION:

    The notice of the President's major disaster declaration for Private Non-Profit organizations in the State of South Carolina, dated 10/18/2016, is hereby amended to include the following areas as adversely affected by the disaster.

    Primary Counties: Calhoun, Charleston, Chesterfield, Clarendon, Darlington, Kershaw, Marlboro, Richland.

    All other information in the original declaration remains unchanged.

    (Catalog of Federal Domestic Assistance Number 59008) Lisa Lopez-Suarez, Acting Associate Administrator for Disaster Assistance.
    [FR Doc. 2016-26641 Filed 11-3-16; 8:45 am] BILLING CODE 8025-01-P
    SMALL BUSINESS ADMINISTRATION
    [Disaster Declaration #14936 and #14937]
    Florida Disaster Number FL-00120
    AGENCY:

    U.S. Small Business Administration.

    ACTION:

    Amendment 1.

    SUMMARY:

    This is an amendment of the Presidential declaration of a major disaster for Public Assistance Only for the State of Florida (FEMA-4283-DR), dated 10/24/2016.

    Incident: Hurricane Matthew.

    Incident Period: 10/03/2016 through 10/19/2016.

    Effective Date: 10/25/2016.

    Physical Loan Application Deadline Date: 12/23/2016.

    Economic Injury (EIDL) Loan Application Deadline Date: 07/24/2017.

    ADDRESSES:

    Submit completed loan applications to: U.S. Small Business Administration, Processing and Disbursement Center, 14925 Kingsport Road, Fort Worth, TX 76155.

    FOR FURTHER INFORMATION CONTACT:

    A. Escobar, Office of Disaster Assistance, U.S. Small Business Administration, 409 3rd Street SW., Suite 6050, Washington, DC 20416.

    SUPPLEMENTARY INFORMATION:

    The notice of the President's major disaster declaration for Private Non-Profit organizations in the State of Florida, dated 10/24/2016, is hereby amended to establish the incident period for this disaster as beginning 10/03/2016 and continuing through 10/19/2016.

    All other information in the original declaration remains unchanged.

    (Catalog of Federal Domestic Assistance Number 59008) Lisa Lopez-Suarez, Acting Associate Administrator for Disaster Assistance.
    [FR Doc. 2016-26643 Filed 11-3-16; 8:45 am] BILLING CODE 8025-01-P
    SMALL BUSINESS ADMINISTRATION [Disaster Declaration #14936 and #14937] Florida Disaster Number FL-00120 AGENCY:

    U.S. Small Business Administration.

    ACTION:

    Amendment 2.

    SUMMARY:

    This is an amendment of the Presidential declaration of a major disaster for Public Assistance Only for the State of Florida (FEMA-4283-DR), dated 10/24/2016.

    Incident: Hurricane Matthew.

    Incident Period: 10/03/2016 through 10/19/2016.

    Effective Date: 10/25/2016.

    Physical Loan Application Deadline Date: 12/23/2016.

    Economic Injury (EIDL) Loan Application Deadline Date: 07/24/2017.

    ADDRESSES:

    Submit completed loan applications to: U.S. Small Business Administration, Processing and Disbursement Center, 14925 Kingsport Road, Fort Worth, TX 76155.

    FOR FURTHER INFORMATION CONTACT:

    A. Escobar, Office of Disaster Assistance, U.S. Small Business Administration, 409 3rd Street SW., Suite 6050, Washington, DC 20416.

    SUPPLEMENTARY INFORMATION:

    The notice of the President's major disaster declaration for Private Non-Profit organizations in the State of Florida, dated 10/24/2016, is hereby amended to include the following areas as adversely affected by the disaster.

    Primary Counties: Brevard, Duval, Flagler, Palm Beach, Saint Lucie.

    All other information in the original declaration remains unchanged.

    (Catalog of Federal Domestic Assistance Number 59008) Lisa Lopez-Suarez, Acting Associate Administrator for Disaster Assistance.
    [FR Doc. 2016-26638 Filed 11-3-16; 8:45 am] BILLING CODE 8025-01-P
    SMALL BUSINESS ADMINISTRATION [Disaster Declaration #14925 and #14926] Florida Disaster Number FL-00121 AGENCY:

    U.S. Small Business Administration.

    ACTION:

    Amendment 3.

    SUMMARY:

    This is an amendment of the Presidential declaration of a major disaster for the State of Florida (FEMA-4283-DR), dated 10/17/2016.

    Incident: Hurricane Matthew.

    Incident Period: 10/03/2016 through 10/19/2016.

    Effective Date: 10/25/2016.

    Physical Loan Application Deadline Date: 12/16/2016.

    EIDL Loan Application Deadline Date: 07/17/2017.

    ADDRESSES:

    Submit completed loan applications to: U.S. Small Business Administration, Processing and Disbursement Center, 14925 Kingsport Road, Fort Worth, TX 76155.

    FOR FURTHER INFORMATION CONTACT:

    A. Escobar, Office of Disaster Assistance, U.S. Small Business Administration, 409 3rd Street SW., Suite 6050, Washington, DC 20416.

    SUPPLEMENTARY INFORMATION:

    The notice of the President's major disaster declaration for the State of Florida, dated 10/17/2016, is hereby amended to establish the incident period for this disaster as beginning 10/03/2016 and continuing through 10/19/2016.

    All other information in the original declaration remains unchanged.

    (Catalog of Federal Domestic Assistance Number 59008) Lisa Lopez-Suarez, Acting Associate Administrator for Disaster Assistance.
    [FR Doc. 2016-26648 Filed 11-3-16; 8:45 am] BILLING CODE 8025-01-P
    SMALL BUSINESS ADMINISTRATION Military Reservist Economic Injury Disaster Loans Interest Rate for First Quarter FY 2017

    In accordance with 13 CFR 123.512 (Business Credit and Assistance), the following interest rate is effective for Military Reservist Economic Injury Disaster Loans approved on or after October 26, 2016.

    Military Reservist Loan Program—3.125%
    Dated: October 27, 2016. Lisa Lopez-Suarez, Acting Associate Administrator for Disaster Assistance.
    [FR Doc. 2016-26642 Filed 11-3-16; 8:45 am] BILLING CODE 8025-01-P
    SMALL BUSINESS ADMINISTRATION [Disaster Declaration #14925 and #14926] Florida Disaster Number FL-00121 AGENCY:

    U.S. Small Business Administration.

    ACTION:

    Amendment 4.

    SUMMARY:

    This is an amendment of the Presidential declaration of a major disaster for the State of Florida (FEMA-4283-DR), dated 10/17/2016.

    Incident: Hurricane Matthew.

    Incident Period: 10/03/2016 through 10/19/2016.

    Effective Date: 10/25/2016.

    Physical Loan Application Deadline Date: 12/16/2016.

    EIDL Loan Application Deadline Date: 07/17/2017.

    ADDRESSES:

    Submit completed loan applications to: U.S. Small Business Administration, Processing and Disbursement Center, 14925 Kingsport Road, Fort Worth, TX 76155.

    FOR FURTHER INFORMATION CONTACT:

    A. Escobar, Office of Disaster Assistance, U.S. Small Business Administration, 409 3rd Street SW., Suite 6050, Washington, DC 20416.

    SUPPLEMENTARY INFORMATION:

    The notice of the Presidential disaster declaration for the State of Florida, dated 10/17/2016, is hereby amended to include the following areas as adversely affected by the disaster:

    Primary Counties: (Physical Damage and Economic Injury Loans): Nassau, Seminole. Contiguous Counties: (Economic Injury Loans Only): Georgia: Camden, Charlton.

    All other information in the original declaration remains unchanged.

    (Catalog of Federal Domestic Assistance Number 59008) Lisa Lopez-Suarez, Acting Associate Administrator for Disaster Assistance.
    [FR Doc. 2016-26636 Filed 11-3-16; 8:45 am] BILLING CODE 8025-01-P
    DEPARTMENT OF STATE [Public Notice: 9782] 30-Day Notice of Proposed Information Collection: Statement of Material Change, Merger, Acquisition, or Divestment of a Registered Party ACTION:

    Notice of request for public comment.

    SUMMARY:

    The Department of State is seeking Office of Management and Budget (OMB) approval for the information collection described below. In accordance with the Paperwork Reduction Act of 1995, we are requesting comments on this collection from all interested individuals and organizations. The purpose of this notice is to allow 30 days for public comment preceding submission of the collection to OMB.

    DATES:

    The Department will accept comments from the public up to December 5, 2016.

    ADDRESSES:

    Direct comments to the Department of State Desk Officer in the Office of Information and Regulatory Affairs (OIRA) at the Office of Management and Budget (OMB). You may submit comments by the following methods:

    Email: [email protected]. You must include the DS form number, information collection title, and the OMB control number in the subject line of your message.

    Fax: 202-395-5806. Attention: Desk Officer for Department of State.

    You must include the DS form number, information collection title, and OMB control number in any correspondence.

    FOR FURTHER INFORMATION CONTACT:

    Direct requests for additional information regarding the collection listed in this notice, including requests for copies of the proposed collection instrument and supporting documents, to Steve Derscheid—Management Analyst, who may be reached at [email protected].

    SUPPLEMENTARY INFORMATION:

    Title of Information Collection: Statement of Material Change, Merger, Acquisition, or Divestiture of a Registered Party.

    OMB Control Number: 1405-XXXX.

    Type of Request: New Collection.

    Originating Office: Directorate of Defense Trade Controls, Bureau of Political-Military Affairs, Department of State (T/PM/DDTC).

    Form Number: DS-7789.

    Respondents: Individuals and companies registered with DDTC and engaged in the business of manufacturing, brokering, exporting, or temporarily importing defense hardware or defense technology data.

    Estimated Number of Respondents: 1,700.

    Estimated Number of Responses: 1,700.

    Average Time per Response: 2 hours.

    Total Estimated Burden Time: 3,400 hours.

    Frequency: On occasion.

    Obligation to Respond: Mandatory.

    We are soliciting public comments to permit the Department to:

    • Evaluate whether the proposed information collection is necessary for the proper functions of the Department.

    • Evaluate the accuracy of our estimate of the time and cost burden for this proposed collection, including the validity of the methodology and assumptions used.

    • Enhance the quality, utility, and clarity of the information to be collected.

    • Minimize the reporting burden on those who are to respond, including the use of automated collection techniques or other forms of information technology.

    Please note that comments submitted in response to this Notice are public record. Before including any detailed personal information, you should be aware that your comments as submitted, including your personal information, will be available for public review.

    Abstract of Proposed Collection

    The International Traffic in Arms Regulations (ITAR) §§ 122.4 and 129.8 require registrants to notify the Directorate of Defense Trade Controls of the Department of State in the event of a change in registration information, in the event a foreign person or entity acquires a registered entity, or if the registrant is a party to a merger, acquisition, or divestiture (MAD) of an entity producing or marketing ITAR-controlled items. Based on certain conditions enunciated in the ITAR, respondents must notify DDTC of these changes at differing intervals—no less than 60 days prior to the event and/or within 5 days of its culmination. This information is necessary for DDTC to ensure registration records are accurate; to determine whether the transaction is in compliance with the regulations (e.g., with respect to ITAR § 126.1); to assess the steps that need to be taken with respect to existing authorizations (e.g., transfers of licenses); and to evaluate the implications for U.S. national security and foreign policy. This information collection is estimated to take an average of 2 hours to execute, and DDTC expects to receive approximately 1,700 responses per year; therefore, the total burden for this collection will be 3,400 hours per year.

    Summary of Public Comments Received

    On June 20, 2016, DDTC published a Federal Register Notice (81 FR 39992) soliciting public comments through August 19, 2016. DDTC received nine public comments during this period. One comment was not germane to the proposed information collection. The remaining eight comments provided significant feedback on the form. These comments are summarized below:

    One commenter remarked that the proposed 2-hour burden for the DS-7789 is low and should be revised. DDTC replies that the burden is an average of all submissions using the DS-7789, and while some responses will require a longer period based on the complexity of a transaction, many will be far below the declared burden for the form. Similarly, information previously provided to the Directorate via the DS-2032, Statement of Registration, will auto-populate into the DS-7789, saving respondents the burden of re-keying their basic information multiple times. DDTC therefore believes that a 2-hour burden is accurate for this form.

    Another comment centered on the name of the form itself, “Statement of Material Change, Merger, Acquisition, or Divestiture of a Registered Party.” The commenter was concerned that the term “material change” is inconsistent with current business usage and the use of this form could potentially affect the market value of the submitting company. While DDTC understands these concerns, the changes that require notification, and are therefore “material changes” for ITAR purposes, are defined in the regulations (see ITAR § 122.4). Therefore the title of the form will remain the same.

    Multiple commenters opined that the form, currently formatted in four separate sheets, is difficult to follow. DDTC notes that the form as currently written serves as a “placeholder” for a new case management system that is in development, and the focus throughout the form's creation has been to finalize discrete data fields and workflows more than format. The data fields on the form will be used to guide users through questions based on their previous responses; not all users will see all fields during each submission.

    Relatedly, many commenters noted that Block 1 of the form, currently named “Applicant Information,” should be changed to “Registrant Information” to avoid confusing nomenclature. DDTC notes that the title of this block is a field designator only, as the word “Applicant” will be used throughout the case management system to collate data fields for interoperability. The title of the field has no bearing on the role of the submitter of the DS-7789 and is not intended to imply that the submitter is “applying” to declare a material change. Rather, this was done in order for the electronic system to recognize the information in this block as duplicative of information that might be contained elsewhere in the user's system profile.

    DDTC believes that many of the usability issues identified by the commenters will be resolved through the guided nature of the case management system. For instance, some comments noted that the .pdf version of the DS-7789 lacked functionality to add additional supporting documentation and that text fields did not expand to accommodate easier editing. DDTC notes that the case management system will have fully functional “add” capability as well as unlimited-character text boxes which will allow for easy editing of responses. To this point, some commenters also noted that uploading information on each authorization (licenses and agreements) that will transfer ownership through a merger, acquisition, or divestiture (“MAD”) event is unduly burdensome and that respondents should have the ability to upload documentation in lieu of keying such information into the system. DDTC replies that the case management system will automatically populate this field from the registrant's information; users will then have the ability to select which authorizations will transfer under the proposed merger, acquisition, or divestiture instead of keying information on each authorization.

    One commenter noted that DDTC has historically provided a limited period for an acquired entity's registration to remain current after the date of the transaction to allow for the shipment of unshipped balances on authorizations which are transferring to the new or acquiring entity, but that this practice is not reflected on the DS-7789. DDTC replies that this practice stemmed from paper-based reporting and was used to allow companies to continue exporting goods under approved licenses while the authorizations were manually updated within DDTC. Because of the automated nature of the DS-7789, authorizations will be transferred rapidly from one entity to another, and therefore the “grace period” will no longer be needed or provided in the ordinary course.

    Several commenters also opined that DDTC should convene an industry working group to beta test the new form and system that is being developed. In fact, DDTC has already convened such a group, and all interested industry users are welcome to join by contacting [email protected].

    Many respondents provided feedback on the instructions for the DS-7789. Most comments centered on requesting more detailed guidance for specific fields on the form. DDTC is pleased to provide additional guidance, and revised instructions will be made available on the DDTC Web site (https://www.pmddtc.state.gov) in conjunction with the publication of this request for public comment. For example, a commenter asked for clarification regarding DDTC's request for percent-ownership of outstanding voting securities of the foreign buyer of a registered entity. In response, DDTC revised the instructions to clarify the distinction between the 50% ownership referenced in 22 CFR 122.2 and the presumption of control in 22 CFR 120.37 associated with 25% ownership. The percent-ownership question in the DS-7789 facilitates DDTC's national security and foreign policy evaluation which, as part of the transactional review, includes an understanding of who has potential control of the foreign buyer.

    DDTC also received several comments related to the electronic signature requirement for the DS-7789. Numerous responses noted that the signature requirement for a senior officer is unduly burdensome for their executive-level managers; DDTC notes that the requirement for a senior officer to sign a notification of a change in registration information is enunciated in ITAR § 122.4. Similarly, one commenter opined that the requirement to provide information about senior officers and board members on the 60-day submission preceding a MAD event should be dispensed with since they may not be the same once the event actually occurs; however, DDTC needs this information to evaluate the entirety of a transaction, and it will still be required.

    Many commenters also remarked that the “60-day Buyer” portion of the form should be removed as only registered parties are required to submit information to DDTC. DDTC notes that the requested information is about the buyer and not necessarily from the buyer. For this reason, DDTC is providing registrants the ability to provide this information about the buyer or to have the buyer provide the information directly. In practice, registrants already provide buyer information in many divestitures (in other words, the buyer provides the registrant with the requested information, which the registrant then submits to DDTC). Allowing the buyer to submit the information to DDTC directly allows the buyer to provide information that they may not otherwise wish to share with the registrant. The requested information is relevant to DDTC's analysis of the foreign policy and national security implications of transactions and, in many cases, is what the acquiring company will ultimately provide in a post-transaction DS-2032 (Statement of Registration) covering the acquired entity.

    Two comments also centered on the protection of information submitted via the form's proposed electronic interface. DDTC's IT security team is working on a secure web-based system to accept proprietary data from industry users. Recognizing the sensitivity of the data submitted, the system will meet all current government standards for data security and will comply with the Privacy Act of 1974. Similarly, DDTC will protect information from public disclosure to the extent permitted by law. DDTC encourages submitters to clearly mark proprietary information in accordance with the Department of State guidelines at 22 CFR 171.12.

    Methodology

    This information will be collected by DDTC's electronic case management system, and respondents will certify the data via electronic signature. Respondents will be required to enroll in DDTC's online system and will be issued an appropriate credential based on the business the user will be transacting. Lower assurance matters (such as initial registration in the system) will require a secure username and password. Matters requiring higher assurance will require multi-factor credentials, such as a certificate-based login.

    Dated: October 26, 2016. Lisa Aguirre, Managing Director, Directorate of Defense Trade Controls, Department of State.
    [FR Doc. 2016-26715 Filed 11-3-16; 8:45 am] BILLING CODE 4710-25-P
    DEPARTMENT OF STATE [Public Notice: 9779] Notice of Public Meeting

    The Department of State will conduct an open meeting at 9:00 a.m. on November 9, 2016, in Room 5L18-01 of the Douglas A. Munro Coast Guard Headquarters Building at St. Elizabeth's, 2703 Martin Luther King Jr. Avenue SE., Washington, DC 20593. The primary purpose of the meeting is to prepare for the one hundred and seventeenth session of the International Maritime Organization's (IMO) Council to be held at the IMO Headquarters, United Kingdom, December 5-9, 2016.

    The agenda items to be considered include:

    —Adoption of the agenda
    —Report of the Secretary-General on credentials
    —Rules of Procedure of the Council
    —Strategy, planning and reform
    —Resource management (Human resource matters, report on investments, budget considerations for 2016-2017, Results-based budget: Outline of budgetary implications for 2018-2019)
    —IMO Member State Audit Scheme
    —Consideration of the report of the Marine Environment Protection Committee
    —Consideration of the report of the Technical Cooperation Committee
    —Technical Cooperation Fund: Report on activities of the 2015 programme
    —IMO International Maritime Law Institute
    —Report on the 38th Consultative Meeting of Contracting Parties to the London Convention 1972 and the 11th Meeting of Contracting Parties to the 1996 Protocol to the London Convention
    —Protection of vital shipping lanes
    —Periodic review of administrative requirements in mandatory IMO instruments
    —Principles to be considered in the review of existing requirements and the development of new requirements
    —External relations (With the U.N. and the specialized agencies, Joint Inspection Unit, relations with intergovernmental organizations, relations with non-governmental organizations, World Maritime Day, Report of the Day of the Seafarer, and IMO Maritime Ambassador Scheme)
    —Report on the status of the convention and membership of the Organization
    —Report on the status of conventions and other multilateral instruments in respect of which the Organization performs functions
    —Place, date and duration of the next two sessions of the Council and substantive items for inclusion in the provisional agendas for the next two sessions of Council (C 118 and C 119)
    —Supplementary agenda items, if any

    Members of the public may attend this meeting up to the seating capacity of the room. To facilitate the building security process, and to request reasonable accommodation, those who plan to attend should contact the meeting coordinator, LCDR Tiffany Duffy, by email at [email protected], by phone at (202) 372-1362, by fax at (202) 372-1925, or in writing at 2703 Martin Luther King Jr. Ave. SE., Stop 7509, Washington, DC 20593-7509, not later than November 2, 2016. Requests made after November 2, 2016, might not be accommodated.

    Please note that due to security considerations, two valid, government-issued photo identifications must be presented to gain entrance to Coast Guard Headquarters. It is recommended that attendees arrive at Coast Guard Headquarters no later than 30 minutes before the scheduled meeting to allow time for the security screening process. Coast Guard Headquarters is accessible by taxi and public transportation. Parking in the vicinity of the building is extremely limited. Additional information regarding this and other IMO public meetings may be found at: www.uscg.mil/imo.

    Dated: October 21, 2016. Jonathan W. Burby, Coast Guard Liaison Officer, Office of Ocean and Polar Affairs, Department of State.
    [FR Doc. 2016-26716 Filed 11-3-16; 8:45 am] BILLING CODE 4710-09-P
    SURFACE TRANSPORTATION BOARD [Docket No. MCF 21073] National Express LLC—Acquisition of Control—Trinity, Inc., Trinity Cars, Inc., and Trinity Student Delivery, LLC AGENCY:

    Surface Transportation Board.

    ACTION:

    Notice tentatively approving and authorizing finance transaction.

    SUMMARY:

    On October 7, 2016, National Express LLC (National Express or Applicant), a noncarrier, filed an application under 49 U.S.C. 14303 to acquire control of Trinity, Inc. (Trinity), Trinity Cars, Inc. (Trinity Cars), and Trinity Student Delivery, LLC (Trinity Student) (collectively, Acquisition Carriers). The Board is tentatively approving and authorizing the transaction, and, if no opposing comments are timely filed, this notice will be the final Board action. Persons wishing to oppose the application must follow the rules at 49 CFR 1182.5 & 1182.8.

    DATES:

    Comments must be filed by December 19, 2016. Applicant may file a reply by January 3, 2017. If no opposing comments are filed by December 19, 2016, this notice shall be effective on December 20, 2016.

    ADDRESSES:

    Send an original and 10 copies of any comments referring to Docket No. MCF 21073 to: Surface Transportation Board, 395 E Street SW., Washington, DC 20423-0001. In addition, send one copy of comments to Applicant's representative: Andrew K. Light, Scopelitis, Garvin, Light, Hanson & Feary, P.C., 10 W. Market Street, Suite 1500, Indianapolis, IN 46204.

    FOR FURTHER INFORMATION CONTACT:

    Jonathon Binet (202) 245-0368. Federal Information Relay Service (FIRS) for the hearing impaired: 1-800-877-8339.

    SUPPLEMENTARY INFORMATION:

    Applicant, a noncarrier, states that it is a holding company organized under the laws of the state of Delaware that is indirectly controlled by a British corporation, National Express Group, PLC (Express Group). Applicant states that Express Group indirectly controls the following passenger motor carriers (collectively, National Express Affiliated Carriers): Beck Bus Transportation Corp. (Beck); Carrier Management Corporation (CMI); Diamond Transportation Services, Inc. (Diamond); Durham School Services, L.P. (Durham); MV Student Transportation, Inc. (MV); National Express Transit Corporation (NETC); National Express Transit Services Corporation (NETSC); Petermann Ltd. (LTD); Petermann Northeast LLC (Northeast); Petermann Northwest LLC (Northwest); Petermann Southwest LLC (Southwest); Petermann STSA, LLC (STSA); The Provider Enterprises, Inc. (Provider); Rainbow Management Service Inc. (Rainbow); Robertson Transit, Inc. (Robertson); Safeway Training and Transportation Services Inc. (Safeway); Septran, Inc. (Septran); Smith Bus Service, Inc. (Smith); Suburban Paratransit Service, Inc. (Suburban Paratransit); Trans Express, Inc. (Trans Express); and White Plains Bus Company, Inc. (White Plains).

    Applicant asserts the following facts regarding the National Express Affiliated Carriers held by Express Group:

    • Beck is a passenger motor carrier primarily engaged in providing student school bus transportation services in the states of Illinois and Indiana under contracts with regional and local school jurisdictions. Beck also provides charter passenger services to the public. It holds interstate common carrier authority from the Federal Motor Carrier Safety Administration under MC-143528.

    • CMI is a passenger motor carrier doing business as Matthews Bus Company and is primarily engaged in providing student school bus transportation services in the state of Pennsylvania under contracts with regional and local school jurisdictions. CMI also provides intrastate charter passenger services to the public. CMI does not have interstate carrier authority as it is not required for the operations conducted by CMI.

    • Diamond is a passenger motor carrier providing exempt interstate and regulated intrastate paratransit and shuttle services in the District of Columbia metropolitan area. It does not have interstate carrier authority.

    • Durham is a passenger motor carrier primarily engaged in providing student school bus transportation services in approximately 32 states under contracts with regional and local school jurisdictions. Durham also provides charter passenger services to the public. It holds interstate common carrier authority under MC-163066.

    • MV is a passenger motor carrier primarily engaged in providing student school bus transportation services in the state of Missouri under contracts with regional and local school jurisdictions. MV also provides charter passenger services to the public. It holds interstate common carrier authority under MC-148934.

    • NETC is an intrastate passenger motor carrier with its principal place of business in Cincinnati, Ohio. NETC does not have interstate carrier authority.

    • NETSC is a passenger motor carrier engaged primarily in providing intrastate transit services in the areas of Westmoreland, PA; Arlington, VA; Greensboro, NC; Vallejo, CA; and Yuma, AZ. NETSC does not have interstate carrier authority as it is not required for the operations conducted by NETSC.

    • LTD is a passenger motor carrier primarily engaged in providing non-regulated school bus transportation services in the state of Ohio under contracts with regional and local school jurisdictions. LTD also provides charter passenger services to the public. It holds interstate common carrier authority under MC-364668.

    • Northeast is a passenger motor carrier primarily engaged in providing student school bus transportation services, primarily in the states of Ohio and Pennsylvania under contracts with regional and local school jurisdictions. Northeast also provides charter passenger services to the public. It holds interstate contract carrier authority under MC-723926.

    • Northwest is a passenger motor carrier primarily engaged in providing non-regulated school bus transportation services under contracts with regional and local school jurisdictions. Northwest does not have interstate carrier authority as it is not required for the operations conducted by Northwest.

    • Southwest is a passenger motor carrier primarily engaged in providing student school bus transportation services in the state of Texas under contracts with regional and local school jurisdictions. Southwest also provides charter passenger services to the public. It holds interstate contract carrier authority under MC-644996.

    • STSA is a passenger motor carrier primarily engaged in providing student school bus transportation services, primarily in the state of Kansas under contracts with regional and local school jurisdictions. STSA also provides charter passenger services to the public. It holds interstate contract carrier authority under MC-749360.

    • Provider is a passenger motor carrier doing business as Provider Bus, and is primarily engaged in providing non-regulated school bus transportation services in the state of New Hampshire under contracts with regional and local school jurisdictions. Provider does not have interstate carrier authority as it is not required for the operations conducted by Provider.

    • Rainbow provides interstate and intrastate charter and special party passenger transportation services in the state of New York. It holds interstate passenger common carrier authority under MC-490015.

    • Robertson is a passenger motor carrier primarily engaged in providing non-regulated school bus transportation services in the state of New Hampshire under contracts with regional and local school jurisdictions. Robertson also provides charter passenger service to the public. It does not have active interstate carrier authority, though MC-176053 is assigned to it.

    • Safeway is a passenger motor carrier primarily engaged in providing non-regulated school bus transportation services in the state of New Hampshire under contracts with regional and local school jurisdictions. It does not have active interstate carrier authority, though MC-522039 is assigned to it.

    • Septran is a passenger motor carrier primarily engaged in providing non-regulated school bus transportation services in the state of Illinois under contracts with regional and local school jurisdictions. It does not have active interstate carrier authority, though MC-795208 is assigned to it.

    • Smith is a passenger motor carrier primarily engaged in providing non-regulated school bus transportation services in the state of Maryland and surrounding areas under contracts with regional and local school jurisdictions. Smith does not have interstate carrier authority as it is not required for the operations conducted by Smith.

    • Suburban Paratransit is a motor carrier providing paratransit services primarily in Westchester County and Bronx, N.Y. Suburban Paratransit does not have interstate carrier authority as it is not required for the operations conducted by Suburban Paratransit.

    • Trans Express provides interstate and intrastate passenger transportation services in the state of New York. It holds interstate passenger common carrier authority under MC-187819.

    • White Plains is a passenger motor carrier doing business as Suburban Charters, and it operates primarily as a provider of non-regulated school bus transportation services in the state of New York. White Plains also operates as a motor passenger carrier providing charter service to the public. It holds interstate passenger common carrier authority under MC-160624.

    Applicant asserts the following facts with regard to the Acquisition Carriers:

    • Trinity is a Michigan corporation operating primarily as a provider of non-regulated school bus transportation services in southeastern Michigan, and also operates as a passenger motor carrier providing charter service to the public. Trinity holds common carrier operating authority under MC-364003.

    • Trinity Cars is also a Michigan corporation, operating as an intrastate passenger motor carrier as a provider of for-hire sedan and van service in southeastern Michigan. Trinity Cars holds interstate operating authority under MC-632139.

    • Trinity Student is a Michigan limited liability company and a wholly-owned subsidiary of Trinity. Trinity Student operates primarily as a provider of non-regulated school bus transportation services in the areas of Toledo and Cleveland, Ohio. Trinity Student also provides interstate charter passenger services. For purposes of its interstate passenger operations, Trinity Student holds common and contract carrier operating authority under MC-836335.

    Applicant states that all of the issued and outstanding stock of Trinity and Trinity Cars is owned and held by Jerry Sheppard, Jr., Trustee of the Jerry Sheppard, Jr. Revocable Inter-Vivos Trust U/A/D Sept. 24, 2003, as amended (Jerry Sheppard Trust), and Rebetha J. Sheppard, Trustee of the Rebetha J. Sheppard Revocable Inter-Vivos Trust U/A/D Sept. 24, 2003, as amended (Rebetha Sheppard Trust) (collectively, Sellers).

    Applicant asserts that there is one affiliate of the Acquisition Carriers, Trinity Coach, LLC, though it is not a part of the contemplated transaction. Applicant states that Trinity Coach, LLC, is a Michigan limited liability company that is a passenger motor carrier providing interstate services under common carrier authority under MC-537169. Jerry Sheppard, Jr., individually, holds a 100% membership interest in Trinity Coach, LLC.

    Applicant further states that, other than the National Express Affiliated Carriers, the Acquisition Carriers, and Trinity Coach, there are no other affiliated carriers with regulated interstate operations, and the Sellers have no other direct or indirect ownership interest in any other interstate passenger motor carrier.

    Applicant also asserts that it would acquire direct 100% control of Trinity and Trinity Cars through stock ownership, and indirect control of Trinity Student as a wholly-owned subsidiary of Trinity.

    Under 49 U.S.C. 14303(b), the Board must approve and authorize a transaction that it finds consistent with the public interest, taking into consideration at least: (1) The effect of the proposed transaction on the adequacy of transportation to the public; (2) the total fixed charges that result; and (3) the interest of affected carrier employees. Applicant submitted information, as required by 49 CFR 1182.2, including information to demonstrate that the proposed transaction is consistent with the public interest under 49 U.S.C. 14303(b), and a statement that the aggregate gross operating revenues of the National Express Affiliated Carriers and the Acquisition Carriers exceeded $2 million for the preceding 12-month period. See 49 U.S.C. 14303(g).1

    1 Applicants with gross operating revenues exceeding $2 million are required to meet the requirements of 49 CFR 1182.

    Applicant submits that the proposed transaction would have no significant impact on the adequacy of transportation services to the public, as the Acquisition Carriers would continue to provide the services they currently provide using the same names for the foreseeable future. Applicant states that the Acquisition Carriers “will continue to operate, but going forward, will be operating within the National Express corporate family.” (Appl. 14.)

    According to Applicant, “[t]he addition of the Acquisition Carriers to the carriers held by National Express is consistent with the practices within the passenger motor carrier industry of strong, well-managed transportation organizations adapting their corporate structure to operate several different passenger carriers within the same market, but in different geographic areas.” (Id.) Applicant asserts that the Acquisition Carriers are experienced in some of the same market segments already served by some of the National Express Affiliated Carriers. Applicant expects the transaction to result in operating efficiencies and cost savings derived from economies of scale, all of which, Applicant states, would help to ensure the provision of adequate service to the public. Applicant further asserts that bringing the Acquisition Carriers within the National Express corporate family would serve to enhance the viability of the overall organization and the operations of the National Express Affiliated Carriers, which would ensure the continued availability of adequate passenger transportation service for the public. (Id.)

    Applicant also claims that neither competition nor the public interest would be adversely affected by the contemplated transaction. Applicant states that the Acquisition Carriers are “relatively small carriers in the overall markets in which they compete: Unregulated metropolitan school bus operations, providers of charter services, and providers of sedan and van services.” (Id.) Applicant states that school bus operators typically occupy a limited portion of the charter business because (i) the equipment offered is not as comfortable as that offered by motor coach operators; and (ii) scheduling demands imposed by the primary school bus operation impose major constraints on charter services that can be offered. It further explains that the sedan and van services business sector is comprised of a number of providers, with no provider having a dominant market share in the sector. Applicant also explains that the charter and sedan and van services offered by the Acquisition Carriers are geographically dispersed from those of the National Express Affiliated Carriers, and that there is limited overlap in service areas and/or in customer bases among the National Express Affiliated Carriers and the Acquisition Carriers. Thus, Applicant states that the impact of the contemplated transaction on the regulated motor carrier industry would be minimal at most and that neither competition nor the public interest would be adversely affected.

    Applicant asserts that there are no fixed charges associated with the contemplated transaction. Applicant also states that it does not anticipate a measurable reduction in force or changes in compensation levels and/or benefits to employees. Applicant submits, however, that staffing redundancies could potentially result in limited downsizing of back-office or managerial level personnel.

    The Board finds that the acquisition proposed in the application is consistent with the public interest and should be tentatively approved and authorized. If any opposing comments are timely filed, these findings will be deemed vacated, and, unless a final decision can be made on the record as developed, a procedural schedule will be adopted to reconsider the application. See 49 CFR 1182.6(c). If no opposing comments are filed by the expiration of the comment period, this notice will take effect automatically and will be the final Board action.

    This action is categorically excluded from environmental review under 49 CFR 1105.6(c).

    Board decisions and notices are available on our Web site at “WWW.STB.GOV”.

    It is ordered:

    1. The proposed transaction is approved and authorized, subject to the filing of opposing comments.

    2. If opposing comments are timely filed, the findings made in this notice will be deemed vacated.

    3. This notice will be effective December 20, 2016, unless opposing comments are filed by December 19, 2016.

    4. A copy of this notice will be served on: (1) The U.S. Department of Transportation, Federal Motor Carrier Safety Administration, 1200 New Jersey Avenue SE., Washington, DC 20590; (2) the U.S. Department of Justice, Antitrust Division, 10th Street & Pennsylvania Avenue NW., Washington, DC 20530; and (3) the U.S. Department of Transportation, Office of the General Counsel, 1200 New Jersey Avenue SE., Washington, DC 20590.

    Decided: November 1, 2016.

    By the Board, Chairman Elliott, Vice Chairman Miller, and Commissioner Begeman.

    Brendetta S. Jones, Clearance Clerk.
    [FR Doc. 2016-26724 Filed 11-3-16; 8:45 am] BILLING CODE 4915-01-P
    DEPARTMENT OF TRANSPORTATION Federal Aviation Administration [Docket No. FAA-2016-9346] Passenger Facility Charge (PFC) Program; Draft FAA Order 5500.1B AGENCY:

    Federal Aviation Administration (FAA), DOT.

    ACTION:

    Notice of withdrawal.

    SUMMARY:

    FAA is rescinding the draft FAA Order 5500.1B, Passenger Facility Charge published on August 5, 2016, and withdrawing its request for public review and comment.

    DATES:

    The FAA previously extended the comment period to October 31, 2016. The FAA subsequently established public docket FAA-2016-9346, and comments received will be entered into that docket.

    FOR FURTHER INFORMATION CONTACT:

    Joe Hebert, Manager, Financial Analysis and Passenger Facility Charge Branch, APP-510, Federal Aviation Administration, 800 Independence Avenue SW., Washington, DC 20591, telephone (202) 267-8375; facsimile (202) 267-5302.

    SUPPLEMENTARY INFORMATION:

    On August 5, 2016, the FAA published a notice and request for comments titled “Passenger Facility Charge (PFC) Program; Draft FAA Order 5500.1B” (81 FR 51963). The notice requested interested parties submit written comments by September 30, 2016. On September 21, 2016, the FAA extended the original comment period by 31 days, from September 30, 2016, to October 31, 2016.

    After careful consideration, the FAA has decided to rescind the draft Order and cancel the public review process. The FAA will issue a revised draft in the near future for public review and comment.

    Issued in Washington, DC, on October 31, 2016. Elliott Black, Director, Office of Airport Planning and Programming.
    [FR Doc. 2016-26630 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
    DEPARTMENT OF TRANSPORTATION Federal Aviation Administration Notice of Cancellation of Environmental Impact Statement for the Norfolk International Airport, Norfolk, Virginia AGENCY:

    Federal Aviation Administration (FAA), DOT.

    ACTION:

    Notice of cancellation of preparation of environmental impact statement.

    SUMMARY:

    The Federal Aviation Administration (FAA) announces that it has discontinued preparation of an Environmental Impact Statement (EIS) for the proposed construction of new Runway 5R/23L and associated development at Norfolk International Airport, Norfolk, Virginia. The FAA's decision to discontinue preparation of the EIS is based upon the completion of the first phase of the EIS. Based on the results of the first phase (Scoping and Purpose & Need development), the FAA has determined that the fundamental purpose and need is not supported by the current or anticipated development needs of the Airport at this time.

    FOR FURTHER INFORMATION CONTACT:

    Marcus Brundage, Environmental Protection Specialist, Federal Aviation Administration, Washington Airports District Office, 23723 Air Freight Lane, Suite 210, Dulles, Virginia 20166; Telephone (703) 661-1365.

    SUPPLEMENTARY INFORMATION:

    On June 12, 2015, the FAA published in the Federal Register a Notice of Intent (NOI) to prepare an Environmental Impact Statement (EIS) and hold two public scoping meetings in Norfolk and Virginia Beach, Virginia (80 FR 33582-33583). The public meetings were held at the Bayside High School and at the Holiday Inn Norfolk Airport on July 22 and 23, 2015, respectively.

    The stated purpose of the project was to “meet relevant FAA airfield safety standards and enhance airfield safety without reducing runway availability.” The proposed project included the decommissioning and demolition of Runway 14/32, the construction of new Runway 5R/23L and associated development at the airport, and improvement of roadway access to the airport by realigning Robin Hood Road. Other associated infrastructure was proposed for construction, demolition, or relocation, including taxiways, lighting, hangars, maintenance facilities, runway safety areas, and runway protection zones.

    In 2001, the FAA began preparing an EIS for similar projects based on the need to accommodate additional operations at the airport. During the first EIS process, the needs of the airport changed, it was determined that the projects were no longer justified based on the stated need, and the preparation of the first EIS was cancelled. In 2013, the FAA agreed to proceed with a phased second EIS preparation to review a similar project proposed by the Norfolk Airport Authority. The first phase of the project consisted of consultant selection, EIS scoping, and an analysis of the proposed project's purpose and need to determine whether the FAA should continue to the second phase, which would be completion of the EIS and a determination.

    At the conclusion of the first phase of the second EIS, the FAA determined that the fundamental purpose and need of the projects were not supported by the current or anticipated needs of the airport. The FAA is now terminating the second EIS process. However, the FAA recognizes the importance of Norfolk International Airport (ORF) to the greater Norfolk/Hampton Roads region and to the Commonwealth of Virginia. Moreover, the FAA agrees that a parallel runway may still be a viable long-term plan for the future, if and when operational demand warrants. Therefore, the FAA continues to support the proposed runway remaining on the approved Airport Layout Plan, as conditionally approved pending environmental review on October 5, 2011, and protecting the associated airspace.

    Issued in Dulles, Virginia on October 28, 2016. Matthew J. Thys, Manager, Washington Airports District Office, Eastern Region.
    [FR Doc. 2016-26631 Filed 11-3-16; 8:45 am] BILLING CODE 4910-13-P
    DEPARTMENT OF TRANSPORTATION Federal Transit Administration [Docket No. FTA-2016-0036] Notice of Proposed Buy America Waiver for Replacement Parts on Diesel Multiple Unit Rail Vehicles AGENCY:

    Federal Transit Administration, DOT.

    ACTION:

    Notice of proposed Buy America waiver and request for comment.

    SUMMARY:

    The Federal Transit Administration (FTA) received a request from the North County Transit District (NCTD) in California for a Buy America non-availability waiver for the procurement of specified replacement parts for Diesel Multiple Unit (DMU) rail vehicles. The 12 DMU rail vehicles were manufactured by Siemens as part of its Desiro series and were placed in revenue service in 2008. Mid-life maintenance and replacement overhauls of vehicle parts are now required in order to ensure safe and continuous transit service. In accordance with 49 U.S.C. 5323(j)(3)(A), FTA is providing notice of the waiver request and seeks public comment before deciding whether to grant the request.

    DATES:

    Comments must be received by November 14, 2016. Late-filed comments will be considered to the extent practicable.

    ADDRESSES:

    Please submit your comments by one of the following means, identifying your submissions by docket number FTA-2016-0036:

    1. Web site: http://www.regulations.gov. Follow the instructions for submitting comments on the U.S. Government electronic docket site.

    2. Fax: (202) 493-2251.

    3. Mail: U.S. Department of Transportation, 1200 New Jersey Avenue SE., Docket Operations, M-30, West Building, Ground Floor, Room W12-140, Washington, DC 20590-0001.

    4. Hand Delivery: U.S. Department of Transportation, 1200 New Jersey Avenue SE., Docket Operations, M-30, West Building, Ground Floor, Room W12-140, Washington, DC 20590-0001 between 9 a.m. and 5 p.m., Monday through Friday, except Federal holidays.

    Instructions: All submissions must make reference to the “Federal Transit Administration” and include docket number FTA-2016-0036. Due to the security procedures in effect since October 2011, mail received through the U.S. Postal Service may be subject to delays. Parties making submissions responsive to this notice should consider using an express mail firm to ensure the prompt filing of any submissions not filed electronically or by hand. Note that all submissions received, including any personal information therein, will be posted without change or alteration to http://www.regulations.gov. For more information, you may review DOT's complete Privacy Act Statement in the Federal Register published April 11, 2000 (65 FR 19477), or you may visit http://www.regulations.gov.

    FOR FURTHER INFORMATION CONTACT:

    Cecelia Comito, Assistant Chief Counsel, at (202) 366-2217 or [email protected].

    SUPPLEMENTARY INFORMATION:

    The purpose of this notice is to provide notice and seek comment on whether the FTA should grant a non-availability waiver for NCTD's purchase of replacement parts for its Siemens-manufactured Desiro series DMU rail vehicles, including, but not limited to, Power Pack Assembly, Power Truck Assembly, Jakobs Truck Assembly, Transmission, Primary Suspension, Secondary Suspension, Power Wheelset Assembly, Power Truck Brake Rotors, Jakobs Truck Brake Rotors, Power Truck Wheels, Jakobs Truck Wheels, A/C Compressors, Carbody Brake Components, Automatic Train Couplers, and HVAC Roof Mounted Units (the “Replacement Parts”). The Replacement Parts are necessary for mid-life maintenance of the DMU rail vehicles.

    With certain exceptions, FTA's Buy America requirements prevent FTA from obligating an amount that may be appropriated to carry out its program for a project unless “the steel, iron, and manufactured goods used in the project are produced in the United States.” 49 U.S.C. 5323(j)(1). A manufactured product is considered produced in the United States if: (1) All of the manufacturing processes for the product take place in the United States; and (2) all of the components of the product are of U.S. origin. A component is considered of U.S. origin if it is manufactured in the United States, regardless of the origin of its subcomponents. 49 CFR 661.5(d). If, however, FTA determines that “the steel, iron, and goods produced in the United States are not produced in a sufficient and reasonably available amount or are not of a satisfactory quality,” then FTA may issue a waiver (non-availability waiver). 49 U.S.C. 5323(j)(2)(B); 49 CFR 661.7(c).

    NCTD provides transit service throughout North San Diego County, serving more than 12 million riders annually. In 2003, NCTD requested and received from FTA a non-availability Buy America waiver for the procurement of 12 DMU vehicles for use on NCTD's Sprinter line, with 15 light rail stations between the cities of Escondido and Oceanside. NCTD purchased the 12 DMU vehicles in 2004 and placed the vehicles into revenue service in 2008 on NCTD's Sprinter line. The useful life of the vehicles is 25 years.

    According to NCTD, the Replacement Parts for the DMU vehicles are nearing the end of their useful service lives and showing signs of wear and fatigue. Without periodic capital equipment replacement and/or rebuild, the likelihood of mechanical downtime increases significantly, resulting in prolonged service outages for riders. In March 2013, NCTD removed the Sprinter service from revenue service for more than two months due to premature wear of one of the three braking systems and the unavailability of domestic replacement parts. NCTD intends to replace the components over several phases during the coming years, from 2018 through 2026. The last phase is anticipated to be procured over a subsequent seven-year period. Any non-availability waiver granted would be effective for all phases of these projects and will expire upon completion of these projects.

    As part of its search for domestic Replacement Parts, NCTD issued a Request for Information (RFI) on November 12, 2013, to maintenance and engineering communities to determine if any firms existed that could either supply Buy America compliant parts and components or reverse engineer the parts and components utilizing plans and specifications provided. More than 300 vendors received the RFI; 19 downloaded the RFI. One vendor responded that “with proper specifications, drawings, and samples, we may be able to design and supply Buy America Compliant, OE equivalent, air bellows, primary suspension, and passenger bellows.” However, the original equipment manufacturer (“OEM”) would not provide the requested proprietary information. NCTD undertook three additional procurements for the Replacement Parts. Three responses were received; none could certify to Buy America compliance. Under 49 CFR 661.7(c)(1), “It will be presumed that the conditions exist to grant this non-availability waiver if no responsive and responsible bid is received offering an item produced in the United States.”

    NCTD's 12 vehicles are the only Siemens Sprinter vehicles in the United States. Additionally, since these vehicles were specifically designed to meet California Public Utilities Commission rail safety requirements, Sprinter is the only vehicle of its kind internationally. NCTD's multiple procurement efforts have demonstrated that there are no suppliers willing to invest in infrastructure to manufacture parts that are suitable only for NCTD's 12 vehicles.

    Finally, under 49 U.S.C. 5323(j)(6), FTA cannot deny an application for a waiver based on non-availability unless FTA can certify that (i) the steel, iron, or manufactured good (the “item”) is produced in the United States in a sufficient and reasonably available amount; and (ii) the item produced in the United States is of a satisfactory quality. Additionally, FTA must provide a list of known manufacturers in the United States from which the item can be obtained. FTA is not aware of any manufacturers who produce the Replacement Parts in the United States.

    The 12 DMUs purchased by NCTD were granted a waiver from Buy America. NCTD's efforts to identify domestic manufacturers for the various Replacement Parts were unsuccessful. FTA proposes to grant NCTD a non-availability waiver of the Buy America requirements for the Replacement Parts for the 12 DMUs, which will be acquired over several phases from 2018 through 2026. Any non-availability waiver granted would be effective for all phases of these projects and would include Replacement Parts acquired to maintain the DMUs for their 25-year useful life.

    The purpose of this notice is to publish NCTD's request and seek public comment from all interested parties in accordance with 49 U.S.C. 5323(j)(3)(A). Comments will help FTA understand completely the facts surrounding the request, including the effects of a potential waiver and the merits of the request. After consideration of the comments, FTA will publish a second notice in the Federal Register with a response to comments and noting any changes made to the proposed waiver as a result of the comments received.

    Ellen Partridge, Chief Counsel.
    [FR Doc. 2016-26653 Filed 11-3-16; 8:45 am] BILLING CODE P
    DEPARTMENT OF TRANSPORTATION Federal Transit Administration [Docket No. FTA-2016-0035] Notice of Proposed Buy America Public Interest Waiver for Hurricane Sandy Emergency Relief Work Performed for the World Trade Center AGENCY:

    Federal Transit Administration, DOT.

    ACTION:

    Notice of Proposed Buy America waiver and request for comment.

    SUMMARY:

    The Federal Transit Administration (FTA) received a request from the Port Authority of New York and New Jersey (PANYNJ) for a Buy America public interest waiver for the procurement of equipment to replace what was damaged at the World Trade Center Transportation Hub (WTC Hub) project during Hurricane Sandy. PANYNJ seeks a public interest Buy America waiver for the replacement of equipment previously purchased for the WTC Hub. Hurricane Sandy damaged an existing construction site that receives federal funds but is not subject to FTA's Buy America requirements, and the only option available to PANYNJ for carrying out Sandy recovery work was to replace the damaged equipment with the same equipment previously acquired for the project. 49 U.S.C. 5323(j)(2)(A) and 49 CFR 661.7(b). In accordance with 49 U.S.C. 5323(j)(3)(A), FTA is providing notice of the public interest waiver request and seeks public comment before deciding whether to grant the request. If granted, the waiver would apply only to the replacement of equipment damaged by Hurricane Sandy at the WTC Hub project and would not apply to any other PANYNJ resiliency projects for which FTA has provided funding.

    DATES:

    Comments must be received by November 14, 2016. Late-filed comments will be considered to the extent practicable.

    ADDRESSES:

    Please submit your comments by one of the following means, identifying your submissions by docket number FTA-2016-0035.

    1. Web site: http://www.regulations.gov. Follow the instructions for submitting comments on the U.S. Government electronic docket site.

    2. Fax: (202) 493-2251.

    3. Mail: U.S. Department of Transportation, 1200 New Jersey Avenue SE., Docket Operations, M-30, West Building, Ground Floor, Room W12-140, Washington, DC 20590-0001.

    4. Hand Delivery: U.S. Department of Transportation, 1200 New Jersey Avenue SE., Docket Operations, M-30, West Building, Ground Floor, Room W12-140, Washington, DC 20590-0001 between 9 a.m. and 5 p.m., Monday through Friday, except Federal holidays.

    Instructions: All submissions must make reference to the “Federal Transit Administration” and include docket number FTA-2016-0035. Due to the security procedures in effect since October 2011, mail received through the U.S. Postal Service may be subject to delays. Parties making submissions responsive to this notice should consider using an express mail firm to ensure the prompt filing of any submissions not filed electronically or by hand. Note that all submissions received, including any personal information therein, will be posted without change or alteration to http://www.regulations.gov. For more information, you may review DOT's complete Privacy Act Statement in the Federal Register published April 11, 2000 (65 FR 19477), or you may visit http://www.regulations.gov.

    FOR FURTHER INFORMATION CONTACT:

    Cecelia Comito, FTA Assistant Chief Counsel, (202) 366-2217 or [email protected].

    SUPPLEMENTARY INFORMATION:

    The purpose of this notice is to seek public comment on whether FTA should grant a public interest waiver to the Port Authority of New York and New Jersey (PANYNJ) for the procurement of equipment to replace equipment damaged by Hurricane Sandy at the World Trade Center Transportation Hub (WTC Hub) project.

    With certain exceptions, FTA's Buy America requirements prevent FTA from obligating an amount that may be appropriated to carry out its program for a project unless “the steel, iron, and manufactured goods used in the project are produced in the United States.” 49 U.S.C. 5323(j)(1). If, however, FTA finds that the application of this requirement would be inconsistent with the public interest, it may waive this requirement. 49 U.S.C. 5323(j)(2)(A). In determining whether the conditions exist to grant a public interest waiver, FTA will consider all appropriate factors on a case-by-case basis, unless a general exception is specifically set out in this part. 49 U.S.C. 5323(j)(2)(A); 49 CFR 661.7(b).

    On May 13, 2015, PANYNJ requested a Buy America waiver for the replacement or repair of equipment damaged by Hurricane Sandy at the WTC Hub because the WTC Hub project is being constructed pursuant to a grant awarded in 2003, it is not feasible to replace the damaged equipment with equipment different from that used in the original project, and it is in the public interest to repair the damage at the WTC Hub as quickly as possible. 49 U.S.C. 5323(j)(2)(A); 49 CFR 661.7(b). Additionally, the underlying project is not subject to FTA's Buy America requirements.

    The September 11, 2001 terrorist attacks on the World Trade Center resulted in extensive damage to the WTC Hub. In August 2002, the Federal Emergency Management Agency (FEMA) entered into a memorandum of agreement with the U.S. Department of Transportation under which FEMA agreed to provide $2.75 billion to cover expenses incurred in repairing or rebuilding public transportation facilities and systems damaged by the September 11, 2001 terrorist attacks. Under the agreement, FTA would serve as the lead agency to oversee the grant and the construction of the project. In December 2003, FTA entered into a grant agreement with PANYNJ to rebuild the WTC Hub. Because the WTC Hub project was funded with FEMA grant funds, FTA's Buy America requirements did not apply to the project.

    In October 2012, the WTC Hub project was an active construction site, with an estimated project completion date of December 2015. Hurricane Sandy caused more than $214 million in damage to the construction site. FTA awarded PANYNJ two grants—NY-44-X005 for $54.24 million and NY-44-X014 for $159.72 million—in Hurricane Sandy recovery funds to be used for recovery and emergency repair work for the WTC Hub project. Because the repair work was for an ongoing construction project, PANYNJ was required to use existing contracts that were originally procured in accordance with the requirements for the FEMA-funded WTC Hub project. Applying FTA's Buy America requirements to the replacement or repair of equipment installed on an ongoing construction project would significantly delay completion of the project, impact contracts awarded under the FEMA funds, and potentially impact previously provided warranties. Moreover, if granted, the public interest waiver would maintain overall consistency of administration, oversight, and implementation of both the ongoing WTC Hub project and the WTC Hurricane Sandy recovery work.

    Accordingly, because the original project was funded by FEMA and therefore not subject to FTA's Buy America regulations, FTA proposes a general public interest waiver of FTA's Buy America requirements for the two grants awarded to PANYNJ (NY-44-X005 for $54.24 million and NY-44-X014 for $159.72 million). This public interest waiver is limited to the Hurricane Sandy recovery projects at the WTC Hub and does not apply to separately funded resiliency projects. FTA seeks comment from all interested parties on the proposed public interest waiver. After consideration of the comments, FTA will publish a second notice in the Federal Register responding to the comments and noting any changes made to the public interest waiver as a result of the comments received.

    Ellen Partridge, Chief Counsel.
    [FR Doc. 2016-26656 Filed 11-3-16; 8:45 am] BILLING CODE P
    DEPARTMENT OF TRANSPORTATION Office of the Secretary [Docket No. DOT-OST-2016-0206] Advisory Committee on Transportation Equity AGENCY:

    Office of the Secretary, U.S. Department of Transportation (DOT).

    ACTION:

    Notice of establishment of Advisory Committee on Transportation Equity (ACTE).

    SUMMARY:

    Pursuant to Section 9(a)(2) of the Federal Advisory Committee Act (FACA), and in accordance with Title 41, Code of Federal Regulations, Section 102-3.65, and following consultation with the Committee Management Secretariat, General Services Administration, notice is hereby given that the ACTE will be established for a 2-year period.

    The Committee will provide advice and recommendations to the Secretary of Transportation on comprehensive, interdisciplinary issues related to transportation equity from a variety of stakeholders involved in transportation planning, design, research, policy, and advocacy. Specifically, the ACTE will inform the Department about efforts to (1) institutionalize the U.S. DOT Opportunity principles into Agency programs, policies, regulations, and activities; (2) strengthen and establish partnerships with other governmental agencies, including other Federal agencies and State, tribal, or local governments, regarding opportunity issues; (3) promote economic and related forms of opportunity by empowering communities to have a meaningful voice in local and regional transportation decisions; and (4) sharpen enforcement tools to ensure compliance with nondiscrimination programs, policies, regulations, and activities.

    The U.S. DOT Opportunity principles are to:

    (1) Support transportation projects that connect people to economic and related forms of opportunity and revitalize communities;

    (2) Ensure that current and future transportation projects connect and strengthen communities; and

    (3) Develop transportation facilities that meaningfully reflect and incorporate the input of all the people and communities they touch.

    Additionally, the establishment of the ACTE is necessary for the Department to carry out its mission and in the public interest. The Committee will operate in accordance with the provisions of the Federal Advisory Committee Act and the rules and regulations issued in implementation of that Act.

    FOR FURTHER INFORMATION CONTACT:

    Barbara McCann, U.S. Department of Transportation, Office of the Secretary, Office of Policy, Room W84-310, 1200 New Jersey Avenue SE., Washington, DC 20590; phone (202) 366-8016; email: [email protected].

    SUPPLEMENTARY INFORMATION:

    The Secretary of Transportation will appoint up to 15 voting members to the ACTE. Members will be selected with a view toward achieving varied perspectives on transportation equity, including (1) academia; (2) community groups; (3) industry/business; (4) non-government organizations; (5) State and local governments; and (6) federally recognized tribal governments and indigenous groups. The Secretary of Transportation will seek a membership that is fairly balanced in terms of points of view of the affected interests.

    The Advisory Committee on Transportation Equity's efforts will include evaluation of the Department's work in using the principles above to achieve Opportunity objectives when carrying out its strategic, research, technological, regulatory, community engagement, and economic policy activities related to transportation and opportunity.

    The Committee shall act solely in an advisory capacity and will not exercise program management responsibilities. Decisions directly affecting implementation of transportation policy will remain with the Secretary.

    Members of the Advisory Committee on Transportation Equity may be selected to serve either as representative members or as members appointed solely for their expertise. The latter will serve as special Government employees and will be subject to certain ethical restrictions, and such members will be required to submit certain information in connection with the appointment process.

    Committee members may serve for a term of 2 years or less and may be reappointed for successive terms, with no more than 2 successive terms. The Chair and Vice Chair of the Committee will be appointed by the Under Secretary of Transportation for Policy from among the selected members, and the Committee is expected to meet approximately two times per year or as necessary. Subcommittees may be formed to address specific transportation equity issues.

    The Committee will make recommendations that provide timely, comprehensive, inclusive advice to the Secretary on transportation opportunity public policy issues that advance the principles of providing opportunity and access to everyone.

    Issued in Washington, DC, on October 27, 2016. Blair C. Anderson, Under Secretary of Transportation for Policy.
    [FR Doc. 2016-26674 Filed 11-3-16; 8:45 am] BILLING CODE 4910-9X-P
    DEPARTMENT OF THE TREASURY Office of the Comptroller of the Currency Agency Information Collection Activities: Information Collection Renewal; Comment Request; Appraisals for Higher-Priced Mortgage Loans AGENCY:

    Office of the Comptroller of the Currency (OCC), Treasury.

    ACTION:

    Notice and request for comment.

    SUMMARY:

    The OCC, as part of its continuing effort to reduce paperwork and respondent burden, invites the general public and other Federal agencies to take this opportunity to comment on a continuing information collection as required by the Paperwork Reduction Act of 1995 (PRA).

    In accordance with the requirements of the PRA, the OCC may not conduct or sponsor, and the respondent is not required to respond to, an information collection unless it displays a currently valid Office of Management and Budget (OMB) control number.

    The OCC is soliciting comment concerning renewal of its information collection titled, “Appraisals for Higher-Priced Mortgage Loans.”

    DATES:

    Comments must be submitted on or before January 3, 2017.

    ADDRESSES:

    Because paper mail in the Washington, DC area and at the OCC is subject to delay, commenters are encouraged to submit comments by email, if possible. Comments may be sent to: Legislative and Regulatory Activities Division, Office of the Comptroller of the Currency, Attention: 1557-0313, 400 7th Street SW., Suite 3E-218, mail stop 9W-11, Washington, DC 20219. In addition, comments may be sent by fax to (571) 465-4326 or by electronic mail to [email protected]. You may personally inspect and photocopy comments at the OCC, 400 7th Street SW., Washington, DC 20219. For security reasons, the OCC requires that visitors make an appointment to inspect comments. You may do so by calling (202) 649-6700 or, for persons who are deaf or hard of hearing, TTY, (202) 649-5597. Upon arrival, visitors will be required to present valid government-issued photo identification and submit to security screening in order to inspect and photocopy comments.

    All comments received, including attachments and other supporting materials, are part of the public record and subject to public disclosure. Do not include any information in your comment or supporting materials that you consider confidential or inappropriate for public disclosure.

    FOR FURTHER INFORMATION CONTACT:

    Shaquita Merritt, OCC Clearance Officer, (202) 649-5490 or, for persons who are deaf or hard of hearing, TTY, (202) 649-5597, Legislative and Regulatory Activities Division, Office of the Comptroller of the Currency, 400 7th Street SW., Suite 3E-218, mail stop 9W-11, Washington, DC 20219.

    SUPPLEMENTARY INFORMATION:

    Under the PRA (44 U.S.C. 3501-3520), Federal agencies must obtain approval from the OMB for each collection of information that they conduct or sponsor. “Collection of information” is defined in 44 U.S.C. 3502(3) and 5 CFR 1320.3(c) to include agency requests or requirements that members of the public submit reports, keep records, or provide information to a third party. Section 3506(c)(2)(A) of title 44 requires Federal agencies to provide a 60-day notice in the Federal Register concerning each proposed collection of information, including each proposed extension of an existing collection of information, before submitting the collection to OMB for approval. To comply with this requirement, the OCC is publishing notice of the proposed collection of information set forth in this document.

    This information collection relates to section 1471 of the Dodd-Frank Act, which added a new section 129H to the Truth in Lending Act (TILA) establishing special appraisal requirements for “higher-risk mortgages.” For certain mortgages with an annual percentage rate that exceeds the average prime offer rate by a specified percentage, creditors must obtain an appraisal or appraisals meeting certain specified standards, provide applicants with a notification regarding the use of the appraisals, and give applicants a copy of the written appraisals used. The statute permits the OCC to issue a rule to include exemptions from these requirements.

    The information collection requirements are found in 12 CFR 34.203(c)(1), (c)(2), (d), (e) and (f). This information is required to protect consumers and promote the safety and soundness of creditors making higher-priced mortgage loans (HPMLs) subject to 12 CFR part 34, subpart G. This information is used by creditors to evaluate real estate collateral securing HPMLs subject to 12 CFR 1026.35(c) and by consumers entering these transactions. The collections of information are mandatory for creditors making HPMLs subject to 12 CFR part 34, subpart G.

    Under 12 CFR 34.203(e) and (f), a creditor must, no later than the third business day after the creditor receives a consumer's application for an HPML, provide a disclosure to the consumer that informs the consumer of the purpose of the appraisal, that the creditor will provide the consumer with a copy of any appraisal, and that the consumer may choose to have a separate appraisal conducted at the expense of the consumer (Initial Appraisal Disclosure). If a loan is an HPML subject to 12 CFR 1026.35(c), then the creditor is required to obtain a written appraisal prepared by a certified or licensed appraiser who conducts a physical visit of the interior of the property that will secure the transaction (Written Appraisal) and provide a copy of the Written Appraisal to the consumer. Under 12 CFR 34.203(d)(1), a creditor is required to obtain an additional appraisal (Additional Written Appraisal) for an HPML that is subject to 12 CFR part 34, subpart G if: (1) The seller acquired the property securing the loan 90 or fewer days prior to the date of the consumer's agreement to acquire the property and the resale price exceeds the seller's acquisition price by more than 10 percent; or (2) the seller acquired the property securing the loan 91 to 180 days prior to the date of the consumer's agreement to acquire the property and the resale price exceeds the seller's acquisition price by more than 20 percent.
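    For readers who prefer to see the threshold test in computational form, the following sketch restates the resale-timing and price-increase conditions of 12 CFR 34.203(d)(1) described above. It is a minimal illustration, not regulatory text: it assumes simple calendar-day counting, ignores the rule's exemptions, and uses hypothetical function and variable names.

```python
from datetime import date

# Illustrative sketch only (not regulatory text) of the price-increase
# thresholds in 12 CFR 34.203(d)(1) described above. It ignores the rule's
# exemptions and uses simple calendar-day counting.

def additional_appraisal_required(seller_acquired, consumer_agreement,
                                  acquisition_price, resale_price):
    """Return True when the resale-window/price-increase test described above is met."""
    days_held = (consumer_agreement - seller_acquired).days
    increase = (resale_price - acquisition_price) / acquisition_price
    if 0 <= days_held <= 90:
        return increase > 0.10   # more than 10 percent within 90 days
    if 91 <= days_held <= 180:
        return increase > 0.20   # more than 20 percent within 91 to 180 days
    return False

# Example: a resale 120 days after acquisition at a 25 percent markup
# would trigger the Additional Written Appraisal requirement.
print(additional_appraisal_required(date(2016, 1, 1), date(2016, 4, 30),
                                    200_000, 250_000))  # True
```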

    Under 12 CFR 34.203(d)(3) and (4), the Additional Written Appraisal must meet the requirements described in 12 CFR 34.203(c)(1) and also include an analysis of: (1) The difference between the price at which the seller acquired the property and the price the consumer agreed to pay; (2) changes in market conditions between the date the seller acquired the property and the date the consumer agreed to acquire the property; and (3) any improvements made to the property between the date the seller acquired the property and the date on which the consumer agreed to acquire the property. Under 12 CFR 34.203(f), a creditor is required to provide a copy of any Additional Written Appraisal to the consumer.

    Affected Public: Businesses or other for-profit.

    Burden Estimates:

    Estimated Number of Respondents: 1,399.

    Estimated Total Annual Burden: 19,946 hours.

    Frequency of Response: On occasion.

    Comments: Comments submitted in response to this notice will be summarized and included in the request for OMB approval. All comments will become a matter of public record. Comments are invited on:

    (a) Whether the collection of information is necessary for the proper performance of the functions of the OCC, including whether the information has practical utility;

    (b) The accuracy of the OCC's estimate of the information collection burden;

    (c) Ways to enhance the quality, utility, and clarity of the information to be collected;

    (d) Ways to minimize the burden of the collection on respondents, including through the use of automated collection techniques or other forms of information technology; and

    (e) Estimates of capital or start-up costs and costs of operation, maintenance, and purchase of services to provide information.

    Dated: October 31, 2016. Karen Solomon, Deputy Chief Counsel, Office of the Comptroller of the Currency.
    [FR Doc. 2016-26683 Filed 11-3-16; 8:45 am] BILLING CODE 4810-33-P
    DEPARTMENT OF THE TREASURY Office of Foreign Assets Control Sanctions Action Pursuant to Executive Order 13224 AGENCY:

    Office of Foreign Assets Control, Treasury.

    ACTION:

    Notice.

    SUMMARY:

    The Treasury Department's Office of Foreign Assets Control (OFAC) is publishing the names of 2 individuals and 1 entity whose property and interests in property are blocked pursuant to Executive Order 13224 of September 23, 2001, “Blocking Property and Prohibiting Transactions With Persons Who Commit, Threaten To Commit, or Support Terrorism.”

    DATES:

    OFAC's action described in this notice was effective on November 1, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Associate Director for Global Targeting, tel.: 202/622-2420, Assistant Director for Sanctions Compliance & Evaluation, tel.: 202/622-2490, Assistant Director for Licensing, tel.: 202/622-2480, Office of Foreign Assets Control, or Chief Counsel (Foreign Assets Control), tel.: 202/622-2410, Office of the General Counsel, Department of the Treasury (not toll free numbers).

    SUPPLEMENTARY INFORMATION:

    Electronic Availability

    The SDN List and additional information concerning OFAC sanctions programs are available from OFAC's Web site (www.treas.gov/ofac).

    Notice of OFAC Action

    On November 1, 2016, OFAC blocked the property and interests in property of the following 2 individuals and 1 entity pursuant to E.O. 13224, “Blocking Property and Prohibiting Transactions With Persons Who Commit, Threaten To Commit, or Support Terrorism”:

    Individuals

    [Graphic EN04NO16.322: the entries for the 2 designated individuals appear in the graphic, which is not reproduced in this text.]

    Entity

    1. AL-OMGY AND BROTHERS MONEY EXCHANGE (a.k.a. AL OMGE AND BROS COMPANY MONEY EXCHANGE; a.k.a. AL OMGE AND BROS FOR EXCHANGE COMPANY; a.k.a. AL OMGI AND BROS COMPANY; a.k.a. AL-AMAQI LIL-SARAFAH COMPANY; a.k.a. AL-AMQI EXCHANGE; a.k.a. AL-AMQI MONEY EXCHANGE; a.k.a. AL-OMAG AND BROS EXCHANGE; a.k.a. AL-OMAGI & BRO. MONEY EXCHANGE COMPANY; a.k.a. AL-OMAKI EXCHANGE COMPANY; a.k.a. AL-OMAQY EXCHANGE CORPORATION; a.k.a. ALOMGE AND BROS FOR EXCHANGE COMPANY; a.k.a. AL-OMGI EXCHANGE COMPANY; a.k.a. AL-OMGY & BROS. MONEY EXCHANGE; a.k.a. ALOMGY AND BROS MONEY EXCHANGE; a.k.a. ALOMGY AND BROS. EXCHANGE; a.k.a. AL-OMGY COMPANY FOR MONEY EXCHANGE; a.k.a. AL-OMGY EXCHANGE COMPANY; a.k.a. AL-OMQI FOR EXCHANGE; a.k.a. ALOMQY & BROS. FOR MONEY EXCHANGE; a.k.a. AL-OMQY AND BROS COMPANY FOR MONEY EXCHANGE; a.k.a. AL-OMQY FOR EXCHANGING CO.; a.k.a. ALUMGY AND BROS MONEY EXCHANGE; a.k.a. AL-UMGY AND BROS MONEY EXCHANGE; a.k.a. AL-'UMQI BUREAUX DE CHANGE; a.k.a. AL-UMQI CURRENCY EXCHANGE COMPANY; a.k.a. AL-'UMQI GROUP FOR TRADE AND INVESTMENT; a.k.a. AL-UMQI HAWALA; a.k.a. AL-'UMQI MONEY EXCHANGE COMPANY; a.k.a. OMQI COMPANY; a.k.a. UMQI EXCHANGE), Al-Mukalla Branch, Al-Kabas, Near Al-Mukalla Post Office, Al-Mukalla, Hadhramout, Yemen; Galam Street, Taiz, Yemen; 6 Dr. Mostafa Abu Zahra Street, Naser, Cairo, Egypt; Ash Shihr, Hadramawt, Yemen; Qusayir, Hadramawt, Yemen; Hadhramout, Yemen; Aden, Yemen; Taix, Yemen; Abian, Yemen; Sanaa, Yemen; Hudidah, Yemen; Ibb, Yemen; Almhahra, Yemen; Albaidah, Yemen; Shabwah, Yemen; Lahej, Yemen; Suqatra, Yemen [SDGT] (Linked To: AL-QA'IDA IN THE ARABIAN PENINSULA; Linked To: AL-OMGY, Said Salih Abd-Rabbuh; Linked To: AL-OMGY, Muhammad Salih Abd-Rabbuh).

    Dated: November 1, 2016. John E. Smith, Acting Director, Office of Foreign Assets Control.
    [FR Doc. 2016-26688 Filed 11-3-16; 8:45 am] BILLING CODE 4810-AL-P
    DEPARTMENT OF THE TREASURY Privacy Act of 1974; Department of the Treasury, Bureau of Engraving and Printing (BEP) -.051—BEP Chief Counsel Files System of Records AGENCY:

    Bureau of Engraving and Printing (BEP), Treasury.

    ACTION:

    Notice of Privacy Act system of records.

    SUMMARY:

    In accordance with the Privacy Act of 1974, as amended, 5 U.S.C. 552a, the U.S. Department of the Treasury (“Treasury” or the “Department”), Bureau of Engraving and Printing (“BEP”), proposes to establish a new Department of the Treasury system of records titled “Department of the Treasury, Bureau of Engraving and Printing (BEP)-.051—BEP Chief Counsel Files System of Records.”

    DATES:

    Submit comments on or before December 5, 2016. This new system will be effective December 5, 2016.

    ADDRESSES:

    Comments should be sent to Leslie J. Rivera-Pagán, Attorney/Adviser—Privacy Officer, Office of the Chief Counsel-Privacy Office, U.S. Department of the Treasury, Bureau of Engraving and Printing, Room 419-A, 14th & C Streets SW., Washington, DC 20228, Attention: Revisions to Privacy Act Systems of Records. Comments can also be faxed to (202) 874-2951 or emailed to [email protected]. For faxes and emails, please place “Revisions to SORN Treasury/BEP .051—BEP Chief Counsel Files” in the subject line. Comments will be made available for public inspection upon written request. The BEP will make such comments available for public inspection and copying at the above listed location, on official business days between 9:00 a.m. and 5:00 p.m. eastern time. Persons wishing to review the comments must request an appointment by telephoning (202) 874-2500. All comments received, including attachments and other supporting documents, are part of the public record and subject to public disclosure. You should submit only information that you wish to make available publicly. All comments received will be posted without change to http://www.regulations.gov, including any personal information provided.

    FOR FURTHER INFORMATION CONTACT:

    For general questions and privacy issues please contact: Leslie J. Rivera-Pagán at (202) 874-2500 or [email protected].

    SUPPLEMENTARY INFORMATION:

    In accordance with the Privacy Act of 1974, 5 U.S.C. 552a, the U.S. Department of the Treasury (Treasury), Bureau of Engraving and Printing (“BEP”) proposes to establish a new Treasury system of records titled, “Treasury/Bureau of Engraving and Printing (BEP)-.051—BEP Chief Counsel Files System of Records.”

    The Secretary of the Treasury has delegated final legal authority within Treasury to the General Counsel, who is charged with determining the structural and functional organization of the Legal Division and with establishing the policies, procedures, and standards governing its functioning. The mission of the Office of the Chief Counsel of the Bureau of Engraving and Printing (“BEP”) is to provide legal advice or representation to BEP management regarding issues of compliance, investigation, and implementation of matters related to the BEP and the statutes and regulations administered by the BEP. The purpose of this system is to assist BEP attorneys in providing legal advice to BEP management on a wide variety of legal issues. In addition, the system will be used to assess the workload of the legal staff and to track the status, progress, and disposition of matters assigned to the legal staff, such as litigation and/or administrative proceedings in which BEP is a party and matters in which the Office of the Chief Counsel must provide advice.

    The Office of the Chief Counsel is responsible for collecting, reviewing, redacting, and producing agency records, in support of processing and resolving BEP legal matters. This system has an effect on individual privacy that is balanced by the need to collect and maintain information related to legal advice issued.

    Routine uses contained in this notice include sharing with the Department of Justice (DOJ) for legal advice and representation; to a congressional office at the request of an individual; to unions recognized as exclusive bargaining representatives to the extent necessary to obtain information pertinent to an investigation or matter under consideration; to federal, state, local, or foreign agencies, or other public authority agencies responsible for investigating or prosecuting the violations of, or for enforcing, or implementing, a statute, rule, regulation, order, or license, where the disclosing agency becomes aware of a potential violation of civil, administrative, or criminal law, or regulation; to federal, state, local, or other public authority agency maintaining civil, criminal, administrative or other relevant enforcement information or other pertinent information, which has requested information relevant to, or necessary to, the requesting agency's, bureau's, or authority's hiring or retention of an individual, or issuance of a security clearance, license, contract, grant, or other benefit; to a court, adjudicative body, or other administrative body before which BEP is authorized to appear when (a) the agency, or (b) any employee of the agency in his or her official capacity, or (c) any employee of the agency in his or her individual capacity where the Department of Justice or the agency has agreed to represent the employee, or (d) the United States, when the agency determines that litigation is likely to affect the agency, is a party to litigation or has an interest in such litigation, and the use of such records by the agency is deemed to be relevant and necessary to the litigation or administrative proceeding and not otherwise privileged; to a court, magistrate, administrative tribunal, or named parties in the course of presenting evidence, including disclosures to opposing counsel or witnesses in the course of civil discovery, litigation, or settlement negotiations or in connection with criminal law proceedings or in response to a court order where arguably relevant to a proceeding; to an arbitrator, mediator, or other neutral party, in the context of alternative dispute resolution, to the extent relevant and necessary for resolution of the matters presented, including asserted privileges; and to agencies, entities, or persons during a security or information compromise or breach.

    This newly established system will be included in Treasury's inventory of record systems.

    Below is the description of the “Treasury/Bureau of Engraving and Printing (BEP)-.051—BEP Chief Counsel Files System of Records.” In accordance with 5 U.S.C. 552a(r), Treasury has provided a report of this system of records to the Office of Management and Budget and to Congress.

    Ryan Law, Acting Deputy Assistant Secretary for Privacy, Transparency, and Records Designee. System of Records

    Department of the Treasury (Treasury)/Bureau of Engraving and Printing (BEP)-.051

    System name:

    Treasury/BEP-.051 BEP Chief Counsel Files

    System location:

    Bureau of Engraving and Printing, Office of the Chief Counsel, District of Columbia Facility, 14th & C Streets SW., Room 419-A, Washington, DC 20228, and Western Currency Facility, 9000 Blue Mound Road, Fort Worth, TX 76131.

    Categories of individuals covered by the system:

    Employees and former employees of the Bureau of Engraving and Printing; applicants for employment; adjudicators, legal counsel, or other representatives; parties to, and persons who have requested information or action from, the Office of the Chief Counsel who are involved in litigation, actions, personnel matters, administrative claims, administrative appeals, complaints, grievances, advisories, and other matters assigned to, or under the jurisdiction of, the Office of the Chief Counsel; and employees of the Office of the Chief Counsel.

    Categories of records in the system:

    • Names, titles, and contact information of the parties and individuals involved, including phone and fax numbers, home and business addresses, and email addresses;

    • Case management documents, case and/or matter names, and case and/or matter identification numbers;

    • Information and documents relating to grievances, adverse personnel actions, discrimination complaints, and other information and documents related to administrative proceedings;

    • Memoranda and litigation related materials including attorney work product;

    • Descriptions, summaries, and statuses of issues, cases and/or matters, and assignments;

    • Complaints;

    • Claim forms;

    • Reports of Investigations;

    • Accident reports;

    • Witness statements and affidavits;

    • Pleadings;

    • Discovery materials generated in connection with litigation and/or administrative actions;

    • Correspondence;

    • Administrative files;

    • Other records collected or generated in response to matters assigned to the Office of the Chief Counsel.

    Authority for maintenance of the system:

    5 U.S.C. 301, 5520a, 7301, 7351, 7353, 5 U.S.C. App. (Ethics in Government Act of 1978); 28 U.S.C. 2672; 31 U.S.C. 301, 321, 1353, 3721; 42 U.S.C. 659; 44 U.S.C. 3101.

    Purpose(s):

    The Office of the Chief Counsel creates and maintains these records to provide legal advice or representation to BEP management regarding issues of compliance, investigation, and implementation of matters related to the BEP and the statutes and regulations administered by the BEP and to maintain historical reference information pertaining to such matters. In addition, the system of records is used to assess the workload of the legal staff, track the status, progress, and disposition of matters assigned to the legal staff, and capture summary information (such as name of principal parties or subjects, case file numbers, and assignments) in matters such as litigation and/or administrative proceedings in which the BEP is a party, and matters in which the Office of the Chief Counsel must provide advice.

    Routine uses of records maintained in the system, including categories of users and the purposes of such uses:

    In addition to those disclosures generally permitted under 5 U.S.C. 552a(b) of the Privacy Act, all or a portion of the records or information contained in this system may be disclosed outside Treasury/BEP as a routine use pursuant to 5 U.S.C. 552a(b)(3) as follows:

    (1) Appropriate federal, state, local, or foreign agencies, or other public authority agencies responsible for investigating or prosecuting the violations of, or for enforcing, or implementing, a statute, rule, regulation, order, or license, where the disclosing agency becomes aware of a potential violation of civil, administrative, or criminal law, or regulation;

    (2) To federal, state, local, or other public authority agency maintaining civil, criminal, administrative or other relevant enforcement information or other pertinent information, which has requested information relevant to, or necessary to, the requesting agency's, bureau's, or authority's hiring or retention of an individual, or issuance of a security clearance, license, contract, grant, or other benefit;

    (3) To a court, adjudicative body, or other administrative body before which BEP is authorized to appear when (a) the agency, or (b) any employee of the agency in his or her official capacity, or (c) any employee of the agency in his or her individual capacity where the Department of Justice or the agency has agreed to represent the employee, or (d) the United States, when the agency determines that litigation is likely to affect the agency, is a party to litigation or has an interest in such litigation, and the use of such records by the agency is deemed to be relevant and necessary to the litigation or administrative proceeding and not otherwise privileged;

    (4) To a court, magistrate, administrative tribunal, or named parties in the course of presenting evidence, including disclosures to opposing counsel or witnesses in the course of civil discovery, litigation, or settlement negotiations or in connection with criminal law proceedings or in response to a court order where arguably relevant to a proceeding;

    (5) To an arbitrator, mediator, or other neutral party, in the context of alternative dispute resolution, to the extent relevant and necessary for resolution of the matters presented, including asserted privileges;

    (6) The U.S. Department of Justice (“DOJ”) for its use in providing legal advice to the BEP or in representing the BEP in a proceeding before a court, adjudicative body, or other administrative body before which the BEP is authorized to appear, where the BEP deems DOJ's use of such information relevant and necessary to the litigation, and such proceeding names as a party or interests:

    (a) The BEP or any component of it;

    (b) Any employee of the BEP in his or her official capacity;

    (c) Any employee of the BEP in his or her individual capacity where DOJ has agreed to represent the employee; or

    (d) The United States, where the BEP determines that litigation is likely to affect the BEP or any of its components;

    (7) Unions recognized as exclusive bargaining representatives under the Civil Service Reform Act of 1978, 5 U.S.C. 7111 and 7114 to the extent necessary to obtain information pertinent to an investigation or matter under consideration;

    (8) To a congressional office in response to an inquiry made at the request of the individual to whom the record pertains;

    (9) To appropriate agencies, entities, and persons when (a) the Department suspects or has confirmed that the security or confidentiality of information in the system of records has been compromised; (b) the Department has determined that as a result of the suspected or confirmed compromise there is a risk of harm to economic or property interests, identity theft or fraud, or harm to the security or integrity of this system or other systems or programs (whether maintained by the Department or another agency or entity) that rely upon the compromised information; and (c) the disclosure made to such agencies, entities, and persons is reasonably necessary to assist in connection with the Department's efforts to respond to the suspected or confirmed compromise and prevent, minimize, or remedy such harm.

    Policies and practices for storing, retrieving, accessing, retaining, and disposing of records in the system: Storage:

    Records in this system are stored electronically or on paper in secure facilities, in a locked drawer behind a locked door. Electronic records are stored on magnetic disc, tape, or other electronic media.

    Retrievability:

    Records are retrievable by the name of the party or individual who is the subject of, or is connected to, a matter received by or assigned to the office; name of the office; office file number; case number; case name; staff name; case and/or matter status; case and/or matter subject; date the case and/or matter was opened; date the case and/or matter was closed; date the case and/or matter was modified; or by keyword search.

    Safeguards:

    Access to electronic and paper records is limited to the Office of the Chief Counsel personnel. Paper records are maintained in locked facilities and/or cabinets with restricted access. Electronic records are restricted to authorized personnel who have been issued non-transferrable access passwords.

    Retention and disposal:

    Paper and electronic records are retained and disposed in accordance with the Bureau of Engraving and Printing Records Schedule, N1-318-04-3, and General Records Schedules (GRS) 1, 2.8, 4.1, 4.2, and 23, as approved by the National Archives and Records Administration.

    System Manager and address:

    Chief Counsel, Bureau of Engraving and Printing, Office of the Chief Counsel, District of Columbia Facility, 14th & C Streets SW., Room 419-A, Washington, DC 20228, and Western Currency Facility, 9000 Blue Mound Road, Fort Worth, TX 76131.

    Notification procedure:

    Individuals seeking to determine whether this system of records contains their information should address written inquiries in accordance with 31 CFR part 1 to the Disclosure Officer, Bureau of Engraving and Printing, Office of the Chief Counsel—FOIA and Transparency Services, 14th & C Streets SW., Room 419-A, Washington, DC 20228.

    Record access procedures:

    See “Notification procedure” above.

    Contesting record procedures:

    See “Notification procedure” above.

    Record source categories:

    The sources of the records include: (1) Existing BEP personnel and records, (2) the subject of the record, (3) parties and witnesses to disputed matters of fact or law, (4) Congressional offices, and (5) federal, state, and/or local agencies.

    Exemptions claimed for the system:

    None.

    [FR Doc. 2016-26661 Filed 11-3-16; 8:45 am] BILLING CODE 4840-01-P
    Part II Department of Health and Human Services Centers for Medicare & Medicaid Services 42 CFR Parts 414 and 495 Medicare Program; Merit-Based Incentive Payment System (MIPS) and Alternative Payment Model (APM) Incentive Under the Physician Fee Schedule, and Criteria for Physician-Focused Payment Models; Final Rule DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare & Medicaid Services 42 CFR Parts 414 and 495 [CMS-5517-FC] RIN 0938-AS69 Medicare Program; Merit-Based Incentive Payment System (MIPS) and Alternative Payment Model (APM) Incentive Under the Physician Fee Schedule, and Criteria for Physician-Focused Payment Models AGENCY:

    Centers for Medicare & Medicaid Services (CMS), HHS.

    ACTION:

    Final rule with comment period.

    SUMMARY:

    The Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) repeals the Medicare sustainable growth rate (SGR) methodology for updates to the physician fee schedule (PFS) and replaces it with a new approach to payment called the Quality Payment Program that rewards the delivery of high-quality patient care through two avenues: Advanced Alternative Payment Models (Advanced APMs) and the Merit-based Incentive Payment System (MIPS) for eligible clinicians or groups under the PFS. This final rule with comment period establishes incentives for participation in certain alternative payment models (APMs) and includes the criteria for use by the Physician-Focused Payment Model Technical Advisory Committee (PTAC) in making comments and recommendations on physician-focused payment models (PFPMs). Alternative Payment Models are payment approaches, developed in partnership with the clinician community, that provide added incentives to deliver high-quality and cost-efficient care. APMs can apply to a specific clinical condition, a care episode, or a population. This final rule with comment period also establishes the MIPS, a new program for certain Medicare-enrolled practitioners. MIPS will consolidate components of three existing programs, the Physician Quality Reporting System (PQRS), the Physician Value-based Payment Modifier (VM), and the Medicare Electronic Health Record (EHR) Incentive Program for Eligible Professionals (EPs), and will continue the focus on quality, cost, and use of certified EHR technology (CEHRT) in a cohesive program that avoids redundancies. In this final rule with comment period we have rebranded key terminology based on feedback from stakeholders, with the goal of selecting terms that will be more easily identified and understood by our stakeholders.

    DATES:

    Effective date: The provisions of this final rule with comment period are effective on January 1, 2017.

    Comment date: To be assured consideration, comments must be received at one of the addresses provided below, no later than 5 p.m. on December 19, 2016.

    ADDRESSES:

    In commenting, please refer to file code CMS-5517-FC. Because of staff and resource limitations, we cannot accept comments by facsimile (FAX) transmission. You may submit comments in one of four ways (please choose only one of the ways listed):

    1. Electronically. You may submit electronic comments on this regulation to http://www.regulations.gov. Follow the “Submit a comment” instructions.

    2. By regular mail. You may mail written comments to the following address ONLY: Centers for Medicare & Medicaid Services, Department of Health and Human Services, Attention: CMS-5517-FC, P.O. Box 8013, Baltimore, MD 21244-8013.

    Please allow sufficient time for mailed comments to be received before the close of the comment period.

    3. By express or overnight mail. You may send written comments to the following address ONLY: Centers for Medicare & Medicaid Services, Department of Health and Human Services, Attention: CMS-5517-FC, Mail Stop C4-26-05, 7500 Security Boulevard, Baltimore, MD 21244-1850.

    4. By hand or courier. Alternatively, you may deliver (by hand or courier) your written comments ONLY to the following addresses prior to the close of the comment period:

    a. For delivery in Washington, DC—Centers for Medicare & Medicaid Services, Department of Health and Human Services, Room 445-G, Hubert H. Humphrey Building, 200 Independence Avenue SW., Washington, DC 20201.

    (Because access to the interior of the Hubert H. Humphrey Building is not readily available to persons without Federal government identification, commenters are encouraged to leave their comments in the CMS drop slots located in the main lobby of the building. A stamp-in clock is available for persons wishing to retain a proof of filing by stamping in and retaining an extra copy of the comments being filed.)

    b. For delivery in Baltimore, MD—Centers for Medicare & Medicaid Services, Department of Health and Human Services, 7500 Security Boulevard, Baltimore, MD 21244-1850.

    If you intend to deliver your comments to the Baltimore address, call telephone number (410) 786-7195 in advance to schedule your arrival with one of our staff members.

    Comments erroneously mailed to the addresses indicated as appropriate for hand or courier delivery may be delayed and received after the comment period.

    For information on viewing public comments, see the beginning of the SUPPLEMENTARY INFORMATION section.

    FOR FURTHER INFORMATION CONTACT:

    Molly MacHarris, (410) 786-4461, for inquiries related to MIPS. James P. Sharp, (410) 786-7388, for inquiries related to APMs.

    SUPPLEMENTARY INFORMATION:

    Table of Contents

    I. Executive Summary
    II. Provisions of the Proposed Regulations and Analysis of and Responses to Comments
    A. Establishing MIPS and the Advanced APM Incentive
    B. Program Principles and Goals
    C. Changes to Existing Programs
    D. Definitions
    E. MIPS Program Details
    F. Overview of Incentives for Participation in Advanced Alternative Payment Models
    III. Collection of Information Requirements
    IV. Regulatory Impact Analysis
    A. Statement of Need
    B. Overall Impact
    C. Changes in Medicare Payments
    D. Impact on Beneficiaries
    E. Impact on Other Health Care Programs and Providers
    F. Alternatives Considered
    G. Assumptions and Limitations
    H. Accounting Statement

    Acronyms

    Because of the many terms to which we refer by acronym in this rule, we are listing the acronyms used and their corresponding meanings in alphabetical order below:

    ABCTM Achievable Benchmark of Care
    ACO Accountable Care Organization
    APM Alternative Payment Model
    APRN Advanced Practice Registered Nurse
    ASPE HHS' Office of the Assistant Secretary for Planning and Evaluation
    BPCI Bundled Payments for Care Improvement
    CAH Critical Access Hospital
    CAHPS Consumer Assessment of Healthcare Providers and Systems
    CBSA Non-Core Based Statistical Area
    CDS Clinical Decision Support
    CEHRT Certified EHR technology
    CFR Code of Federal Regulations
    CHIP Children's Health Insurance Program
    CJR Comprehensive Care for Joint Replacement
    CMMI Center for Medicare & Medicaid Innovation (CMS Innovation Center)
    COI Collection of Information
    CPIA Clinical Practice Improvement Activity
    CPOE Computerized Provider Order Entry
    CPR Customary, Prevailing, and Reasonable
    CPS Composite Performance Score
    CPT Current Procedural Terminology
    CQM Clinical Quality Measure
    CY Calendar Year
    eCQM electronic Clinician Quality Measure
    ED Emergency Department
    EHR Electronic Health Record
    EP Eligible Professional
    ESRD End-Stage Renal Disease
    FFS Fee-for-Service
    FR Federal Register
    FQHC Federally Qualified Health Center
    GAO Government Accountability Office
    HIE Health Information Exchange
    HIPAA Health Insurance Portability and Accountability Act of 1996
    HITECH Health Information Technology for Economic and Clinical Health
    HPSA Health Professional Shortage Area
    HHS Department of Health & Human Services
    HRSA Health Resources and Services Administration
    IHS Indian Health Service
    IT Information Technology
    LDO Large Dialysis Organization
    MACRA Medicare Access and CHIP Reauthorization Act of 2015
    MEI Medicare Economic Index
    MIPAA Medicare Improvements for Patients and Providers Act of 2008
    MIPS Merit-based Incentive Payment System
    MLR Minimum Loss Rate
    MSPB Medicare Spending per Beneficiary
    MSR Minimum Savings Rate
    MUA Medically Underserved Area
    NPI National Provider Identifier
    OCM Oncology Care Model
    ONC Office of the National Coordinator for Health Information Technology
    PECOS Medicare Provider Enrollment, Chain, and Ownership System
    PFPMs Physician-Focused Payment Models
    PFS Physician Fee Schedule
    PHS Public Health Service
    PQRS Physician Quality Reporting System
    PTAC Physician-Focused Payment Model Technical Advisory Committee
    QCDR Qualified Clinical Data Registry
    QP Qualifying APM Participant
    QRDA Quality Reporting Document Architecture
    QRUR Quality and Cost Reports
    RBRVS Resource-Based Relative Value Scale
    RFI Request for Information
    RHC Rural Health Clinic
    RIA Regulatory Impact Analysis
    RVU Relative Value Unit
    SGR Sustainable Growth Rate
    TCPI Transforming Clinical Practice Initiative
    TIN Tax Identification Number
    VM Value-Based Payment Modifier
    VPS Volume Performance Standard

    I. Executive Summary

    1. Overview

    The Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) (Pub. L. 114-10, enacted April 16, 2015), amended title XVIII of the Social Security Act (the Act) to repeal the Medicare sustainable growth rate, to reauthorize the Children's Health Insurance Program, and to strengthen Medicare access by improving physician and other clinician payments and making other improvements. This rule finalizes policies to improve physician and other clinician payments by changing the way Medicare incorporates quality measurement into payments and by developing new policies to address and incentivize participation in Alternative Payment Models (APMs). These unified policies to promote greater value within the healthcare system are referred to as the Quality Payment Program.

    The MACRA, landmark bipartisan legislation, advances a forward-looking, coordinated framework for health care providers to successfully take part in the CMS Quality Payment Program that rewards value and outcomes in one of two ways:

    • Advanced Alternative Payment Models (Advanced APMs).

    • Merit-based Incentive Payment System (MIPS).

    The MACRA marks a milestone in efforts to improve and reform the health care system. Building off of the successful coverage expansions and improvements to access under the Patient Protection and Affordable Care Act (Affordable Care Act), the MACRA puts an increased focus on the quality and value of care delivered. By implementing MACRA to promote participation in certain APMs, such as the Shared Savings Program, Medical Home Models, and innovative episode payment models for cardiac and joint care, and by paying eligible clinicians for quality and value under MIPS, we support the nation's progress toward achieving a patient-centered health care system that delivers better care, smarter spending, and healthier people and communities. By driving significant changes in how care is delivered to make the health care system more responsive to patients and families, we believe the Quality Payment Program supports eligible clinicians in improving the health of their patients, including by encouraging interested eligible clinicians to transition successfully into APMs. To implement this vision, we are finalizing a program that emphasizes high-quality care and patient outcomes while minimizing burden on eligible clinicians, and that is flexible, highly transparent, and improves over time with input from clinical practices. To aid in this process, we have sought feedback from the health care community through various public avenues and solicited comment through the proposed rule. As we establish policies for effective implementation of the MACRA, we do so with the explicit understanding that technology, infrastructure, physician support systems, and clinical practices will change over the next few years. In addition, we are aware of the diversity of clinician practices in their experience with quality-based payments. As a result of these factors, we expect the Quality Payment Program to evolve over multiple years in order to achieve our national goals. In the early years of the program, we will begin by laying the groundwork for expansion towards an innovative, outcome-focused, patient-centered, resource-effective health system. Through a staged approach, we can develop policies that are operationally feasible and made in consideration of system capabilities and our core strategies to drive progress and reform efforts. Due to this staged approach, we are finalizing the rule with a comment period, and we commit to continuing to iterate on these policies.

    The Quality Payment Program aims to do the following: (1) Support care improvement by focusing on better outcomes for patients, decreased provider burden, and preservation of independent clinical practice; (2) promote adoption of alternative payment models that align incentives across healthcare stakeholders; and (3) advance existing efforts of Delivery System Reform, including ensuring a smooth transition to a new system that promotes high-quality, efficient care through unification of CMS legacy programs.

    This final rule with comment period establishes the Quality Payment Program and its two interrelated pathways: Advanced APMs and the MIPS. This final rule with comment period establishes incentives for participation in Advanced APMs, supporting the Administration's goals of transitioning from fee-for-service (FFS) payments to payments for quality and value, including approaches that focus on better care, smarter spending, and healthier people. This final rule with comment period also includes definitions of Qualifying APM Participants (QPs) in Advanced APMs and outlines the criteria for use by the Physician-Focused Payment Model Technical Advisory Committee (PTAC) in making comments and recommendations to the Secretary on physician-focused payment models (PFPMs).

    MIPS is a new program for certain Medicare-participating eligible clinicians that will make payment adjustments based on performance on quality, cost, and other measures, and will consolidate components of three existing programs—the Physician Quality Reporting System (PQRS), the Physician Value-based Payment Modifier (VM), and the Medicare Electronic Health Record (EHR) Incentive Program for eligible professionals (EPs). As prescribed by Congress, MIPS will focus on: Quality—both evidence-based, specialty-specific standards and practice-based improvement activities; cost; and use of certified electronic health record (EHR) technology (CEHRT) to support interoperability and advanced quality objectives in a single, cohesive program that avoids redundancies. Many features of MIPS are intended to be simplified and integrated further during the second and third years.

    2. Quality Payment Program Strategic Objectives

    We solicited and reviewed over 4,000 comments and had over 100,000 physicians and other stakeholders attend our outreach sessions. Through this outreach, we created six strategic objectives to drive continued progress and improvement.

    These objectives guided our final policies and will guide our future rulemaking in order to design, implement and evolve a Quality Payment Program that aims to improve health outcomes, promote smarter spending, minimize burden of participation, and provide fairness and transparency in operations. These strategic objectives are as follows: (1) To improve beneficiary outcomes and engage patients through patient-centered Advanced APM and MIPS policies; (2) to enhance clinician experience through flexible and transparent program design and interactions with easy-to-use program tools; (3) to increase the availability and adoption of robust Advanced APMs; (4) to promote program understanding and maximize participation through customized communication, education, outreach and support that meet the needs of the diversity of physician practices and patients, especially the unique needs of small practices; (5) to improve data and information sharing to provide accurate, timely, and actionable feedback to clinicians and other stakeholders; and (6) to ensure operational excellence in program implementation and ongoing development. More information on these objectives and the Quality Payment Program can be found at QualityPaymentProgram.cms.gov.

    With these objectives we recognize that the Quality Payment Program provides new opportunities to improve care delivery by supporting and rewarding clinicians as they find new ways to engage patients, families and caregivers and to improve care coordination and population health management. In addition, we recognize that by developing a program that is flexible instead of one-size-fits-all, clinicians will be able to choose to participate in a way that is best for them, their practice, and their patients. For clinicians interested in APMs, we believe that by setting ambitious yet achievable goals, eligible clinicians will move with greater certainty toward these new approaches of delivering care. To these ends, and to ensure this program works for all stakeholders, we further recognize that we must provide ongoing education, support, and technical assistance so that clinicians can understand program requirements, use available tools to enhance their practices, and improve quality and progress toward participation in alternative payment models if that is the best choice for their practice. Finally, we understand that we must achieve excellence in program management, focusing on customer needs, promoting problem-solving, teamwork, and leadership to provide continuous improvements in the Quality Payment Program.

    3. One Quality Payment Program

    Clinicians have told us that they do not separate their patient care into domains, and that the Quality Payment Program needs to reflect typical clinical workflows in order to achieve its goals of better patient care. Advanced APMs, the focus of one pathway of the Quality Payment Program, contribute to better care and smarter spending by allowing physicians and other clinicians to deliver coordinated, customized, high-quality care to their patients within a streamlined payment system. Within MIPS, the second pathway of the Quality Payment Program, we believe that the unification into one Quality Payment Program can best be accomplished by making connections across the four pillars of the MIPS payment structure identified in the MACRA legislation—quality, clinical practice improvement activities (referred to as “improvement activities”), meaningful use of CEHRT (referred to as “advancing care information”), and resource use (referred to as “cost”)—and by emphasizing that the Quality Payment Program is at its core about improving the quality of patient care. Indeed, the bedrock of the Quality Payment Program is high-quality, patient-centered care followed by useful feedback, in a continuous cycle of improvement. The principal way MIPS measures quality of care is through evidence-based clinical quality measures (CQMs) which MIPS eligible clinicians can select, the vast majority of which are created by or supported by clinical leaders and endorsed by a consensus-based process. Over time, the portfolio of quality measures will grow and develop, driving towards outcomes that are of the greatest importance to patients and clinicians. Through MIPS, we have the opportunity to measure quality not only through clinician-proposed measures, but to take it a step further by also accounting for activities that physicians themselves identify: Namely, practice-driven quality improvement. The MACRA requires us to measure whether technology is used meaningfully. Based on significant feedback, this area is simplified into supporting the exchange of patient information and how technology specifically supports the quality goals selected by the practice. The cost performance category has also been simplified and weighted at zero percent of the final score for the transition year of CY 2017. Given the primary focus on quality, we have accordingly indicated our intention to align these measures fully to the quality measures over time in the scoring system (see section II.E.6.a. for further details). That is, we are establishing special policies for the first year of the Quality Payment Program, which we refer to as the “transition year” throughout this final rule with comment period; this transition year corresponds to the first performance period of the program, calendar year (CY) 2017, and the first payment year, CY 2019. We envision that it will take a few years to reach a steady state in the program, and we therefore anticipate a ramp-up process and gradual transition with less financial risk for clinicians in at least the first 2 years. In the transition year in 2017, we will test this performance category alignment, for example by allowing certain improvement activities that are completed using CEHRT to achieve a bonus score in the advancing care information performance category with the intent of analyzing adoption, and in future years, potentially adding activities that reinforce integration of the program. 
Our hope is for the program to evolve to the point where all the clinical activities captured in MIPS across the four performance categories reflect the single, unified goal of quality improvement.

    4. Summary of the Major Provisions

    a. Transition Year and Iterative Learning and Development Period

    We recognize, as described through many insightful comments, that many eligible clinicians face challenges in understanding the requirements and being prepared to participate in the Quality Payment Program in 2017. As a result, we have decided to finalize transitional policies throughout this final rule with comment period, which will focus the program in its initial years on encouraging participation and educating clinicians, all with the primary goal of placing the patient at the center of the healthcare system. At the same time, we will also increase opportunities to join Advanced APMs, giving eligible clinicians who choose to do so an opportunity to participate.

    Given the wide diversity of clinical practices, the initial development period of the Quality Payment Program implementation would allow physicians to pick their pace of participation for the first performance period that begins January 1, 2017. Eligible clinicians will have three flexible options to submit data to MIPS and a fourth option to join Advanced APMs in order to become QPs, which would ensure they do not receive a negative payment adjustment in 2019.

    In the transition year CY 2017 of the program, this rule finalizes a period during which clinicians and CMS will build capabilities to report and gain experience with the program. Clinicians can choose their course of participation in this year with four options.

    (1) Clinicians can choose to report to MIPS for a full 90-day period or, ideally, the full year, and maximize their chances to qualify for a positive adjustment. In addition, MIPS eligible clinicians who are exceptional performers in MIPS, as shown by the practice information that they submit, are eligible for an additional positive adjustment for each year of the first 6 years of the program.

    (2) Clinicians can choose to report to MIPS for a period of time less than the full year performance period 2017 but for a full 90-day period at a minimum and report more than one quality measure, more than one improvement activity, or more than the required measures in the advancing care information performance category in order to avoid a negative MIPS payment adjustment and to possibly receive a positive MIPS payment adjustment.

    (3) Clinicians can choose to report one measure in the quality performance category; one activity in the improvement activities performance category; or report the required measures of the advancing care information performance category and avoid a negative MIPS payment adjustment. Alternatively, if MIPS eligible clinicians choose not to report even one measure or activity, they will receive the full negative 4 percent adjustment.

    (4) MIPS eligible clinicians can participate in Advanced APMs, and if they receive a sufficient portion of their Medicare payments or see a sufficient portion of their Medicare patients through the Advanced APM, they will qualify for a 5 percent incentive payment in 2019.

    We are finalizing the 2017 performance period for the 2019 MIPS payment year to be a transition year as part of the development period in the program. For this transition year, the MIPS performance threshold will be lowered to 3 points. Clinicians who achieve a final score of 70 or higher will be eligible for the exceptional performance adjustment, funded from a pool of $500 million.
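
    To illustrate how these transition-year thresholds interact, the following minimal sketch (in Python, for illustration only) classifies a hypothetical final score against the 3-point performance threshold and the 70-point exceptional performance threshold described above; the function name and outcome labels are not part of the regulation.

        # Illustrative sketch only; not regulation text. The 3-point performance
        # threshold and the 70-point exceptional performance threshold are taken
        # from this final rule; the labels below are hypothetical.
        def transition_year_outcome(final_score):
            """Classify a CY 2017 final score against the finalized thresholds."""
            if final_score < 3:
                return "negative MIPS payment adjustment"
            if final_score == 3:
                return "neutral MIPS payment adjustment"
            if final_score < 70:
                return "positive MIPS payment adjustment (scaled for budget neutrality)"
            return "positive adjustment plus exceptional performance adjustment"

        print(transition_year_outcome(3))    # neutral MIPS payment adjustment
        print(transition_year_outcome(85))   # positive adjustment plus exceptional performance adjustment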

    For full participation in MIPS and in order to achieve the highest possible final scores, MIPS eligible clinicians are encouraged to submit measures and activities in all three integrated performance categories: Quality, improvement activities, and advancing care information. To address public comments on the cost performance category, the weighting of the cost performance category has been lowered to 0 percent for the transition year. For full participation in the quality performance category, clinicians will report on six quality measures, or one specialty-specific or subspecialty-specific measure set. For full participation in the advancing care information performance category, MIPS eligible clinicians will report on five required measures. For full participation in the improvement activities performance category, clinicians can engage in up to four activities, rather than the proposed six activities, to earn the highest possible score of 40.

    For the transition year CY 2017, for quality, clinicians who submit just one of the required quality measures (out of at least six) will meet the MIPS performance threshold of 3 points; however, more measures are required for groups who submit measures using the CMS Web Interface. Also for quality in the transition year CY 2017, higher measure points may be awarded based on achieving higher performance on a measure. For improvement activities, attesting to at least one improvement activity will also be sufficient to meet the MIPS performance threshold in the transition year CY 2017. For advancing care information, clinicians reporting on the required measures in that category will meet the performance threshold in the transition year. These transition year policies for CY 2017 will encourage participation by clinicians and will provide a ramp-up period for clinicians to prepare for higher performance thresholds in the second year of the program.

    Historical evidence has shown that clinical practices of all sizes can successfully submit data, including over 110,000 solo and small practices with 15 or fewer clinicians who participated in PQRS in 2015. The transition year and development period approach gives clinicians structured, practical choices that can best suit their practices. Resources will be made available to assist clinicians and practices through this transition. The hope is that by lowering the barriers to participation at the outset, we can set the foundation for a program that supports long-term, high-quality patient care through feedback and open communication between CMS and other stakeholders.

    We anticipate that the iterative learning and development period will last longer than the first year, CY 2017, of the program as we move towards a steady state; therefore, we envision CY 2018 to also be transitional in nature to provide a ramp-up of the program and of the performance thresholds. We anticipate making proposals on the parameters of this second transition year through rulemaking in 2017.

    b. Legacy Quality Reporting Programs

    This final rule with comment period will sunset payment adjustments under the current Medicare EHR Incentive Program for EPs (section 1848(o) of the Act), the PQRS (section 1848(k) and (m) of the Act), and the VM (section 1848(p) of the Act) programs after CY 2018. Components of these three programs will be carried forward into MIPS. This final rule with comment period establishes new subpart O of our regulations at 42 CFR part 414 to implement the new MIPS program as required by the MACRA.

    c. Significant Changes From Proposed Rule

    In developing this final rule with comment period, we sought feedback from stakeholders throughout the process, including through Requests for Information in October 2015 and through the comment process for the proposed rule from April to June 2016. We received thousands of comments from a broad range of sources including professional associations and societies, physician practices, hospitals, patient groups, and health IT vendors, and we thank our many commenters and acknowledge their valued input throughout the proposed rule process. In response to comments to the proposed rule, we have made significant changes in this final rule with comment period, including (1) bolstering support for small and independent practices; (2) strengthening the movement towards Advanced Alternative Payment Models by offering potential new opportunities such as the Medicare ACO Track 1+; (3) securing a strong start to the program with a flexible, pick-your-own-pace approach to the initial years of the program; and (4) connecting the statutory domains into one unified program that supports clinician-driven quality improvement. These themes are illustrated in the following specific policy changes: (1) The creation of a transition year and iterative learning and development period in the beginning of the program; (2) the adjustment of the MIPS low-volume threshold; (3) the establishment of an Advanced APM financial risk standard that promotes participation in robust, high-quality models; (4) the simplification of prior “all-or-nothing” requirements in the use of certified EHR technology; and (5) the establishment of Medical Home Model standards that promote care coordination.

    We intend to continue open communication with stakeholders, including consultation with tribes and tribal officials, on an ongoing basis as we develop the Quality Payment Program in future years.

    d. Small Practices

    As outlined above, protection of small, independent practices is an important thematic objective for this final rule with comment period. For 2017, many small practices will be excluded from new requirements due to the low-volume threshold, which has been set at less than or equal to $30,000 in Medicare Part B allowed charges or less than or equal to 100 Medicare patients, representing 32.5 percent of pre-exclusion Medicare clinicians but only 5 percent of Medicare Part B spending. Stakeholder comments suggested setting a higher low-volume threshold for exclusion from MIPS but allowing clinicians that would be excluded by the threshold to opt in to the program if they wished to report to MIPS and receive a MIPS payment adjustment for the year. We considered this option but determined that it was inconsistent with the statutory MIPS exclusion based on the low-volume threshold. We anticipate that more clinicians will be determined to be eligible to participate in the program in future years.
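
    As a rough illustration of how the finalized low-volume threshold operates, the sketch below (Python, illustrative only) applies the two cutoffs described above; the function name and inputs are hypothetical and omit operational details such as the determination period and the identifier at which the threshold is assessed.

        # Illustrative sketch only. The $30,000 and 100-patient cutoffs are the
        # finalized low-volume threshold values; everything else is hypothetical.
        def excluded_by_low_volume(part_b_allowed_charges, medicare_patient_count):
            """Return True if the clinician falls at or below the low-volume threshold."""
            return part_b_allowed_charges <= 30_000 or medicare_patient_count <= 100

        print(excluded_by_low_volume(25_000, 150))   # True: charges at or below $30,000
        print(excluded_by_low_volume(45_000, 150))   # False: exceeds both cutoffs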

    MACRA also provides that solo and small practices may join “virtual groups” and combine their MIPS reporting. Many commenters suggested that we allow groups with more than 10 clinicians to participate as virtual groups. As noted, the statute limits the virtual group option to individuals and groups of not more than 10 clinicians. We are not implementing virtual groups in the transition year CY 2017 of the program; however, through the policies of the transition year and development period, we believe we have addressed some of the concerns expressed by clinicians hesitant to participate in the Quality Payment Program. CMS wants to make sure the virtual group technology is meaningful and simple to use for clinicians, and we look forward to stakeholder engagement on how to structure and implement virtual groups in future years of the program.

    In keeping with the objectives of providing education about the program and maximizing participation, and as mandated by the MACRA, $100 million in technical assistance will be available to MIPS eligible clinicians in small practices, rural areas, and practices located in geographic health professional shortage areas (HPSAs), including IHS, tribal, and urban Indian clinics, through contracts with quality improvement organizations, regional health collaboratives, and others to offer guidance and assistance to MIPS eligible clinicians in practices of 15 or fewer MIPS eligible clinicians. Priority will be given to practices located in rural areas, defined as clinicians in zip codes designated as rural, using the most recent Health Resources and Services Administration (HRSA) Area Health Resource File data set available; medically underserved areas (MUAs); and practices with low MIPS final scores or in transition to APM participation. The MACRA also includes provisions requiring an examination of the pooling of financial risk for physician practices, in particular for small practices. Specifically, section 101(c)(2)(C) of MACRA requires the Government Accountability Office (GAO) to submit a report to Congress, not later than January 1, 2017, examining whether entities that pool financial risk for physician practices, such as independent risk managers, can play a role in supporting physician practices, particularly small physician practices, in assuming financial risk for the treatment of patients. We have been closely engaged with the GAO throughout their study to better understand the unique needs and challenges faced by clinicians in small practices and practices in rural or health professional shortage areas. We have provided information to the GAO, and the GAO has shared some of their initial findings regarding these challenges. We look forward to further engagement with the GAO on this topic and to the release of GAO's final report. Using the knowledge obtained from small practices, other stakeholders, and the public, as well as from GAO, we continue to work to improve the flexibility and support available to small, underserved, and rural practices. Throughout the evolution of the Quality Payment Program that will unfold over the years to come, CMS is committed to working together with stakeholders to address the unique challenges these practices encounter.

    Using updated policies for the transition year and development period, we performed an updated regulatory impact analysis, including for small and solo practices. With the extensive changes to policy and increased flexibility, we believe that estimating impacts of this final rule with comment period using only historic 2015 quality submission data significantly overestimates the impact on small and solo practices. Although small and solo practices have historically been less likely to engage in PQRS and quality reporting, we believe that small and solo practices will respond to MIPS by participating at a rate close to that of other practice sizes. In order to quantify the impact of the rule on MIPS eligible clinicians, including small and solo practices, we have prepared two sets of analyses that assume the participation rates for some categories of small practices will be similar to those of other practice size categories. Specifically, our primary analysis assumes that each practice size grouping will achieve at least a 90 percent participation rate, and our alternative assumption is that each practice size grouping will achieve at least an 80 percent participation rate. In both sets of analyses, we estimate that over 90 percent of MIPS eligible clinicians will receive a positive or neutral MIPS payment adjustment in the transition year, and that at least 80 percent of clinicians in small and solo practices with 1-9 clinicians will receive a positive or neutral MIPS payment adjustment.

    e. Advanced Alternative Payment Models (Advanced APMs)

    In this rule, we finalize requirements we will use for the purposes of the incentives for participation in Advanced APMs, and the following is a summary of our finalized policies. The MACRA defines APM for the purposes of the incentive as a model under section 1115A of the Act (excluding a health care innovation award), the Shared Savings Program under section 1899 of the Act, a demonstration under section 1866C of the Act, or a demonstration required by federal law.

    APMs represent an important step forward in the Administration's efforts to move our healthcare system from volume-based to value-based care. APMs that meet the criteria to be Advanced APMs provide the pathway through which eligible clinicians, who would otherwise participate in MIPS, can become Qualifying APM Participants (QPs), and therefore, earn incentive payments for their Advanced APM participation. In the proposed rule, we estimated that 30,000 to 90,000 clinicians would be QPs in 2017. With new Advanced APMs expected to become available for participation in 2017 and 2018, including the Medicare ACO Track 1 Plus (1+), and anticipated amendments to reopen applications for or modify current APMs, such as the Maryland All-Payer Model and Comprehensive Care for Joint Replacement (CJR) model, we anticipate higher numbers of QPs—approximately 70,000 to 120,000 in 2017 and 125,000 to 250,000 in 2018.

    As discussed in section II.F.4.b. of this final rule with comment period, we are exploring development of the Medicare ACO Track 1+ Model to begin in 2018. The model would be voluntary for ACOs currently participating in Track 1 of the Shared Savings Program or ACOs seeking to participate in the Shared Savings Program for the first time. It would test a payment model that incorporates more limited downside risk than is currently present in Tracks 2 or 3 of the Shared Savings Program but sufficient financial risk in order to be an Advanced APM. We will announce additional information about the model in the future.

    This rule finalizes two types of Advanced APMs: Advanced APMs and Other Payer Advanced APMs. To be considered an Advanced APM, an APM must meet all three of the following criteria, as required under section 1833(z)(3)(D) of the Act: (1) The APM must require participants to use CEHRT; (2) The APM must provide for payment for covered professional services based on quality measures comparable to those in the quality performance category under MIPS; and (3) The APM must either require that participating APM Entities bear risk for monetary losses of a more than nominal amount under the APM, or be a Medical Home Model expanded under section 1115A(c) of the Act. In this rule, we finalize proposals pertaining to all of these criteria.

    To be an Other Payer Advanced APM, as set forth in section 1833(z)(2) of the Act, a payment arrangement with a payer (for example, Medicaid or a commercial payer) must meet all three of the following criteria: (1) The payment arrangement must require participants to use CEHRT; (2) The payment arrangement must provide for payment for covered professional services based on quality measures comparable to those in the quality performance category under MIPS; and (3) The payment arrangement must require participants to either bear more than nominal financial risk if actual aggregate expenditures exceed expected aggregate expenditures, or be a Medicaid Medical Home Model that meets criteria comparable to Medical Home Models expanded under section 1115A(c) of the Act.
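
    The three criteria for Advanced APMs and Other Payer Advanced APMs amount to a simple conjunctive test. The sketch below (Python, illustrative only) restates the Advanced APM criteria summarized above; the parameter names are hypothetical, and the actual determination involves program-specific review not captured here.

        # Illustrative sketch only; summarizes the three Advanced APM criteria
        # described above. Parameter names are hypothetical.
        def meets_advanced_apm_criteria(requires_cehrt,
                                        pays_on_comparable_quality_measures,
                                        bears_more_than_nominal_risk,
                                        is_expanded_medical_home_model):
            """All three criteria must be met; the third can be satisfied either way."""
            return (requires_cehrt
                    and pays_on_comparable_quality_measures
                    and (bears_more_than_nominal_risk or is_expanded_medical_home_model))

        # An APM that requires CEHRT, bases payment on comparable quality
        # measures, and bears more than nominal financial risk:
        print(meets_advanced_apm_criteria(True, True, True, False))   # True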

    We are completing an initial set of Advanced APM determinations that we will release as soon as possible but no later than January 1, 2017. For new APMs that are announced after the initial determination, we will include Advanced APM determinations in conjunction with the first public notice of the APM, such as the Request for Applications (RFA) or final rule. All determinations of Advanced APMs will be posted on our Web site and updated on an ad hoc basis, but no less frequently than annually, as new APMs become available and others end or change.

    An important avenue for the creation of innovative payment models is the PTAC, created by the MACRA. The PTAC is an 11-member independent federal advisory committee to the HHS Secretary. The PTAC will review stakeholders' proposed PFPMs, and make comments and recommendations to the Secretary regarding whether the PFPMs meet criteria established by the Secretary. PTAC comments and recommendations will be reviewed by the CMS Innovation Center and the Secretary, and we will post a detailed response to them on the CMS Web site.

    (i) QP Determination

    QPs are eligible clinicians in an Advanced APM who have a certain percentage of their patients or payments through an Advanced APM. QPs are excluded from MIPS and receive a 5 percent incentive payment for a year, beginning in 2019 and continuing through 2024. We finalize our proposal that professional services furnished at Critical Access Hospitals (CAHs), Rural Health Clinics (RHCs), and Federally Qualified Health Centers (FQHCs) that meet certain criteria be counted towards the QP determination using the patient count method.

    We finalize definitions of Medical Home Model and Medicaid Medical Home Model and the unique standards by which Medical Home Models may meet the financial risk criterion to be an Advanced APM.

    The statute sets thresholds for the level of participation in Advanced APMs required for an eligible clinician to become a QP for a year. The Medicare Option, based on Part B payments for covered professional services or counts of patients furnished covered professional services under Part B, is applicable beginning in the payment year 2019. The All-Payer Combination Option, which utilizes the Medicare Option as well as an eligible clinician's participation in Other Payer Advanced APMs, is applicable beginning in the payment year 2021. For eligible clinicians to become QPs through the All-Payer Combination Option, an Advanced APM Entity or eligible clinician must participate in an Advanced APM under Medicare and also submit information to CMS so that we can determine whether payment arrangements with non-Medicare payers are Other Payer Advanced APMs and whether an eligible clinician meets the requisite QP threshold of participation. We are finalizing our methodologies to evaluate eligible clinicians using the Medicare and All-Payer Combination Options.

    We are finalizing the two methods by which we will calculate Threshold Scores to compare to the QP thresholds and make QP determinations for eligible clinicians. The payment amount method assesses the amount of payments for Part B covered professional services that are furnished through an Advanced APM. The patient count method assesses the number of patients furnished Part B covered professional services through an Advanced APM.
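
    To make the two calculation methods concrete, the sketch below (Python, illustrative only) computes a Threshold Score under the payment amount method and the patient count method as described above; the QP threshold percentage used in the example is hypothetical, since the statutory threshold levels are not restated in this summary.

        # Illustrative sketch only. The two methods follow the description above;
        # the 25 percent threshold in the example is hypothetical.
        def payment_amount_score(apm_part_b_payments, total_part_b_payments):
            """Percent of Part B covered professional service payments furnished through the Advanced APM."""
            return 100.0 * apm_part_b_payments / total_part_b_payments

        def patient_count_score(apm_patients, total_part_b_patients):
            """Percent of patients furnished Part B covered professional services through the Advanced APM."""
            return 100.0 * apm_patients / total_part_b_patients

        def is_qp(threshold_score, qp_threshold):
            return threshold_score >= qp_threshold

        score = payment_amount_score(450_000, 1_500_000)   # 30.0 percent
        print(is_qp(score, qp_threshold=25))               # True under a hypothetical 25 percent threshold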

    We are finalizing our proposal to identify individual eligible clinicians by a unique APM participant identifier using the individuals' APM, APM Entity, and TIN/NPI combinations, and to assess as an APM Entity group all individual eligible clinicians listed as participating in an Advanced APM Entity to determine their QP status for a year. We are finalizing that if an individual eligible clinician who participates in multiple Advanced APM Entities does not achieve QP status through participation in any single APM Entity, we will assess the eligible clinician individually to determine QP status based on combined participation in Advanced APMs.

    We are finalizing the method to calculate and disburse the lump-sum APM Incentive Payments to QPs, and we are finalizing a specific approach for calculating the APM Incentive Payment when a QP also receives non-FFS payments or has received payment adjustments through the Medicare EHR Incentive Program, PQRS, VM, or MIPS during the prior period used for determining the APM Incentive Payment.

    We are finalizing a modified policy such that, following a final determination that an Advanced APM Entity group or eligible clinician is a Partial Qualifying APM Participant (Partial QP), the Advanced APM Entity—or eligible clinician in the case of an individual determination—will make an election on behalf of all of its eligible clinicians in the group of whether to report to MIPS, thus making all eligible clinicians in the Advanced APM Entity group subject to MIPS payment adjustments; or not report to MIPS, thus excluding all eligible clinicians in the APM Entity group from MIPS adjustments. We finalize our proposals to vet and monitor APM Entities, Advanced APM Entities, and eligible clinicians participating in those entities. We are finalizing a definition for PFPMs and criteria for use by the PTAC in fulfilling its responsibility to evaluate proposals for PFPMs.

    We are finalizing an accelerated timeline for making QP determinations, and will notify eligible clinicians of their QP status as soon as possible, in advance of the end of the MIPS performance period so that QPs will know whether they are excluded from MIPS prior to having to submit information to CMS for purposes of MIPS.

    We are finalizing the requirement that MIPS eligible clinicians, as well as EPs, eligible hospitals, and CAHs under the existing Medicare and Medicaid EHR Incentive Programs, demonstrate cooperation with certain provisions concerning blocking the sharing of information under section 106(b)(2) of the MACRA and, separately, demonstrate engagement with activities that support health care providers with the performance of their CEHRT, such as cooperation with ONC direct review of certified health information technology.

    f. Merit-Based Incentive Payment System (MIPS)

    In establishing MIPS, this final rule with comment period will define MIPS participants as “MIPS eligible clinicians” rather than “MIPS EPs” as that term is defined at section 1848(q)(1)(C) and used throughout section 1848(q) of the Act. MIPS eligible clinicians will include physicians, physician assistants, nurse practitioners, clinical nurse specialists, certified registered nurse anesthetists, and groups that include such clinicians who bill under Medicare Part B. The rule finalizes definitions and requirements for groups. In addition to finalizing definitions for MIPS eligible clinicians, the rule also finalizes rules for the specific Medicare-enrolled clinicians that will be excluded from MIPS, including newly Medicare-enrolled MIPS eligible clinicians, QPs, certain Partial QPs, and clinicians that fall under the finalized low-volume threshold.

    For the 2017 performance period, we estimate that more than half of clinicians—approximately 738,000 to 780,000—billing under the Medicare PFS will be excluded from MIPS due to several factors, including the MACRA itself. We estimate that nearly 200,000 clinicians, or approximately 14.4 percent, are not one of the eligible types of clinicians for the transition year CY 2017 of MIPS under section 1848(q)(1)(C) of the Act. The largest cohort of clinicians excluded from MIPS is low-volume clinicians, defined as those clinicians with less than or equal to $30,000 in allowed charges or less than or equal to 100 Medicare patients, representing approximately 32.5 percent of all clinicians billing Medicare Part B services, or over 380,000 clinicians. Additionally, between 70,000 and 120,000 clinicians (approximately 5-8 percent of all clinicians billing under Medicare Part B) will be excluded from MIPS due to being QPs based on participation in Advanced APMs. In aggregate, the eligible clinicians excluded from MIPS represent only 22 to 27 percent of total Part B allowed charges.

    This rule finalizes MIPS performance standards and a minimum MIPS performance period of any 90 continuous days during CY 2017 (January 1 through December 31) for all measures and activities applicable to the integrated performance categories. After consideration of public comments, this rule finalizes a shorter-than-annual performance period in 2017 to allow flexible participation options for MIPS eligible clinicians as the program begins and evolves over time. For performance periods occurring in 2017, MIPS eligible clinicians will be able to pick a pace of participation that best suits their practices, including submitting data, in special circumstances as discussed in section II.E.5. of this rule, for a period of less than 90 days, to avoid a negative MIPS payment adjustment. Further, we are finalizing our proposal to use performance in 2017 as the performance period for the 2019 payment adjustment. Therefore, the first performance period will start in 2017 and consist of a minimum period of any 90 continuous days during the calendar year in order for clinicians to be eligible for a payment adjustment above neutral. Performance in that period of 2017 will be used to determine the 2019 payment adjustment. This timeframe is needed to allow data and claims to be submitted and data analysis to occur in the initial years. In subsequent years, we intend to explore ways to shorten the period between the performance period and the payment year, and ongoing performance feedback will be provided more frequently. The final policies for CY 2017 provide flexibilities to ensure clinicians have ample participation opportunities.

    As directed by the MACRA, this rule finalizes measures, activities, reporting, and data submission standards across four integrated performance categories: Quality, cost, improvement activities, and advancing care information, each linked by the same overriding mission of supporting care improvement under the vision of one Quality Payment Program. Consideration will be given to the application of measures and activities to non-patient facing MIPS eligible clinicians.

    Under the requirements finalized in this rule, there will be options for reporting as an individual MIPS eligible clinician or as part of a group. Some data may be submitted via relevant third party intermediaries, such as qualified clinical data registries (QCDRs), health IT vendors,1 qualified registries, and CMS-approved survey vendors.

    1 We also note that throughout this final rule, as in the proposed rule, we use the terms “EHR Vendor” and “Health IT Vendor.” First, the use of the term “health IT” and “EHR” are based on the common terminology within the specified program (see 80 FR 62604; and the advancing care information performance category in this rule). Second, we recognize that a “health IT vendor” may or may not also be a “health IT developer” and, in some cases, the developer and the vendor of a single product may be different entities. Under the ONC Health IT Certification Program (Program), a health IT developer constitutes a vendor, self-developer, or other entity that presents health IT for certification or has health IT certified under the Program. Therefore, for purposes of this final rule, we clarify that the term “vendor” shall also include developers who create or develop health IT. Throughout this final rule, we use the term “health IT vendor” or “EHR vendor” to refer to entities that support the health IT requirements of a MIPS eligible clinician participating in the proposed Quality Payment Program. This use is consistent with prior CMS rules, see for example the 2014 CEHRT Flexibility final rule (79 FR 52915).

    Within each performance category, we are finalizing specific requirements for full participation in MIPS, which involves submitting data on quality measures, improvement activities, and use of certified EHR technology for a minimum of any continuous 90 days up to the full calendar year in 2017 in order to be eligible for a positive MIPS payment adjustment. It is at the MIPS eligible clinician's discretion whether to submit data for the same 90-day period for the various measures and activities or for different time periods for different measures and activities. Note that during the 2017 transition year, MIPS eligible clinicians may choose to report a minimum of a single measure in the quality performance category, a single activity in the improvement activities performance category, or the required measures in the advancing care information performance category, in order to avoid a negative payment adjustment. For full participation in MIPS, the specific requirements are as follows:

    (i) Quality

    Quality measures will be selected annually through a call for quality measures process, and a final list of quality measures will be published in the Federal Register by November 1 of each year. For MIPS eligible clinicians choosing full participation in MIPS and the potential for a higher payment adjustment, we note that for a minimum of a continuous 90-day performance period, the MIPS eligible clinician or group will report at least six measures including at least one outcome measure if available. If fewer than six measures apply to the individual MIPS eligible clinician or group, then the MIPS eligible clinician or group will only be required to report on each measure that is applicable.

    Alternatively, for a minimum of a continuous 90-day period, the MIPS eligible clinician or group can report one specialty-specific measure set, or the measure set defined at the subspecialty level, if applicable. If the measure set contains fewer than six measures, MIPS eligible clinicians will be required to report all available measures within the set. If the measure set contains six or more measures, MIPS eligible clinicians can choose six or more measures to report within the set. Regardless of the number of measures that are contained in the measure set, MIPS eligible clinicians reporting on a measure set will be required to report at least one outcome measure or, if no outcome measures are available in the measure set, report another high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures) within the measure set in lieu of an outcome measure.

    (ii) Improvement Activities

    Improvement activities are those that support broad aims within healthcare delivery, including care coordination, beneficiary engagement, population management, and health equity. In response to comments from experts and stakeholders across the healthcare system, improvement activities were given relative weights of high and medium. We are reducing the number of activities required to achieve full credit from six medium-weighted or three high-weighted activities to four medium-weighted or two high-weighted activities to receive full credit in this performance category in CY 2017. For small practices, rural practices, or practices located in geographic health professional shortage areas (HPSAs), and non-patient facing MIPS eligible clinicians, we will reduce the requirement to only one high-weighted or two medium-weighted activities. We also expand our definition of how CMS will recognize a MIPS eligible clinician or group as being a certified patient-centered medical home or comparable specialty practice to include certification from a national program, regional or state program, private payer or other body that administers patient-centered medical home accreditation. As previously mentioned, in recognition of improvement activities as supporting the central mission of a unified Quality Payment Program, we will include a designation in the inventory of improvement activities of which activities also qualify for the advancing care information bonus score, consistent with our desire to recognize that EHR technology is often deployed to improve care in ways that our programs should recognize.
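
    A rough numerical sketch of this scoring change follows (Python, illustrative only). The 40-point category maximum and the activity counts needed for full credit are taken from this summary; the per-activity point values used below are an assumption chosen only to be consistent with those counts, not the finalized scoring rules.

        # Illustrative sketch only. The 40-point maximum and the counts needed for
        # full credit come from this summary; per-activity points are assumed.
        def improvement_activities_score(high_count, medium_count, reduced_requirement=False):
            """Score the improvement activities category, capped at 40 points."""
            high_points, medium_points = (40, 20) if reduced_requirement else (20, 10)
            return min(40, high_count * high_points + medium_count * medium_points)

        print(improvement_activities_score(0, 4))                             # 40: four medium-weighted activities
        print(improvement_activities_score(2, 0))                             # 40: two high-weighted activities
        print(improvement_activities_score(0, 2, reduced_requirement=True))   # 40: reduced requirement (small, rural, HPSA, non-patient facing)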

    (iii) Advancing Care Information Performance Category

    Measures and objectives in the advancing care information performance category focus on the secure exchange of health information and the use of certified electronic health record technology (CEHRT) to support patient engagement and improved healthcare quality. We are maintaining alignment of the advancing care information performance category with the other integrated performance categories for MIPS. We are reducing the total number of required measures from eleven in the proposed rule to only five in our final policy. All other measures would be optional for reporting. Reporting on all five of the required measures would earn the MIPS eligible clinician a score of 50 percent in this performance category. Reporting on the optional measures would allow a clinician to earn a higher score. For the transition year, we will award a bonus score for improvement activities that utilize CEHRT and for reporting to public health or clinical data registries.
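
    As a simplified sketch of this structure (Python, illustrative only): reporting the five required measures earns 50 percent, and optional measures and bonus reporting can raise the score. The treatment of incomplete required reporting as a zero category score, and the point values passed in for optional measures and bonuses, are assumptions rather than the finalized scoring weights.

        # Illustrative sketch only. The 50 percent earned for reporting the five
        # required measures comes from this summary; the other values are assumptions.
        def advancing_care_information_score(reported_all_required,
                                             optional_measure_points=0,
                                             bonus_points=0):
            """Return a category score between 0 and 100 percent."""
            if not reported_all_required:
                return 0
            return min(100, 50 + optional_measure_points + bonus_points)

        print(advancing_care_information_score(True))                                                # 50
        print(advancing_care_information_score(True, optional_measure_points=40, bonus_points=10))   # 100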

    Public commenters requested that the advancing care information performance category allow for reporting on “use cases” such as the use of CEHRT to manage referrals and consultations (“closing the referral loop”) and other practice-based activities for which CEHRT is used as part of the typical workflow. This is an area we intend to explore in future rulemaking but did not finalize any such policies in this rule. However, for the 2017 transition year, we will award bonus points for improvement activities that utilize CEHRT and for reporting to a public health or clinical data registry, reflecting the belief that the advancing care information performance category should align with the other performance categories to achieve the unified goal of quality improvement.

    (iv) Cost

    For the transition year, we are finalizing a weight of zero percent for the cost performance category in the final score, and MIPS scoring in 2017 will be determined based on the other three integrated MIPS performance categories. Cost measures do not require reporting of any data by MIPS eligible clinicians to CMS. Although cost measures will not be used to determine the final score in the transition year, we intend to calculate performance on certain cost measures and give this information in performance feedback to clinicians. We intend to calculate measures of total per capita costs for all attributed beneficiaries and a Medicare Spending per Beneficiary (MSPB) measure. In addition, we are finalizing 10 episode-based measures that were previously made available to clinicians in feedback reports and met standards for reliability. Starting in performance year 2018, as performance feedback is available on at least an annual basis, the cost performance category contribution to the final score will gradually increase from 0 to the 30 percent level required by MACRA by the third MIPS payment year of 2021.
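
    As a shorthand for this phase-in (Python, illustrative only): the zero weight for the transition year and the 30 percent statutory level by the 2021 MIPS payment year come from this summary, while the intermediate value shown is only a placeholder subject to future rulemaking.

        # Illustrative sketch only. The 2019 and 2021 payment year weights come
        # from this summary; the 2020 value is a placeholder, not a finalized policy.
        COST_CATEGORY_WEIGHT_BY_PAYMENT_YEAR = {
            2019: 0.00,   # transition year (CY 2017 performance period)
            2020: 0.10,   # placeholder for the ramp-up year
            2021: 0.30,   # level required by the MACRA
        }

        print(COST_CATEGORY_WEIGHT_BY_PAYMENT_YEAR[2019])   # 0.0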

    (v) Clinicians in MIPS APMs

    We are finalizing standards for measures, scoring, and reporting for MIPS eligible clinicians across all four performance categories, as outlined in section II.E.5.h. of this final rule with comment period. Beginning in 2017, some APMs, by virtue of their structure, will not meet statutory requirements to be categorized as Advanced APMs. Eligible clinicians in these APMs, hereafter referred to as MIPS APMs, will be subject to MIPS reporting requirements and the MIPS payment adjustment. In addition, eligible clinicians who are in Advanced APMs but do not meet participation thresholds to be excluded from MIPS for a year will be subject to the scoring standards for MIPS reporting requirements and the MIPS payment adjustment. In response to comments, in an effort to recognize these eligible clinicians' participation in delivery system reform and to avoid potential duplication or conflicts between these APMs and MIPS, we finalize an APM scoring standard that is different from the generally applicable standard. We finalize our proposal that MIPS eligible clinicians who participate in MIPS APMs will be scored using the APM scoring standard instead of the generally applicable MIPS scoring standard.

    (vi) Scoring Under MIPS

    We are finalizing that MIPS eligible clinicians have the flexibility to submit information individually or via a group or an APM Entity group; however, the MIPS eligible clinician will use the same identifier for all performance categories. The finalized scoring methodology has a unified approach across all performance categories, which will help MIPS eligible clinicians understand in advance what they need to do in order to perform well in MIPS. The three performance category scores (quality, improvement activities, and advancing care information) will be aggregated into a final score. The final score will be compared against a MIPS performance threshold of 3 points. The final score will be used to determine whether a MIPS eligible clinician receives an upward MIPS payment adjustment, no MIPS payment adjustment, or a downward MIPS payment adjustment as appropriate. Upward MIPS payment adjustments may be scaled for budget neutrality, as required by MACRA. The final score will also be used to determine whether a MIPS eligible clinician qualifies for an additional positive adjustment factor for exceptional performance. The performance threshold will be set at 3 points for the transition year, such that clinicians engaged in the program who successfully report one quality measure can avoid a downward adjustment. MIPS eligible clinicians submitting additional data for one or more of the three performance categories for at least a full 90-day period may qualify for varying levels of positive adjustments.
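
    The following sketch (Python, illustrative only) shows how three category scores might roll up into a final score and be read against the transition-year thresholds; the category weights shown are placeholders rather than the finalized weights, and the budget-neutrality scaling of positive adjustments is not modeled.

        # Illustrative sketch only. Cost is weighted at zero in the transition year;
        # the other category weights below are placeholders, not finalized values.
        TRANSITION_YEAR_WEIGHTS = {
            "quality": 0.60,
            "improvement_activities": 0.15,
            "advancing_care_information": 0.25,
            "cost": 0.00,
        }

        def final_score(category_scores, weights=TRANSITION_YEAR_WEIGHTS):
            """Weighted sum of category scores, each expressed on a 0-100 scale."""
            return sum(weights[c] * category_scores.get(c, 0) for c in weights)

        score = final_score({"quality": 80, "improvement_activities": 100,
                             "advancing_care_information": 60})
        print(round(score, 1))   # 78.0 -> above the 3-point threshold and at or above 70, so
                                 # eligible for a positive adjustment and the exceptional performance adjustment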

    In future years of the program, we will require longer performance periods and higher performance in order to avoid a negative MIPS payment adjustment.

    (vii) Performance Feedback

    We are finalizing a process for providing performance feedback to MIPS eligible clinicians. Initially, we will provide performance feedback on an annual basis. In future years, we aim to provide performance feedback on a more frequent basis, as well as providing feedback on the performance categories of improvement activities and advancing care information in line with clinician requests for timely, actionable feedback that they can use to improve care. We are finalizing our proposal to make performance feedback available using a web-based application. Further, we are finalizing our proposal to leverage additional mechanisms such as health IT vendors and registries to help disseminate data contained in the performance feedback to MIPS eligible clinicians where applicable.

    (viii) Targeted Review Processes

    We are finalizing a targeted review process under MIPS wherein a MIPS eligible clinician may request that we review the calculation of the MIPS payment adjustment factor and, as applicable, the calculation of the additional MIPS payment adjustment factor applicable to such MIPS eligible clinician for a year.

    (ix) Third Party Intermediaries

    We are finalizing requirements for third party data submission to MIPS that are intended to decrease burden to individual clinicians. Specifically, qualified registries, QCDRs, health IT vendors, and CMS-approved survey vendors will have the ability to act as intermediaries on behalf of MIPS eligible clinicians and groups for submission of data to CMS across the quality, improvement activities, and advancing care information performance categories.

    (x) Public Reporting

    We are finalizing a process for public reporting of MIPS information through the Physician Compare Web site, with the intention of promoting fairness and transparency. We are finalizing public reporting of a MIPS eligible clinician's data; for each program year, we will post on a public Web site, in an easily understandable format, information regarding the performance of MIPS eligible clinicians or groups under MIPS.

    5. Payment Adjustments

    We estimate that approximately 70,000 to 120,000 clinicians will become QPs in 2017 and approximately 125,000 to 250,000 clinicians will become QPs in 2018 through participation in Advanced APMs; they are estimated to receive between $333 million and $571 million in APM Incentive Payments for CY 2019. As with MIPS, we expect that APM participation will drive quality improvement for clinical care provided to Medicare beneficiaries and to all patients in the health care system.

    Under the policies finalized in this rule, we estimate that between approximately 592,000 and 642,000 eligible clinicians will be required to participate in MIPS in its transition year. In 2019, MIPS payment adjustments will be applied based on MIPS eligible clinicians' performance on specified measures and activities within three integrated performance categories; the fourth category of cost, as previously outlined, will be weighted to zero in the transition year. Assuming that 90 percent of eligible clinicians of all practice sizes participate in the program, we estimate that MIPS payment adjustments will be approximately equally distributed between negative MIPS payment adjustments ($199 million) and positive MIPS payment adjustments ($199 million) to MIPS eligible clinicians, to ensure budget neutrality. Positive MIPS payment adjustments will also include an additional $500 million for exceptional performance payments to MIPS eligible clinicians whose performance meets or exceeds a threshold final score of 70. These MIPS payment adjustments are expected to drive quality improvement in the provision of MIPS eligible clinicians' care to Medicare beneficiaries and to all patients in the health care system. However, the distribution could change based on the final population of MIPS eligible clinicians for CY 2019 and the distribution of scores under the program. We believe that starting with these modest initial MIPS payment adjustments, representing less than 0.2 percent of Medicare expenditures for physician and clinical services, is in the long-term best interest of maximizing participation and starting the Quality Payment Program off on the right foot, even if it limits the upside during the transition year. The increased availability of Advanced APM opportunities, including through Medical Home models, also provides earlier avenues to earn bonus payments for those who choose to participate.

    6. The Broader Context of Delivery System Reform and Healthcare System Innovation

    In January 2015, the Administration announced new goals for transforming Medicare by moving away from traditional FFS payments in Medicare towards a payment system focused on linking physician reimbursements to quality care through APMs (http://www.hhs.gov/about/news/2015/01/26/better-smarter-healthier-in-historic-announcement-hhs-sets-clear-goals-and-timeline-for-shifting-medicare-reimbursements-from-volume-to-value.html#) and other value-based purchasing arrangements. This is part of an overarching Administration strategy to transform how health care is delivered in America, changing payment structures to improve quality and patient health outcomes. The policies finalized in this rule are intended to continue to move Medicare away from a primarily volume-based FFS payment system for physicians and other professionals.

    The Affordable Care Act includes a number of provisions, for example, the Medicare Shared Savings Program, designed to improve the quality of Medicare services, support innovation and the establishment of new payment models, better align Medicare payments with health care provider costs, strengthen Medicare program integrity, and put Medicare on a firmer financial footing.

    The Affordable Care Act created the Center for Medicare and Medicaid Innovation (Innovation Center). The Innovation Center was established by section 1115A of the Act (as added by section 3021 of the Affordable Care Act). The Innovation Center's mandate gives it flexibility within the parameters of section 1115A of the Act to select and test promising innovative payment and service delivery models. The Congress created the Innovation Center for the purpose of testing innovative payment and service delivery models to reduce program expenditures while preserving or enhancing the quality of care provided to those individuals who receive Medicare, Medicaid, or CHIP benefits. See https://innovation.cms.gov/about/index.html. The Secretary may, through rulemaking, expand the duration and scope of a model being tested if (1) the Secretary finds that such expansion is expected to (i) reduce spending without reducing the quality of care, or (ii) improve the quality of patient care without increasing spending; (2) the CMS Chief Actuary certifies that such expansion would reduce (or would not result in any increase in) net program spending under applicable titles; and (3) the Secretary finds that such expansion would not deny or limit the coverage or provision of benefits under the applicable title for applicable individuals.

    The Innovation Center's portfolio of models has attracted participation from a broad array of health care providers, states, payers, and other stakeholders, and serves Medicare, Medicaid, and CHIP beneficiaries in all 50 states, the District of Columbia, and Puerto Rico. We estimate that over 4.7 million Medicare, Medicaid, and CHIP beneficiaries are or soon will be receiving care furnished by the more than 61,000 eligible clinicians currently participating in models tested by the CMS Innovation Center.

    Beyond the care improvements for these beneficiaries, the Innovation Center models are affecting millions of additional Americans by engaging thousands of other health care providers, payers, and states in model tests and through quality improvement efforts across the country. Many payers other than CMS have implemented alternative payment arrangements or models, or have collaborated in the Innovation Center models. The participation of multiple payers in alternative delivery and payment models increases momentum for delivery system transformation and encourages efficiency for health care organizations.

    The Innovation Center works directly with other CMS components and colleagues throughout the federal government in developing and testing new payment and service delivery models. Other federal agencies with which the Innovation Center has collaborated include the Centers for Disease Control and Prevention (CDC), Health Resources and Services Administration (HRSA), Agency for Healthcare Research and Quality (AHRQ), Office of the National Coordinator for Health Information Technology (ONC), Administration for Community Living (ACL), Department of Housing and Urban Development (HUD), Administration for Children and Families (ACF), and the Substance Abuse and Mental Health Services Administration (SAMHSA). These collaborations help the Innovation Center effectively test new models and execute mandated demonstrations.

    7. Stakeholder Input

    In developing this final rule with comment period, we sought feedback from stakeholders and the public throughout the process, such as in the 2016 Medicare PFS Proposed Rule; the Request for Information Regarding Implementation of the Merit-based Incentive Payment System, Promotion of Alternative Payment Models, and Incentive Payments for Participation in Eligible Alternative Payment Models (hereafter referred to as the MIPS and APMs RFI); listening sessions; conversations with a wide range of stakeholders; and consultation with tribes and tribal officials through an All Tribes' Call on May 19, 2016, and several conversations with CMS' Tribal Technical Advisory Group. Through the MIPS and APMs RFI published in the Federal Register on October 1, 2015 (80 FR 59102 through 59113), the Secretary of Health and Human Services (the Secretary) solicited comments regarding implementation of certain aspects of the MIPS and broadly sought public comments on the topics in section 101 of the MACRA, including the incentive payments for participation in APMs and increasing transparency of PFPMs. We received numerous public comments in response to the MIPS and APMs RFI from a broad range of sources including professional associations and societies, physician practices, hospitals, patient groups, and health IT vendors. On May 9, 2016, we published in the Federal Register a proposed rule for the Merit-based Incentive Payment System and Alternative Payment Model Incentive under the Physician Fee Schedule, and Criteria for Physician-Focused Payment Models (81 FR 28161 through 28586). In our proposed rule, we provided the public with proposed policies, implementation strategies, and regulation text, in addition to seeking additional comments on alternative and future approaches for MIPS and APMs. The comment period closed June 27, 2016.

    In response to both the RFI and the proposed rule, we received a high degree of interest from a broad spectrum of stakeholders. We thank our many commenters and acknowledge their valued input throughout the proposed rule process. We discuss and respond to the substance of relevant comments in the appropriate sections of this final rule with comment period. In general, commenters continue to support establishment of the Quality Payment Program and maintain optimism as we move from FFS Medicare payment towards an enhanced focus on the quality and value of care. Public support for our proposed approach and policies in the proposed rule focused on the potential for improving the quality of care delivered to beneficiaries and increasing value to the public—while rewarding eligible clinicians for their efforts. In this early stage of a new program, commenters urged CMS to maintain flexibility and promote maximized clinician participation in MIPS and APMs. Commenters also expressed a willingness and desire to work with CMS to increase the relevance of MIPS activities and measures for physicians and patients and to expand the number and scope of APMs. We have sought to adopt these sentiments throughout relevant sections of this final rule with comment period. Commenters continue to express concern with elements of the legacy programs incorporated into MIPS. We appreciate the many comments received regarding the proposed measures and activities and address those throughout this final rule with comment period. We intend to work with stakeholders to continually seek to connect the program to activities and measures that will result in improvement in care for Medicare beneficiaries. Commenters also continue to be concerned regarding the burden of current and future requirements. Although many commenters recognize the reduced burden from streamlined reporting in MIPS compared to prior programs, they believe CMS could undertake additional steps to improve reporting efficiency. We appreciate provider concerns with reporting burden and have tried to reduce burden where possible while meeting the intent of the MACRA, including our obligations to improve patient outcomes through this quality program.

    In several cases, commenters made suggestions for changes that we considered but ultimately found to be inconsistent with the statute. In keeping with our objective of maintaining transparency in the program, we outline these suggestions in the appropriate sections of this final rule with comment period.

    Commenters raised many concerns about their ability to participate effectively in MIPS in 2017 and about the program's impacts on small practices, rural practitioners, and various specialty practitioner types. We have attempted to address these concerns by including transitional policies and additional flexibility in relevant sections of this final rule with comment period to encourage participation by all eligible clinicians and practitioner types and to avoid undue impact on any particular group.

    Commenters expressed substantial enthusiasm for broadening opportunities to participate in APMs and for the development of new Advanced APMs. Commenters suggested that a number of resources be made available to assist them in moving towards participation in APMs and submitted numerous proposals for enhancing the APM portfolio and shortening the development process for new APMs. In particular, commenters urged us to modify existing Innovation Center models so they can be classified as Advanced APMs. We appreciate commenters' eagerness to participate in Advanced APMs and to be a part of transforming care. While not within the scope of this rule, we note that CMS has developed in conjunction with this rule a new strategic vision for the development of Advanced APMs over the coming years that will provide significantly enhanced opportunities for clinicians to participate in the program. We thank stakeholders again for their considered responses throughout our process, in various venues, including comments to the MIPS and APMs RFI and the proposed rule. We intend to continue open communication with stakeholders, including consultation with tribes and tribal officials, on an ongoing basis as we develop the Quality Payment Program in future years.

    II. Provisions of the Proposed Regulations and Analysis of and Responses to Comments

    A. Establishing MIPS and the Advanced APM Incentive

    Section 1848(q) of the Act, as added by section 101(c) of the MACRA, requires establishment of MIPS. Section 101(e) of the MACRA promotes the development of, and participation in, Advanced APMs for eligible clinicians.

    B. Program Principles and Goals

    Through the implementation of the Quality Payment Program, we strive to continue to support health care quality, efficiency, and patient safety. MIPS promotes better care, healthier people, and smarter spending by evaluating MIPS eligible clinicians using a final score that incorporates MIPS eligible clinicians' performance on quality, cost, improvement activities, and advancing care information. Under the incentives for participation in Advanced APMs, our goals, described in greater detail in section II.F of this final rule with comment period, are to expand the opportunities for participation in both APMs and Advanced APMs, improve care quality and reduce health care costs in current and future Advanced APMs, create clear and attainable standards for incentives, promote the continued flexibility in the design of APMs, and support multi-payer initiatives across the health care market. The Quality Payment Program is designed to encourage eligible clinicians to participate in Advanced APMs. The APM Incentive Payment will be available to eligible clinicians who qualify as QPs through Advanced APMs. MIPS eligible clinicians participating in APMs (who do not qualify as QPs) will receive favorable scoring under certain MIPS categories.

    Our strategic objectives in developing the Quality Payment Program include: (1) Improve beneficiary outcomes through patient-centered MIPS and APM policy development and patient engagement and achieve smarter spending through strong incentives to provide the right care at the right time; (2) enhance clinician experience through flexible and transparent program design and interactions with exceptional program tools; (3) increase the availability and adoption of alternative payment models; (4) promote program understanding and participation through customized communication, education, outreach and support; (5) improve data and information sharing to provide accurate, timely, and actionable feedback to clinicians and other stakeholders; (6) deliver IT systems capabilities that meet the needs of users and are seamless, efficient and valuable on the front- and back-end; and (7) ensure operational excellence in program implementation and ongoing development.

    C. Changes to Existing Programs

    1. Sunsetting of Current Payment Adjustment Programs

    Section 101(b) of the MACRA calls for the sunsetting of payment adjustments under three existing programs for Medicare enrolled physicians and other practitioners:

    • The PQRS that incentivizes EPs to report on quality measures;

    • The VM that provides for budget neutral, differential payment adjustment for EPs in physician groups and solo practices based on quality of care compared to cost; and

    • The Medicare EHR Incentive Program for EPs that entails meeting certain requirements for the use of CEHRT.

    Accordingly, we are finalizing revisions to certain regulations associated with these programs. We are not deleting these regulations entirely, as the final payment adjustments under these programs will not occur until the end of 2018. For PQRS, we are revising § 414.90(e) introductory text and § 414.90(e)(1)(ii) to continue payment adjustments through 2018.

    Similarly, for the Medicare EHR Incentive Program for EPs we are amending § 495.102(d) to remove references to the payment adjustment percentage for years after the 2018 payment adjustment year and add a terminal limit of the 2018 payment adjustment year.

    We did not make changes to 42 CFR part 414, subpart N—Value-Based Payment Modifier Under the PFS (§§ 414.1200 through 414.1285). These regulations are already limited to certain years.

    The following is a summary of the comments we received regarding sunsetting current payment adjustment programs:

    Comment: Several commenters expressed appreciation for CMS's decision to streamline the prior reporting programs into MIPS.

    Response: We appreciate the commenters' support for our proposals.

    Comment: Some commenters were confused by the term “sunsetting,” the timeline for when the prior programs “end,” and whether there would be an overlap in reporting.

    Response: Because of the nature of regulatory text and statutory requirements, we cannot delete text from the public record in order to end or change regulatory programs. Instead, we must amend the text with a date that marks an end to the program, and we refer to this as “sunsetting.” We would also like to clarify that the PQRS, VM, and Medicare EHR Incentive Program for FFS EPs will “end” in 2018 because that is the final year in which payment adjustments for each of these programs will be applied. As the commenters noted, however, the reporting periods or performance periods associated with the 2018 payment year for each of these programs occur prior to 2018. As discussed in section II.E.4. of this final rule with comment period, beginning in 2017, MIPS eligible clinicians will report data for MIPS during at minimum any period of 90 continuous days within CY 2017, and MIPS payment adjustments will begin in 2019 based on the 2017 performance year. Eligible clinicians may also seek to qualify as QPs through participation in Advanced APMs. Eligible clinicians who are QPs for the year are not subject to the MIPS reporting requirements and payment adjustment.

    We plan to provide additional educational materials so that clinicians can easily understand the timelines and requirements for the existing and the new programs.

    Based on the comments received we are finalizing the revision to PQRS at § 414.90(e) introductory text and § 414.90(e)(1)(ii) and to the Medicare EHR Incentive Program at § 495.102(d) as proposed.

    2. Supporting Health Care Providers With the Performance of Certified EHR Technology, and Supporting Health Information Exchange and the Prevention of Health Information Blocking

    a. Supporting Health Care Providers With the Performance of Certified EHR Technology

    We proposed to require EPs, eligible hospitals, and CAHs to attest (as part of their demonstration of meaningful use under the Medicare and Medicaid EHR Incentive Programs) that they have cooperated with the surveillance and direct review of certified EHR technology under the ONC Health IT Certification Program, as authorized by 45 CFR part 170, subpart E. Similarly, we proposed to require such an attestation from all eligible clinicians under the advancing care information performance category of MIPS, including eligible clinicians who report on the advancing care information performance category as part of an APM Entity group under the APM scoring standard.

    As we note below, it is our intent to support the participation of MIPS eligible clinicians, eligible clinicians who are part of an APM Entity, EPs, eligible hospitals, and CAHs (hereafter collectively referred to in this section as “health care providers”) in health IT surveillance and direct review activities. While cooperating with these activities may require prioritizing limited time and other resources, we note that ONC will work with health care providers to accommodate their schedules and consider other circumstances (80 FR 62715). Additionally, ONC has established certain safeguards that can minimize potential burden on health care providers in the event that they are asked to cooperate with the surveillance of their certified EHR technology. Examples of these safeguards, which we described in the proposed rule (81 FR 28171), include: (1) Requiring ONC-Authorized Certification Bodies (ONC-ACBs) to use consistent, objective, valid, and reliable methods when selecting locations at which to perform randomized surveillance of certified health IT (80 FR 62715); (2) allowing ONC-ACBs to use appropriate sampling methodologies to minimize disruption to any individual provider or class of providers and to maximize the value and impact of ONC-ACB surveillance activities for all providers and stakeholders (80 FR 62715); and (3) allowing ONC-ACBs to excuse a health care provider from surveillance and select a different health care provider under certain circumstances (80 FR 62716).

    As background to this proposal, we noted that on October 16, 2015, ONC published the 2015 Edition Health Information Technology (Health IT) Certification Criteria, 2015 Edition Base Electronic Health Record (EHR) Definition, and ONC Health IT Certification Program Modifications final rule (“2015 Edition final rule”). The 2015 Edition final rule made changes to the ONC Health IT Certification Program that enhance the testing, certification, and surveillance of health IT. Importantly, the rule strengthened requirements for the ongoing surveillance of certified EHR technology and other health IT certified on behalf of ONC. Under these requirements established by the 2015 Edition final rule, ONC-ACBs are required to conduct more frequent and more rigorous surveillance of certified technology and capabilities “in the field” (80 FR 62707).

    The purpose of in-the-field surveillance is to provide greater assurance that health IT meets certification requirements not only in a controlled testing environment, but also when used by health care providers in actual production environments (80 FR 62707). In-the-field surveillance can take two forms: First, ONC-ACBs conduct “reactive surveillance” in response to complaints or other indications that certified health IT may not conform to the requirements of its certification (45 CFR 170.556(b)). Second, ONC-ACBs carry out ongoing “randomized surveillance” based on a randomized sample of all certified Complete EHRs and Health IT Modules to assess certified capabilities and other requirements prioritized by the National Coordinator (45 CFR 170.556(c)). Consistent with the purpose of ONC-ACB surveillance—which is to verify that certified health IT performs in accordance with the requirements of its certification when it is implemented and used in the field—an ONC-ACB's assessment of a certified capability must be based on the use of the capability in the live production environment in which the capability has been implemented and is in use (45 CFR 170.556(a)(1)) and must use production data unless test data is specifically approved by the National Coordinator (45 CFR 170.556(a)(2)). Throughout this section, we refer to surveillance by an ONC-ACB as “surveillance.”

    On October 19, 2016, ONC will publish the ONC Enhanced Oversight and Accountability final rule, which enhances oversight under the ONC Health IT Certification Program by establishing processes to facilitate ONC's direct review and evaluation of the performance of certified health IT in certain circumstances, including in response to problems or issues that could pose serious risks to public health or safety (see the October 19, 2016 Federal Register). ONC's direct review of certified health IT may require ONC to review and evaluate the performance of health IT in the production environment in which it has been implemented. Throughout this section, we refer to actions carried out by ONC under the ONC Enhanced Oversight and Accountability final rule as “direct review.”

    When carrying out ONC-ACB surveillance or ONC direct review, ONC-ACBs and/or ONC may request that health care providers supply information (for example, by way of telephone inquiries or written surveys) about the performance of the certified EHR technology capabilities the provider possesses and, when necessary, may request access to the provider's certified EHR technology (and data stored in such certified EHR technology) to confirm that capabilities certified by the developer are functioning appropriately. Health care providers may also be asked to demonstrate capabilities and other aspects of the technology that are the focus of such efforts.

    In the Quality Payment Program proposed rule, we explained that these efforts to strengthen surveillance and direct review of certified health IT are critical to the success of HHS programs and initiatives that require the use of certified health IT to improve health care quality and the efficient delivery of care. We explained that effective ONC-ACB surveillance and ONC direct review is fundamental to providing basic confidence that the certified health IT used under the HHS programs consistently meets applicable standards, implementation specifications, and certification criteria adopted by the Secretary when it is used by health care providers, as well as by other persons with whom health care providers need to exchange electronic health information to comply with program requirements. In particular, the need to ensure that certified health IT consistently meets applicable standards, implementation specifications, and certification criteria is important both at the time the technology is certified (by meeting the requirements for certification in a controlled testing environment) and on an ongoing basis to ensure that the technology continues to meet certification requirements when it is actually implemented and used by health care providers in real-world production environments. We explained that efforts to strengthen surveillance and direct review of certified EHR technology in the field will become even more important as the types and capabilities of certified EHR technology continue to evolve and with the onset of Stage 3 of the Medicare and Medicaid EHR Incentive Programs and MIPS, which include heightened requirements for sharing electronic health information with other providers and with patients. Finally, we noted that effective surveillance and direct review of certified EHR technology is necessary if health care providers are to be able to rely on certifications issued under the ONC Health IT Certification Program as the basis for selecting appropriate technologies and capabilities that support the use of certified EHR technology while avoiding potential implementation and performance issues (81 FR 28170-28171).

    For all of these reasons, the effective surveillance and direct review of certified health IT, and certified EHR technology as it applies to providers covered by this provision, provide greater assurance to health care providers that their certified EHR technology will perform in a manner that meets their expectations and that will enable them to demonstrate that they are using certified EHR technology in a meaningful manner as required by sections 1848(o)(2)(A)(i) and 1886(n)(3)(A)(i) of the Act. We stressed in the proposed rule (81 FR 28170-28171), however, that such surveillance and direct review will not be effective unless health care providers are actively engaged and cooperate with these activities, including by granting access to and assisting ONC-ACBs and ONC to observe the performance of production systems (see also the 2015 Edition final rule at 80 FR 62716).

    Accordingly, we proposed that as part of demonstrating the use of certified EHR technology in a meaningful manner, a health care provider must demonstrate its good faith cooperation with authorized surveillance and direct review. We proposed to revise the definition of a meaningful EHR user at § 495.4 as well as the attestation requirements at § 495.40(a)(2)(i)(H) and § 495.40(b)(2)(i)(H) to require EPs, eligible hospitals, and CAHs to attest their cooperation with certain authorized health IT surveillance and direct review activities as part of demonstrating meaningful use under the Medicare and Medicaid EHR Incentive Programs. Similarly, we proposed to include an identical attestation requirement in the submission requirements for MIPS eligible clinicians under the advancing care information performance category proposed at § 414.1375.

    We proposed that health care providers would be required to attest that they have cooperated in good faith with the authorized ONC-ACB surveillance and ONC direct review of their health IT certified under the ONC Health IT Certification Program, as authorized by 45 CFR part 170, subpart E, to the extent that such technology meets (or can be used to meet) the definition of CEHRT. Under the terms of the attestation, we stated that such cooperation would include responding in a timely manner and in good faith to requests for information (for example, telephone inquiries and written surveys) about the performance of the certified EHR technology capabilities in use by the provider in the field (81 FR 28170 through 28171). It would also include accommodating requests (from ONC-ACBs or from ONC) for access to the provider's certified EHR technology (and data stored in such certified EHR technology) as deployed by the health care provider in its production environment, for the purpose of carrying out authorized surveillance or direct review, and to demonstrate capabilities and other aspects of the technology that are the focus of such efforts, to the extent that doing so would not compromise patient care or be unduly burdensome for the health care provider.

    We stated that the proposed attestation would support providers in meeting the requirements for the meaningful use of certified EHR technology while at the same time minimizing burdens for health care providers and patients (81 FR 28170 through 28171). We requested public comment on this proposal.

    Through public forums, listening sessions, and correspondence received by CMS and ONC, and through the methods available for health care providers to submit technical concerns related to the function of their certified EHR technology, we have received requests that ONC and CMS assist providers in mitigating issues with the performance of their technology, including issues that relate to the safety and interoperability of health IT. Our proposal was designed to help health care providers with these very issues by strengthening participation in surveillance and direct review activities that help assure that their certified EHR technology performs as intended. However, the comments we have received, and which we discuss below, suggest that the support that the policy provides for health IT performance was not understood by some stakeholders. For this reason, we are adopting a modification to the title and language describing this policy in this final rule with comment period to reflect the intent articulated in the proposed rule and to be responsive to the concerns raised by commenters.

    As we have explained, our proposal to require that health care providers cooperate with ONC-ACB surveillance of certified health IT and ONC direct review of certified health IT reflects the need to address technical issues with the functionality of certified EHR technology and to support health care providers with the performance of their certified EHR technology. By cooperating with these activities, health care providers would assist ONC-ACBs and ONC in working with health IT developers to identify and rectify problems and issues with their technology. In addition, a health care provider who assists an ONC-ACB or ONC with these activities is also indirectly supporting other health care providers, interoperability goals, and the health IT infrastructure by helping to ensure the integrity and efficacy of certified health IT products in health care settings. To more clearly and accurately communicate the context and role of health care providers in these activities, and consistent with our approach to clarifying terminology and references, we have adopted new terminology in this final rule with comment period that focuses on the requirements for the health care provider rather than ONC or ONC-ACB actions and processes. In this section, the activities to be engaged in by health care providers in cooperation with ONC direct review or ONC-ACB surveillance are intended to support health care providers with the performance of certified EHR technology. We therefore use the phrase “Supporting Providers with the Performance of Certified EHR technology activities” (hereinafter referred to as “SPPC activities”) to refer to a health care provider's actions related to cooperating in good faith with ONC-ACB authorized surveillance and, separately or collectively as the context requires, a health care provider's actions in cooperating in good faith with ONC direct review.

    Notwithstanding the terminology used in this final rule with comment period, and to avoid any confusion for health care providers engaging with ONC-ACBs or ONC in the future, we note that, when communicating with health care providers about the surveillance or direct review of certified health IT, ONC-ACBs and ONC will use the terminology in the 2015 Edition final rule, the ONC Enhanced Oversight and Accountability final rule, or other relevant ONC rulemakings and regulations, if applicable. In particular, a request for cooperation made by an ONC-ACB to a health care provider will not refer to “SPPC activities.” Rather, the request will typically refer to the ONC-ACB's need to carry out “surveillance” of the certified health IT used by the health care provider. Similarly, if ONC requests the cooperation of a health care provider in connection with ONC's direct review of certified health IT, as described in the ONC Enhanced Oversight and Accountability final rule scheduled for publication in the Federal Register on October 19, 2016, ONC will not use the terminology “SPPC activities.” Rather, ONC will request the cooperation of the health care provider with ONC's “direct review” or “review” of the certified health IT. In addition, throughout this final rule with comment period, we use the term “health IT vendor” to refer to third party entities supporting providers with technology requirements for the Quality Payment Program. In this section, we instead use the term “health IT developer” to distinguish between these third parties and those developers of a health IT product under the ONC rules. In order to maintain consistency with the ONC rules, we use the term “health IT developer” for those that have presented a health IT product to ONC for certification.

    We received public comments on the proposals, and our responses follow.

    Comment: Several commenters expressed concern that the proposed attestation would be unduly burdensome for health care providers. A number of commenters stated that requiring health care providers to engage in SPPC activities related to their certified EHR technology would place a disproportionate burden on providers relative to other stakeholders who share the responsibility of advancing the use of health IT and the exchange of electronic health information. More specifically, several commenters stated that SPPC activities related to a provider's certified EHR technology could disrupt health care operations. According to one commenter, this disruption may be especially burdensome for small practices, which may need to engage a third party to assist them in cooperating in good faith with a request to assist ONC or an ONC-ACB, such as a request to evaluate the performance of certified EHR technology capabilities in the field. Another commenter requested clarification on how evaluations of certified EHR technology would be conducted in production environments without disturbing patient encounters and clinical workflows.

    Commenters offered a number of suggestions to reduce the potential burden of this proposal on health care providers. First, some commenters strongly endorsed the safeguards established by ONC—including methods used to select locations, such as sampling and weighting considerations and the exclusion of certain locations in appropriate circumstances. In addition, one commenter recommended that, where ONC-ACB surveillance or ONC direct review involves evaluating certified EHR technology in the field, the ONC-ACB surveillance or ONC direct review should be scheduled 30 days in advance and at a time that is convenient to accommodate the health care providers' schedules, such as after hours or on weekends. The commenter suggested that this would avoid disruption both to administrative operations and patient care.

    Response: We understand that, if a request to assist ONC or an ONC-ACB is received, cooperating in good faith may require providers to prioritize limited time and other resources—especially for in-the-field evaluations of certified EHR technology. As we explained in the proposed rule, we believe that several safeguards established by ONC will minimize the burden of these activities (81 FR 28171). We note that under the 2015 Edition final rule, randomized surveillance is limited annually to 2 percent of unique certified health IT products (80 FR 62714). To illustrate the potential impact of these activities, for CY 2016 ONC estimates that up to approximately 24 products would be selected by each of its three ONC-ACBs, for a maximum of 72 total products selected across all ONC-ACBs (80 FR 62714). While ONC-ACB surveillance may be carried out at one or more locations for each product selected, we believe the likelihood that a health care provider will be asked to participate in the ONC-ACB surveillance of that product will in many cases be quite small due to the number of other health care providers using the health IT product. Further, the 2015 Edition final rule states that ONC-ACBs may use appropriate sampling methodologies to minimize disruption to any individual or class of health care providers and to maximize the value and impact of randomized surveillance for all health care providers and stakeholders (80 FR 62715). In addition, we reiterate that if an ONC-ACB is unable to complete its randomized surveillance of certified EHR technology at a particular location—such as where, despite a good faith effort, the health care provider at a chosen location is unable to provide the requisite cooperation—the ONC-ACB may exclude the location and substitute a different location for observation (see ONC 2015 Edition final rule 80 FR 62716). ONC has also explained that in many cases in-the-field evaluations of certified EHR technology may be accomplished through an in-person site visit or may instead be accomplished remotely (80 FR 62708). Thus, in general, we expect that health care providers will be presented with a choice of evaluation approaches and be able to choose one that is convenient for their practice.
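    To illustrate the scale of randomized surveillance described above, the following minimal sketch (in Python, included only as an editorial aid) works through the cited figures: approximately 24 products selected per ONC-ACB across three ONC-ACBs, for a maximum of 72 products. The per-product location counts used to illustrate how small any one provider's chance of selection may be are hypothetical assumptions, not ONC estimates.

        # Illustrative sketch only. The per-ONC-ACB product estimate and the number
        # of ONC-ACBs reflect the CY 2016 figures cited above (80 FR 62714); the
        # location counts below are hypothetical assumptions, not program data.
        products_per_acb = 24
        num_acbs = 3
        max_products_surveilled = products_per_acb * num_acbs  # = 72 products

        # Hypothetical illustration: a selected product surveilled at 5 locations
        # that is deployed at 2,000 provider locations.
        surveilled_locations = 5
        deployed_locations = 2000
        chance_location_selected = surveilled_locations / deployed_locations

        print(max_products_surveilled)            # 72
        print(f"{chance_location_selected:.2%}")  # 0.25% under these assumptions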

    We also understand the concerns expressed by some commenters that engaging in SPPC activities should not unreasonably disrupt the workflow or operations of a health care provider. In consultation with ONC, we expect that in most cases ONC and ONC-ACBs will accommodate providers' schedules and other circumstances, and that in most cases providers will be given ample notice of and time to respond to requests from ONC and ONC-ACBs. We note that in some cases it may be necessary to secure a health care provider's cooperation relatively quickly, such as if a potential problem or issue with certified EHR technology poses potentially serious risks to public health or safety (see the ONC Enhanced Oversight and Accountability final rule scheduled for publication in the Federal Register on October 19, 2016).

    Finally, through public comment on the proposed rule, we note that in addition to these specific concerns expressed and addressed regarding SPPC activities, stakeholders share a general concern over the risks and potential negative impact of transitioning to MIPS and upgrading certified health IT in a short time without adequate preparation and support. Stakeholders are particularly concerned about this impact on solo practitioners, small practices, and health care providers with limited resources that may be providing vital access to health care in under-served communities. As noted previously, we believe the safeguards and policies established for ONC-ACBs' activities, discussed above, mitigate the risk of disruption to health care providers under normal circumstances. However, consistent with our overall approach for implementing new programs and requirements such as the Quality Payment Program and historically under the EHR Incentive Programs, we are modifying our final policy from the proposal to allow for additional flexibility for health care providers.

    Our proposed policy would require health care providers to attest that they cooperated in good faith with ONC-ACB surveillance and ONC's direct review of certified health IT in order to demonstrate they have used certified EHR technology in a meaningful manner. In this final rule with comment period, we are finalizing a modified approach that splits the SPPC activities into two parts and draws a distinction between cooperation with ONC direct review and cooperation with ONC-ACB surveillance requests.

    We are finalizing as proposed the requirement to cooperate in good faith with a request relating to ONC direct review of certified health IT. We do not believe it is appropriate to modify this requirement because ONC direct review is designed to mitigate potentially serious risk to public health and safety and to address practical challenges in reviewing certified health IT by an ONC-ACB. However, we are finalizing a modification to the requirement to cooperate with a request relating to ONC-ACB surveillance, which is different from ONC direct review (see discussion above). The modification to ONC-ACB surveillance will allow providers to choose whether to participate in SPPC activities supporting ONC-ACB surveillance of certified EHR technology.

    As described in this section, ONC direct review focuses on situations involving (1) public health and safety and (2) practical challenges for ONC-ACBs, such as when a situation exceeds an ONC-ACB's resources or expertise. We maintain that cooperation in ONC direct review, when applicable, is important to demonstrating that a health care provider used certified EHR technology in a meaningful manner as required by sections 1848(o)(2)(A)(i) and 1886(n)(3)(A)(i) of the Act as stated in the proposed rule (81 FR 28170 through 28171).

    We are therefore finalizing a two part attestation that splits the SPPC activities. As it relates to ONC direct review, the attestation is required. As it relates to ONC-ACB surveillance, the attestation is optional. The attestations are as follows:

    • Health care providers must attest that they engaged in good faith in SPPC activities related to ONC direct review by: (1) Attesting their acknowledgment of the requirement to cooperate in good faith with ONC direct review of their health information technology certified under the ONC Health IT Certification Program if a request to assist in ONC direct review is received; and (2) if a request is received, attesting that they cooperated in good faith in ONC direct review of health IT under the ONC Health IT Certification Program to the extent that such technology meets (or can be used to meet) the definition of certified EHR technology.

    • Optionally, health care providers may attest that they engaged in good faith in SPPC activities related to ONC-ACB surveillance by: (1) Attesting their acknowledgement of the option to cooperate in good faith with ONC-ACB surveillance of their health information technology certified under the ONC Health IT Certification Program if a request to assist in ONC-ACB surveillance is received; and (2) if a request is received, attesting that they cooperated in good faith in ONC-ACB surveillance of health IT under the ONC Health IT Certification Program, to the extent that such technology meets (or can be used to meet) the definition of certified EHR technology.

    As noted previously, only a small percentage of providers are likely to receive a request for assistance from ONC or an ONC-ACB in a given year. Therefore, under this final policy, for both the mandatory attestation and the optional attestation, a health care provider is considered to be engaging in SPPC activities first by attesting to their acknowledgment of the policy and second by attesting to their cooperation in good faith if a request to assist was received from ONC or an ONC-ACB. However, we reiterate that the attestation as it pertains to cooperation with ONC-ACB surveillance is optional for health care providers.

    Operationally, we expect that the submission method selected by the health care provider will influence how these attestations are accomplished (see section II.E.5.a on MIPS submission mechanisms for details, or the 2015 EHR Incentive Programs final rule (80 FR 62896-62901)). For example, a Medicaid EP attesting to their state for the EHR Incentive Programs may be provided a series of statements within the attestation system. In this case, the attestation would be offered in two parts. For the first part, in order to successfully demonstrate meaningful use, the EP must attest that they engaged in SPPC activities related to ONC direct review of certified EHR technology, first by their acknowledgement of the policy, and second by attesting that they cooperated in good faith with ONC direct review of the certified EHR technology if a request to assist was received. For the second part in this example, the Medicaid EP may choose to attest that they engaged in SPPC activities related to ONC-ACB surveillance of certified EHR technology, including attesting to having cooperated in good faith if a request to assist was received, or the EP may choose not to so attest.

    A health care provider electronically submitting data for MIPS would be required to use the form and manner specified for the submission mechanism to indicate their attestation to the first part, and may indicate their attestation to the second part if they so choose. CMS and ONC will also offer continued support and guidance both through educational resources to support participating in and reporting to CMS programs, and through specific guidance for those health care providers who receive requests related to engaging in SPPC activities.
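    As a purely illustrative aid, and not a description of any actual CMS submission format, data specification, or system interface, the following Python sketch shows one way the two-part attestation structure described above could be represented: the ONC direct review part is required, while the ONC-ACB surveillance part is optional. All field and function names are hypothetical.

        # Hypothetical sketch of the two-part SPPC attestation described above.
        # Field names and logic are illustrative assumptions only and do not
        # reflect an actual CMS submission specification.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class SPPCAttestation:
            # Part 1 (required): ONC direct review.
            acknowledges_direct_review_requirement: bool
            cooperated_with_direct_review_request: Optional[bool] = None  # None if no request was received
            # Part 2 (optional): ONC-ACB surveillance.
            acknowledges_surveillance_option: Optional[bool] = None
            cooperated_with_surveillance_request: Optional[bool] = None

        def required_part_satisfied(att: SPPCAttestation) -> bool:
            """Part 1 is satisfied by acknowledging the requirement and, if a
            request to assist was received, by having cooperated in good faith."""
            if not att.acknowledges_direct_review_requirement:
                return False
            return att.cooperated_with_direct_review_request in (None, True)

        # Example: the provider acknowledges the requirement; no direct review
        # request was received, and the optional surveillance part is not attested.
        example = SPPCAttestation(acknowledges_direct_review_requirement=True)
        print(required_part_satisfied(example))  # True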

    Comment: Several commenters opposed any in-the-field observation of a health care provider's certified EHR technology and insisted that such observations be conducted with the developer of the certified EHR technology instead. Some commenters questioned the need to perform observations of certified EHR technology in production environments, observing that health care providers and other users of certified EHR technology often depend on the developer of the certified EHR technology to deliver required functionality and capabilities. One commenter recommended that the observation of certified EHR technology be limited to the use of test systems and test data rather than observation of production systems and data.

    Several commenters stated that health care providers should not be required to cooperate with on-premises observation of their certified EHR technology because an ONC-ACB should be able to access and evaluate the performance of certified health IT capabilities using remote access methods. By contrast, other commenters stated that remote observation could create security risks and that all observations should be conducted on the premises, preferably under the direction of the health care provider's clinical staff.

    Response: To provide adequate assurance that certified EHR technology meets applicable certification requirements and provides the capabilities health care providers need, it is critical to determine not only how certified EHR technology performs in a controlled testing environment but also how it performs in the field. Indeed, a fundamental purpose of ONC-ACB surveillance and ONC direct review is to allow ONC-ACBs and ONC to identify problems or deficiencies in certified EHR technology that may only become apparent once the technology has been implemented and is in use by health care providers in production environments (80 FR 62709). These activities necessarily require the cooperation of the clinicians and other persons who actually use the capabilities of certified EHR technology implemented in production environments, including health care providers. (See 81 FR 28170-71). This cooperation ultimately benefits health care providers and is critical to provider success in the Medicare and Medicaid EHR Incentive Programs and MIPS because it provides confidence that certified EHR technology capabilities will function as expected and that health care providers will be able to demonstrate compliance with CMS program requirements.

    We decline to limit health care providers' engagement in SPPC activities to any particular form of observation, such as on-premises or remote observation of certified capabilities. We note that in the 2015 Edition final rule, ONC explained the observation of certified health IT capabilities in a production environment may require a variety of methodologies and approaches (80 FR 62709). In addition, as the comments suggest, individual health care providers are likely to have different preferences and should have the flexibility to work with an ONC-ACB or ONC to identify an approach to these activities that is most effective and convenient. In this connection, we have consulted with ONC and expect that, where feasible, a health care provider's preference for a particular form of observation will be accommodated.

    For similar reasons, we decline to limit engagement in SPPC activities to the use of test systems or test data. The use of test systems and test data may be allowed in some circumstances, but may not be appropriate in all circumstances. For example, a problem with certified EHR technology capabilities may be difficult or impossible to replicate with test systems or test data. More fundamentally, limiting cooperation to observations of test systems and test data may not provide the same degree of assurance that certified EHR technology used by health care providers (for example, production systems used with production data) continues to meet applicable certification requirements and functions in a manner that supports health care providers' participation in the EHR Incentive Programs and MIPS.

    Comment: One commenter suggested that health care providers who engage in SPPC activities be able to file a formal complaint with ONC or CMS in the event that the ONC-ACB were to “handle matters inappropriately,” and that the ONC-ACB should not be permitted to continue its activities until the complaint has been resolved.

    Response: If a provider has any concerns about the propriety of an ONC-ACB's conduct, including in connection with a request to assist in ONC-ACB surveillance of certified health IT or during in-the-field surveillance of the certified EHR technology, the health care provider should make a formal complaint to ONC detailing the conduct in question. For further information, we direct readers to ONC's Web site: https://www.healthit.gov/healthitcomplaints.

    Comment: A number of commenters were opposed to or raised concerns regarding this proposal on the grounds that requiring health care providers to engage in SPPC activities would violate the HIPAA Rules. Relatedly, a number of commenters stated that requiring providers to give ONC or ONC-ACBs access to their production systems may be inconsistent with a health care organization's privacy or security policies and could introduce security risks. A few commenters stated that observation of certified EHR technology in the field would violate patients' or providers' privacy rights or expectations. Some of these commenters expressed the view that any requirement to engage in SPPC activities would be an unjustified governmental invasion of privacy or other interests.

    Response: As noted in the Quality Payment Program proposed rule and in the 2015 Edition final rule, in consultation with the Office for Civil Rights, ONC has clarified that as a result of ONC's health oversight authority a health care provider is permitted, without patient authorization, to disclose PHI to an ONC-ACB or directly to ONC for purposes of engaging in SPPC activities in cooperation with a request to assist from ONC or an ONC-ACB (81 FR 28171; 80 FR 62716). Health care providers are permitted without patient authorization to make disclosures to a health oversight authority (as defined in 45 CFR 164.501) for oversight activities authorized by law (as described in 45 CFR 164.512(d)), including activities to determine compliance with program standards, and ONC may delegate its authority to ONC-ACBs to perform surveillance of certified health IT under the Program.3 This disclosure of PHI to an ONC-ACB does not require a business associate agreement with the ONC-ACB since the ONC-ACB is not performing a function on behalf of the covered entity. In the same way, a provider, health IT developer, or other person or entity is permitted to disclose PHI directly to ONC, without patient authorization and without a business associate agreement, for purposes of ONC's direct review of certified health IT or the performance of any other oversight responsibilities of ONC to determine compliance under the Program.

    3 See 45 CFR 164.512(d)(1)(iii); 80 FR 62716; and ONC Regulation FAQ #45 [12-13-045-1], available at http://www.healthit.gov/policy-researchers-implementers/45-question-12-13-045.

    We disagree with commenters who maintained that the disclosure of PHI to ONC or an ONC-ACB could be inconsistent with reasonable privacy or other organizational policies or would otherwise be an unjustified invasion of privacy or any other interest. As noted, the disclosure of this information would be authorized by law on the basis that it is a disclosure to a health oversight agency (ONC) for the purpose of determining compliance with a federal program (the ONC Health IT Certification Program). In addition, we note that any further disclosure of PHI by an ONC-ACB or ONC would be limited to disclosures authorized by law, such as under the federal Privacy Act of 1974, or the Freedom of Information Act (FOIA), as applicable.

    Comment: Several commenters requested clarification concerning the types of production data that ONC or an ONC-ACB would be permitted to access (and that a health care provider would make accessible to ONC, or the ONC-ACB) when assessing certified EHR technology in a production environment. Several commenters recommended that production data be limited to the certified capabilities and not extend to other aspects of the health IT.

    Response: A request to assist in ONC-ACB surveillance or ONC direct review may include in-the-field surveillance or direct review of the certified EHR technology to determine whether the capabilities of the health IT are functioning in accordance with the requirements of the ONC Health IT Certification Program. We note that it is common for certified EHR technology to be deployed and integrated with other technologies (including technologies that produce data used across multiple systems and components). Therefore, we believe that determining whether certified EHR technology is operating as it should could mean, for example, ONC reviewing whether the certified EHR technology fails to operate as intended when it interacts with other technologies. We also refer commenters to the 2015 Edition final rule and the ONC Enhanced Oversight and Accountability final rule for more information about the scope of ONC-ACB surveillance and ONC direct review, and for a discussion about the types of capabilities that may be subject to ONC-ACB surveillance and ONC direct review.

    Comment: A commenter observed that while the proposed attestation would be retrospective, health care providers may be unaware of the requirement to engage in SPPC activities until they are presented with the attestation statement. The commenter suggested that health care providers be required to attest only that they will prospectively engage in SPPC activities.

    Response: The attestation is retrospective because it is part of a health care provider's demonstration that it has used certified EHR technology in a meaningful manner for a certain period. Based on our consultation with ONC, health care providers will be made aware of both their obligation to cooperate if they are contacted to assist in ONC direct review of certified health IT and their option to cooperate if they are contacted to assist an ONC-ACB in surveillance of certified health IT. Thus, we believe that health care providers will be able to appropriately engage in SPPC activities for CMS programs and attest to their cooperation.

    Comment: A commenter urged that health care providers be held harmless if engagement in SPPC activities results in a finding that their certified EHR technology no longer conforms to the requirements of the ONC Health IT Certification Program due to the actions of the certified EHR technology developer.

    Response: ONC-ACB surveillance and ONC direct review provide an opportunity to assess the performance of certified EHR technology capabilities in a production environment to determine whether the technology continues to perform in accordance with the requirements of the ONC Health IT Certification Program. This analysis will necessarily be focused on the performance of the technology, which may require the consideration of a provider's use of the technology. However, health care providers that cooperate with the analysis of the performance of certified EHR technology are not themselves subject to ONC or an ONC-ACB's authority under, as applicable, the surveillance requirements of the 2015 Edition final rule, or the direct review requirements of the ONC Enhanced Oversight and Accountability final rule. As such, no adverse finding or determination can be made by ONC or an ONC-ACB against a provider in connection with ONC direct review or ONC-ACB surveillance. If ONC or an ONC-ACB determined that the performance issue being analyzed arose solely from the provider's use of the technology and not from a problem with the technology itself, ONC or an ONC-ACB would not make a nonconformity finding against the health IT, but may decide to notify the provider of its determination for informational purposes only. We do acknowledge, however, that if in the course of ONC-ACB surveillance or ONC direct review, ONC became aware of a violation of law or other requirements, ONC could share that information with relevant federal or state entities. If a certified health IT product is determined to no longer conform with the requirements of the ONC Health IT Certification Program and the health IT's certification were to be terminated by ONC or withdrawn by an ONC-ACB, there exists a process by which an affected health care provider may apply for an exception from payment adjustments under CMS programs on the basis of significant hardship, or for exclusion from the requirement. For example, we direct readers to CMS FAQ# 12657 4 regarding hardship exceptions under the EHR Incentive Programs when the certification of a health IT product is terminated or withdrawn.

    4 CMS FAQ#12657 “What if your product is decertified?”: https://questions.cms.gov/faq.php?isDept=0&search=decertified&searchType=keyword&submitSearch=1&id=5005.

    Comment: Multiple commenters suggested that, in lieu of the proposed attestation, we provide incentives to encourage voluntary participation in SPPC activities, such as counting voluntary participation towards an eligible clinician's performance score for the advancing care information category of MIPS.

    Response: We have considered the commenters' suggestion but conclude that it would be impracticable for two main reasons. First, a key component of the oversight of certified EHR technology is the randomized surveillance of certified EHR technology by ONC-ACBs. To ensure a representative sample, we believe it is important that all health care providers required to use certified EHR technology, whether as EPs, eligible hospitals, or CAHs under the Medicare and Medicaid EHR Incentive Programs or as MIPS eligible clinicians under the advancing care information performance category, be part of the pool from which ONC-ACBs select locations for in-the-field surveillance, not only those who volunteer to participate. Second, as we explained in connection with commenters' concerns regarding the potential impact of SPPC activities on providers, we anticipate that the opportunity for health care providers to participate in randomized surveillance of their certified EHR technology will arise relatively infrequently due to the relatively small number of practices and other locations that would be selected for this type of ONC-ACB surveillance. This means that only a limited number of health care providers would have an opportunity to participate in this way for reasons outside the control of the health care provider. Consequently, health care providers would not have an equal opportunity to participate in these activities, which would make adopting an incentive within the scoring methodology for these activities potentially unfair to providers who are participating in CMS programs but are not selected by the randomized selection process. This would unfairly skew scores in a manner unrelated to a health care provider's performance in a given program. For these reasons, we decline to adopt such an arrangement.

    Comment: Multiple commenters stated that this proposal was premature because ONC has yet to finalize the ONC Health IT Certification Program: Enhanced Oversight and Accountability proposed rule. Commenters urged us to withdraw the proposal until such time as any changes to the ONC Health IT Certification Program have been finalized.

    Response: We recognize that the pendency of the ONC Health IT Certification Program: Enhanced Oversight and Accountability proposed rule, which outlines the policies for ONC direct review of certified health IT, at the time of our proposal may have been challenging for some commenters. However, health care provider engagement in SPPC activities is important regardless of whether a request to assist relates to ONC direct review of certified health IT or ONC-ACB surveillance of certified health IT. As we have explained, we expect health care providers will engage in SPPC activities because doing so is fundamental to ensuring that certified EHR technology performs in a manner that supports the goals of health care providers seeking to meet the requirements of the MIPS and Medicare and Medicaid EHR Incentive Programs. We further believe that the publication of the ONC Enhanced Oversight and Accountability final rule in concert with the flexibilities finalized in this final rule with comment period, as well as the timeline for implementation of these policies, which apply to reporting periods beginning in CY 2017, supports resolution of this concern.

    Comment: A commenter stated that the proposed attestation would compel meaningful EHR users to cooperate with far-ranging or unbounded inquiries into their certified health IT. Other commenters expressed similar concerns and pointed to what they perceived as the broad range of issues that could be subject to ONC's direct review under the ONC Health IT Certification Program: Enhanced Oversight and Accountability proposed rule.

    Response: We reiterate that, whatever form engagement in SPPC activities may take, any conclusions by ONC or ONC-ACBs will necessarily be focused on the performance of the technology. Moreover, as we have explained, health care providers will only be required to attest their engagement in SPPC activities in relation to requests received to assist in ONC direct review of certified capabilities of their health IT that meet (or can be used to meet) the definition of certified EHR technology. Further, because a health care provider's attestation will be retrospective as noted previously, the attestation relates only to acknowledgment if no request was received or the health care provider's cooperation with requests for assistance that have already been received at the time of making the attestation. The attestation requirement does not require that health care providers commit to engaging in unknown future activities.

    Comment: A commenter requested more information about the circumstances that would trigger direct review of certified EHR technology. Separately, the commenter recommended that such review be conducted only as part of an audit of a health care provider's demonstration of meaningful use or an eligible clinician's reporting for the advancing care information performance category.

    Response: ONC determines the requirements for and circumstances under which health IT may be subject to ONC-ACB surveillance or ONC direct review under the ONC Health IT Certification Program. We refer the commenter to the 2015 Edition final rule (80 FR 62601) for a discussion of existing requirements related to the observation of certified health IT by ONC-ACBs and to the ONC Enhanced Oversight and Accountability final rule (scheduled for publication in the Federal Register on October 19, 2016) for a discussion of ONC's direct review activities. To be effective, ONC-ACB surveillance, ONC direct review, and the associated SPPC activities must be timely in order to identify an issue with the certified health IT. If these actions are limited to the timing of retrospective audits of a health care provider's compliance with program requirements, they may not reflect the current implementation of the technology in a production setting where the issue exists. For these reasons, it is not appropriate for a health care provider's cooperation to be limited to the context of a program audit of prior participation.

    Comment: To assist health care providers in complying with the proposed attestation, a commenter recommended that any requests for engagement in SPPC activities be clearly labeled as such so as to differentiate them from other types of communications.

    Response: We acknowledge this commenter's concern that, to support health care providers engaging in SPPC activities, a request to assist should be designed to clearly inform the recipient as to the purpose of the communication and avoid, as much as possible, the request being inadvertently overlooked or unnoticed. We have consulted with ONC and clarify that ONC-ACBs currently initiate contact with health care providers for randomized surveillance by emailing the person or office holder of a practice or organization that is the primary contact for the health IT developer whose product is being surveilled or reviewed. The contact information is supplied by the developer, and ONC-ACBs would not ordinarily contact a health care provider directly unless they are identified by the developer as being the most appropriate point of contact for a practice location. However, we note that in addition to clarity on the point of contact, clarity within the request itself is essential for a health care provider engaging in SPPC activities. This relates not only to the purpose of the request, but also to the distinction between the mandatory and optional SPPC activities, which differ based on whether the request is for ONC direct review of certified health IT or for ONC-ACB surveillance of certified health IT.

    As program guidance is developed, CMS and ONC will work to ensure that requests from ONC and ONC-ACBs provide clear context and guidance when health care providers are asked to engage in SPPC activities as part of their participation in CMS programs.

    Comment: A commenter stated that some EHR contracts specifically prohibit customers or users of certified EHR technology from providing ONC or ONC-ACBs with access to the technology or data.

    Response: Developers of certified health IT are required, in order to obtain and maintain certification of their health IT, to cooperate with ONC program activities such as ONC direct review or ONC-ACB surveillance of certified health IT, which includes furnishing information to ONC or an ONC-ACB that is necessary to the performance of these activities (see 80 FR 62716-18). Access to certified health IT that is under observation by ONC or an ONC-ACB, together with production data relevant to the certified capability or capabilities being assessed, is essential to this process. For example, in the 2015 Edition final rule, ONC stated that a health IT developer must furnish to the ONC-ACB, upon request, accurate and complete customer lists, user lists, and other information that the ONC-ACB determines is necessary to enable it to carry out its surveillance responsibilities (80 FR 62716). If a health care provider reasonably believes that it is unable to engage in SPPC activities due to these or other actions of its health IT developer, the health care provider should notify ONC or the ONC-ACB, as applicable. If the developer has indeed limited, discouraged, or prevented the health care provider from cooperating in good faith with a request to assist in ONC direct review, the health care provider would not be required to cooperate with such activities unless and until the developer removed the contractual restrictions or other impediments.

    Comment: A commenter expressed concern about sharing data with ONC or an ONC-ACB without a clear description of the data to be accessed.

    Response: The nature of the data that will need to be accessed by ONC or an ONC-ACB will be made clear to the health care provider at the time that their cooperation is sought. To alleviate any concerns commenters may have, we will work with ONC to provide guidance to ONC-ACBs and to providers, as necessary, to address issues such as the communication protocols to be used when requesting a health care provider's engagement in SPPC activities.

    Comment: Several commenters requested additional guidance on specific actions health care providers would be expected to take to engage in SPPC activities and cooperate in good faith with a request to assist if so requested. One commenter recommended that CMS and ONC create a checklist tool that clinicians could use to track their compliance with the required activities.

    Response: As specified in the proposed rule, engaging in SPPC activities and cooperating in good faith may simply require the provision of information, such as in response to telephone inquiries and written surveys, about the performance of the certified EHR technology being used. Engagement in SPPC activities and cooperation in good faith might also involve facilitating requests (from ONC or ONC-ACBs) for access to the certified EHR technology (and related data) as deployed in the provider's production environment, and demonstrating capabilities and other aspects of the technology that are the focus of the ONC-ACB surveillance or ONC direct review.

    Because assistance with ONC-ACB surveillance or ONC direct review will typically be carried out at a practice or facility level, we expect that it will be rare for a health care provider to be directly involved in the conduct of many of these activities, including in-the-field observations of certified EHR technology capabilities. To comply with the attestation requirements, a health care provider should establish to their own satisfaction that appropriate processes and policies are in place in their practice to ensure that all relevant personnel, such as a practice manager or IT officer, are aware of the health care provider's obligation to engage in SPPC activities related to requests to assist in ONC direct review of certified health IT and the health care provider's option to engage in SPPC activities related to requests to assist in ONC-ACB surveillance of certified health IT. This includes understanding the requirement to cooperate in good faith with a request to assist in ONC direct review if received. Health care providers should also ensure that appropriate processes and policies are in place for the practice to document all requests and communications concerning SPPC activities as they would for other requirements of CMS programs in which they participate. We note that for a health care provider participating in a CMS program as an individual, if that health care provider practices at multiple locations or switches locations throughout the course of a year, the eligible clinician or EP would only need to make inquiries about any requests to assist in ONC direct review of certified health IT received during the period in which they worked at a given practice.

    We acknowledge the commenter's desire for a checklist tool to provide greater certainty for clinicians. However, as ONC explained in the 2015 Edition final rule, an evaluation of certified health IT in a production environment may require a variety of methodologies and approaches (80 FR 62709). Individual health care providers may have different preferences and should have the flexibility to work with ONC or an ONC-ACB to identify an approach that is both effective and convenient. Because the specific actions required will be addressed on a case-by-case basis, the development of a checklist tool may not be feasible. Rather, as noted previously, if any request is made, ONC or an ONC-ACB will work directly with the health care provider to provide clear guidance on the actions needed to assist with the request. The health care provider would then retain any such documentation concerning the request for their records as they would for other similar requirements in CMS programs.

    Comment: A commenter asked how ONC-ACBs will identify themselves and how a health care provider will be able to verify that it is not dealing with an imposter.

    Response: Each health IT developer contracts with one or more ONC-ACBs to provide certification services. As such, health IT developers should be familiar with the processes used by their ONC-ACB(s) and have existing practices for communicating with the personnel of their ONC-ACB(s). A health care provider can, on receipt of a request to assist an ONC-ACB, contact their health IT developer and request information about the identity of the ONC-ACB personnel that will carry out the activities. Health care providers should, before providing access to their facility or the certified health IT, request that the ONC-ACB personnel provide appropriate identification that matches the information about the ONC-ACB provided by the provider's certified health IT developer.

    Comment: Several commenters requested that we elaborate on the requirements for engaging in SPPC activities “in good faith” and for permitting timely access to certified EHR technology.

    Response: Health care providers are required to attest to engaging in SPPC activities, which requires that they cooperate in good faith and in a timely manner with a request to assist in ONC direct review of certified health IT if such a request is received. A health care provider may also optionally attest to engaging in SPPC activities, including having cooperated in good faith, in response to a request to assist an ONC-ACB with surveillance of certified health IT. This includes cooperating in a manner that aids and assists ONC or an ONC-ACB to perform ONC direct review or ONC-ACB surveillance activities, to the extent that such cooperation is practicable and not unduly burdensome to the provider. As previously mentioned, the particular needs of any request for assistance from ONC or an ONC-ACB may vary depending on a wide range of factors. In addition, “in good faith” is necessarily dependent upon the particular facts and circumstances of the health care provider who attests. For example, a request for assistance may relate to a capability that the health care provider has not enabled in their EHR because it is not needed for their particular practice, and that might be costly, time consuming, or otherwise unreasonable for the provider to enable solely for the purposes of ONC direct review of that function. In such a case, the health care provider who communicates these limitations to ONC, and maintains documentation of the request and of these circumstances related to their practice, may be found to have cooperated in good faith based on this documentation. However, if the health care provider received such a request, provided no response, and did not retain documentation of these circumstances, they may be found not to have cooperated in good faith.

    Comment: One commenter asked us to clarify that a health care provider will have satisfied the requirements of the proposed attestation in the event that the health care provider was never approached by ONC or an ONC-ACB with a request for assistance during the relevant reporting period.

    Response: In the circumstances the commenter describes, the health care provider would be able to attest to both the mandatory attestation (related to ONC direct review) and the optional attestation (related to ONC-ACB surveillance) on the basis that they acknowledge the policy. In other words, for the mandatory attestation, the health care provider that receives no request related to ONC direct review could successfully meet the attestation requirement by attesting that they acknowledge the requirement to cooperate in good faith with all requests for assistance with ONC direct review of their certified EHR technology. Likewise, a health care provider that did not receive a request for assistance with ONC-ACB surveillance during the reporting year but still seeks to attest to the optional attestation would attest that they are aware of the option to cooperate in good faith with all requests for assistance in ONC-ACB surveillance. We have revised the regulation text provisions at §§ 495.4, 495.40(a)(2)(i)(H), 495.40(b)(2)(i)(H), and 414.1375(b)(3)(i) to state that a health care provider engages in SPPC activities by cooperating in good faith with the ONC-ACB surveillance or ONC direct review of its certified EHR technology, to the extent that the health care provider receives a request from an ONC-ACB or ONC during the relevant reporting period; and that in the absence of any requests being made during the reporting period, the health care provider would demonstrate their engagement in the SPPC activities simply by attesting that they are aware of the SPPC policy.

    Comment: Several commenters requested clarification regarding the documentation that would be required to demonstrate compliance with the terms of the attestation so that health care providers could plan and prepare for an audit of this requirement. Among other topics, commenters requested guidance on expected documentation requirements related to a health care provider's responsiveness to requests for engagement in SPPC activities and the extent of cooperation required.

    Response: We acknowledge commenters' concerns about required documentation in the event of an audit. We clarify that we will provide guidance to auditors relating to this final rule with comment period and the attestation process in a manner similar to the guidance provided for other requirements under current CMS programs. This instruction includes requiring auditors to work closely with health care providers on identifying the appropriate supporting documentation applicable to the health care provider's individual case. We further stress that audit determinations are made on a case-by-case basis, which allows us to give individual consideration to each health care provider. We believe that such case-by-case review will allow us to adequately account for the varied circumstances that may be relevant.

    Comment: Commenters requested clarification concerning the effective date of the attestation requirement and, more specifically, the period to which an attestation that a health care provider engaged in SPPC activities would apply. Several commenters expressed concerns related to the timing of the attestation, noting that health care providers may submit attestations for reporting periods that have already begun or that will have begun prior to the effective date of this final rule with comment period.

    Response: We understand the commenters' concerns and are finalizing the requirement to attest to engagement in SPPC activities for health care providers for MIPS performance periods or EHR reporting periods beginning on or after January 1, 2017. The requirement includes only requests to engage in SPPC activities received after the effective date of this final rule with comment period. In other words, if a health care provider receives a request from ONC or an ONC-ACB to engage in SPPC activities before the effective date of this final rule with comment period, the attestation requirement will not apply to that request, and the health care provider is not required to cooperate with the request.

    After review and consideration of public comment, we are finalizing revisions to the definition of a meaningful EHR user at §§ 495.4 and 414.1305 to include “engaging in activities related to supporting providers with the performance of certified EHR technology.”

    We are finalizing modifications to the attestation requirements at § 495.40(a)(2)(i)(H) and (b)(2)(i)(H) to require an EP, eligible hospital or CAH to attest that they engaged in SPPC activities by attesting that they: (1) Acknowledge the requirement to cooperate in good faith with ONC direct review of their health information technology certified under the ONC Health IT Certification Program if a request to assist in ONC direct review is received; and (2) if requested, cooperated in good faith with ONC direct review of their health information technology certified under the ONC Health IT Certification Program, as authorized by 45 CFR part 170, subpart E, to the extent that such technology meets (or can be used to meet) the definition of CEHRT, including by permitting timely access to such technology and demonstrating its capabilities as implemented and used by the EP, eligible hospital, or CAH in the field.

    Additionally, we are finalizing that, optionally, the EP, eligible hospital, or CAH may also attest that they engaged in SPPC activities by attesting that they: (1) Acknowledge the option to cooperate in good faith with ONC-ACB surveillance of their health information technology certified under the ONC Health IT Certification Program if a request to assist in ONC-ACB surveillance is received; and (2) if requested, cooperated in good faith with ONC-ACB surveillance of their health information technology certified under the ONC Health IT Certification Program, as authorized by 45 CFR part 170, subpart E, to the extent that such technology meets (or can be used to meet) the definition of CEHRT, including by permitting timely access to such technology and demonstrating its capabilities as implemented and used by the EP, eligible hospital, or CAH in the field.

    We are also finalizing at § 414.1375(b)(3) that the same attestations be made by all eligible clinicians under the advancing care information performance category of MIPS, including eligible clinicians who report on the advancing care information performance category as part of an APM Entity group under the APM scoring standard, as discussed in section II.E.5.h. of this final rule with comment period (see 81 FR 28170-71).

    b. Support for Health Information Exchange and the Prevention of Information Blocking

    To prevent actions that block the exchange of information, section 106(b)(2)(A) of the MACRA amended section 1848(o)(2)(A)(ii) of the Act to require that, to be a meaningful EHR user, an EP must demonstrate that he or she has not knowingly and willfully taken action (such as to disable functionality) to limit or restrict the compatibility or interoperability of certified EHR technology. Section 106(b)(2)(B) of MACRA made corresponding amendments to section 1886(n)(3)(A)(ii) of the Act for eligible hospitals and, by extension, under section 1814(l)(3) of the Act for CAHs. Sections 106(b)(2)(A) and (B) of the MACRA provide that the manner of this demonstration is to be through a process specified by the Secretary, such as the use of an attestation. Section 106(b)(2)(C) of the MACRA states that the demonstration requirements in these amendments shall apply to meaningful EHR users as of the date that is 1 year after the date of enactment, which would be April 16, 2016.

    As legislative background, on December 16, 2014, in an explanatory statement accompanying the Consolidated and Further Continuing Appropriations Act,5 the Congress advised ONC to take steps to “decertify products that proactively block the sharing of information because those practices frustrate congressional intent, devalue taxpayer investments in certified EHR technology, and make certified EHR technology less valuable and more burdensome for eligible hospitals and eligible providers to use.” 6 The Congress also requested a detailed report on health information blocking (referred to in this final rule with comment period as “the Information Blocking Report”). In the report, which was submitted to the Congress on April 10, 2015,7 ONC concluded from its experience and available evidence that some persons and entities—including some health care providers—are knowingly and unreasonably interfering with the exchange or use of electronic health information in ways that limit its availability and use to improve health and health care.8

    5 Public Law 113-235.

    6 160 Cong. Rec. H9047, H9839 (daily ed. Dec. 11, 2014) (explanatory statement submitted by Rep. Rogers, chairman of the House Committee on Appropriations, regarding the Consolidated and Further Continuing Appropriations Act, 2015).

    7 ONC, Report to Congress on Health Information Blocking (April 10, 2015), available at https://www.healthit.gov/sites/default/files/reports/info_blocking_040915.pdf.

    8 Id. at 33.

    We explained in the proposed rule that the demonstration required by section 106(b)(2) of the MACRA must provide substantial assurance not only that certified EHR technology was connected in accordance with applicable standards during the relevant EHR reporting period, but that the health care provider acted in good faith to implement and use the certified EHR technology in a manner that supported and did not interfere with the electronic exchange of health information among health care providers and with patients to improve quality and promote care coordination (81 FR 28172). We proposed that such a demonstration be made through an attestation (referred to in this section of the preamble as the “information blocking attestation”), which would comprise three statements related to health information exchange and information blocking, which were described in the proposed rule (81 FR 28172). Accordingly, we proposed to revise the definition of a meaningful EHR user at § 495.4 and to revise the corresponding attestation requirements at § 495.40(a)(2)(i)(I) and (b)(2)(i)(I) to require this attestation for all EPs, eligible hospitals, and CAHs under the Medicare and Medicaid EHR Incentive Programs, beginning with attestations submitted on or after April 16, 2016. Further, we proposed this attestation requirement (at § 414.1375(b)(3)(ii)) for all eligible clinicians under the advancing care information performance category of MIPS, including eligible clinicians who report on the advancing care information performance category as part of an APM Entity group under the APM scoring standard, as discussed in section II.E.5.h of the proposed rule (81 FR 28181).

    We invited public comment on this proposal, including whether the proposed attestation statements could provide the Secretary with adequate assurances that an eligible clinician, EP, eligible hospital, or CAH has complied with the statutory requirements for information exchange. We also encouraged public comment on whether there are additional facts or circumstances to which eligible clinicians, EPs, eligible hospitals, and CAHs should be required to attest, or whether there is additional information that they should be required to report.

    Comment: A number of commenters expressed strong support for this proposal and urged us to finalize the information blocking attestation as proposed. Commenters anticipated that such an attestation would discourage information blocking; encourage more robust sharing of information among all members of a patient's care team; increase demand for more open and interoperable health IT platforms and systems; and strengthen efforts to enhance health care quality and value, including the capturing and sharing of information about quality, costs, and outcomes. One commenter stated that the information blocking attestation would also help independent physicians compete by deterring predatory information sharing policies or practices, especially by large health systems or hospitals.

    Many commenters expressed partial support for this proposal but voiced concerns about the particular content or form of the information blocking attestation as proposed. Several commenters stated that the language of the attestation was unclear and should provide more detail regarding the specific actions to which health care providers would be required to attest. Conversely, several commenters (including some of the same commenters) believed that the language of the attestation was too prescriptive. Some commenters recommended revising or removing one or more of the three statements that comprise the attestation. A few commenters suggested that we finalize only the first statement—which mirrors the statutory language in section 106(b)(2) of the MACRA—and contended that the other statements were unnecessary or, alternatively, went beyond what section 106(b)(2) requires.

    Some commenters were opposed in principle to requiring health care providers to attest to any statement regarding information blocking. Most of these commenters insisted that such a requirement would impose unnecessary burdens or unfair obligations on health care providers, who, in the view of the commenters, are seldom responsible for information blocking.

    The majority of commenters, whether they supported or opposed the proposal, stressed that certain factors that prevent interoperability and the ability to successfully exchange and use electronic health information are beyond the ability of a health care provider to control. Many of these commenters stated that EHR vendors should be required to submit an information blocking attestation because they have greater control over these factors and, in the experience of some commenters, are more likely to engage in information blocking.

    Response: After consideration of the comments as well as the statutory provisions cited above, and in consultation with ONC, we believe the proposed attestation requirement is an appropriate and effective means to implement the demonstration required by section 106(b)(2) of the MACRA; we are therefore finalizing this requirement as proposed, as discussed in greater detail below and in our responses to specific comments that follow.

    As many commenters recognized, the information blocking concerns expressed by Congress are serious and reflect a systemic problem: A growing body of evidence establishes that persons and entities—including some health care providers—have strong incentives to unreasonably interfere with the exchange and use of electronic health information, undermining federal programs and investments in the meaningful use of certified EHR technology to improve health and the delivery of care.9 While effectively addressing this problem will require additional and more comprehensive measures,10 section 106(b)(2) of the MACRA represents an important first step towards increasing accountability for certain types of information blocking in the specific context of meaningful EHR users.

    9See, for example, Julia Adler-Milstein and Eric Pfeifer, Information Blocking: Is it occurring and what policy strategies can address it?, Milbank Quarterly (forthcoming Mar 2017) (reporting results of national survey of health information leaders in which 25 percent of respondents experienced routine information blocking by hospitals and health systems and over 50 percent of respondents experienced routine information blocking by EHR vendors); American Society of Clinical Oncology, Barriers to interoperability and information blocking (2015), http://www.asco.org/sites/www.asco.org/files/position_paper_for_clq_briefing_09142015.pdf (describing a growing number of reports from members concerning information blocking and stating that preventing these practices “is critically important to ensuring that every patient with cancer receives the highest quality health care services and support”); David C. Kendrick, Statement to the Senate, Committee on Health, Education, Labor, and Pensions, Achieving the promise of health information technology: information blocking and potential solutions, Hearing (Jul 23, 2015), available at http://www.help.senate.gov/hearings/achieving-the-promise-of-health-information-technology-information-blocking-and-potential-solutions (describing information blocking as “intentional interruption or prevention of interoperability” by providers or EHR vendors and stating “we have so many specific experiences with inappropriate data blocking . . . that we have created a nomenclature [to classify the most common types].”); David C. Kibbe, Statement to Senate, Committee on Health, Education, Labor, and Pensions, Achieving the promise of health information technology: information blocking and potential solutions, Hearing (Jul 23, 2015), available at http://www.help.senate.gov/hearings/achieving-the-promise-of-health-information-technology-information-blocking-and-potential-solutions (testifying that despite progress in interoperable health information exchange, “information blocking by health care provider organizations and their EHRs, whether intentional or not, is still a problem”); H.R. 6, 114th Cong. § 3001 (as passed by House of Representatives, July 10, 2015) (prohibiting information blocking and providing enforcement mechanisms, including civil monetary penalties and decertification of products); see also H.R. Rep. No. 114-190, pt. 1, at 126 (2015) (reporting that provisions of H.R. 6 “would refocus national efforts on making systems interoperable and holding individuals responsible for blocking or otherwise inhibiting the flow of patient information throughout our healthcare system.”); Connecticut Public Act No. 15-146 (enacted June 30, 2015) (making information blocking an unfair trade practice, authorizing state attorney general to bring civil enforcement actions for penalties and punitive damages); ONC, Report to Congress on Health Information Blocking (April 10, 2015), available at https://www.healthit.gov/sites/default/files/reports/info_blocking_040915.pdf (“[B]ased on the evidence and knowledge available, it is apparent that some health care providers and health IT developers are knowingly interfering with the exchange or use of electronic health information in ways that limit its availability and use to improve health and health care. This conduct may be economically rational for some actors in light of current market realities, but it presents a serious obstacle to achieving the goals of the HITECH Act and of health care reform.”)

    10See ONC, FY 2017: Justification of Estimates for Appropriations Committee, https://www.healthit.gov/sites/default/files/final_onc_cj_fy_2017_clean.pdf (2016), Appendix I (explaining that current law does not directly prohibit or provide an effective means to investigate and address information blocking by EHR vendors, health care providers, and other persons and entities, and proposing that Congress prohibit and prescribe appropriate penalties for these practices, including civil monetary penalties and program exclusion).

    The proposed information blocking attestation consists of three statements that contain several specific representations about a health care provider's implementation and use of certified EHR technology. These representations, taken together, will enable the Secretary to infer with reasonable confidence that the attesting health care provider acted in good faith to support the appropriate exchange of electronic health information and therefore did not knowingly and willfully limit or restrict the compatibility or interoperability of certified EHR technology.

    We believe that this level of specificity is necessary and that a more generalized attestation would not provide the necessary assurances described above. This does not mean, however, that the information blocking attestation imposes unnecessary or unreasonable requirements on health care providers. To the contrary, we have carefully tailored the attestation to the demonstration required by section 106(b)(2) of the MACRA. In particular, the attestation focuses on whether a health care provider acted in good faith to implement and use certified EHR technology in a manner that supports interoperability and the appropriate exchange of electronic health information. Recognizing that a variety of factors may prevent the exchange or use of electronic health information, and consistent with the focus of section 106(b)(2) on actions that are knowing and willful, this good faith standard takes into account health care providers' individual circumstances and does not hold them accountable for consequences they cannot reasonably influence or control.

    For these and the additional reasons set forth in our responses to comments immediately below, and subject to the clarifications therein, we are finalizing this attestation requirement as proposed.

    Comment: A number of commenters, several of whom expressed support for our proposal, regarded the language of the attestation as quite broad and stated that additional guidance may be needed to enable health care providers to understand the actions to which they would be required to attest.

    Response: We agree that health care providers must be able to understand and comply with program requirements. For this reason, the information blocking attestation consists of three statements related to health information exchange and the prevention of health information blocking. These statements—which we are finalizing at § 495.40(a)(2)(i)(I) for EPs, § 495.40(b)(2)(i)(I) for eligible hospitals and CAHs, and § 414.1375(b)(3)(ii) for eligible clinicians—contain specific representations about a health care provider's implementation and use of certified EHR technology. We believe that these statements, taken together, communicate with appropriate specificity the actions health care providers must attest to in order to demonstrate that they have complied with the requirements established by section 106(b)(2) of the MACRA. To provide further clarity, we set forth and explain each of these statements in turn below.

    Statement 1: A health care provider must attest that it did not knowingly and willfully take action (such as to disable functionality) to limit or restrict the compatibility or interoperability of certified EHR technology.

    This statement mirrors the language of section 106(b)(2) of the MACRA. We note that except for one illustrative example (concerning actions to disable functionality), the above statement does not contain specific guidance as to the types of actions that are likely to “limit or restrict” the compatibility or interoperability of certified EHR technology, nor the circumstances in which a health care provider who engages in such actions does so “knowingly and willfully.” The information blocking attestation supplements the foregoing statement with two more detailed statements concerning the specific actions a health care provider took to support interoperability and the exchange of electronic health information.

    Statement 2: A health care provider must attest that it implemented technologies, standards, policies, practices, and agreements reasonably calculated to ensure, to the greatest extent practicable and permitted by law, that the certified EHR technology was, at all relevant times: (1) Connected in accordance with applicable law; (2) compliant with all standards applicable to the exchange of information, including the standards, implementation specifications, and certification criteria adopted at 45 CFR part 170; (3) implemented in a manner that allowed for timely access by patients to their electronic health information (including the ability to view, download, and transmit this information); and (4) implemented in a manner that allowed for the timely, secure, and trusted bi-directional exchange of structured electronic health information with other health care providers (as defined by 42 U.S.C. 300jj(3)), including unaffiliated health care providers, and with disparate certified EHR technology and vendors.

    This statement focuses on the manner in which a health care provider implemented its certified EHR technology during the relevant reporting period, which is directly relevant to whether the health care provider took any actions to limit or restrict the compatibility or interoperability of the certified EHR technology. By attesting to this statement, a health care provider represents that it acted in good faith to implement its certified EHR technology in a manner that supported—and did not limit or restrict—access to and the exchange of electronic health information, to the extent that such access or exchange was appropriate (that is, practicable under the circumstances and authorized, permitted, or required by law). More specifically, the health care provider represents that it took reasonable steps (including working with its health IT developer and others as necessary) to verify that its certified EHR technology was connected (that is, implemented and configured) in accordance with applicable standards and law.

    In addition to verifying that certified EHR technology was connected and accessible during the relevant reporting period, a health care provider must represent that it took reasonable steps to implement corresponding technologies, standards, policies, practices, and agreements to enable the use of certified EHR technology, including by patients and by other health care providers, and not to limit or restrict appropriate access to or use of information in the health care provider's certified EHR technology. For example, actions to limit or restrict compatibility or interoperability could include implementing or configuring certified EHR technology so as to limit access to certain types of data elements or to the “structure” of the data, or implementing certified EHR technology in ways that limit the types of persons or entities that may be able to access and exchange information, or the types of technologies through which they may do so.

    Statement 3: A health care provider must attest that it responded in good faith and in a timely manner to requests to retrieve or exchange electronic health information, including from patients, health care providers (as defined by 42 U.S.C. 300jj(3)), and other persons, regardless of the requestor's affiliation or technology vendor.

    This third and final statement builds on a health care provider's representations concerning the manner in which its certified EHR technology was implemented by focusing on how the health care provider actually used the technology during the relevant reporting period. By attesting to this statement, a health care provider represents that it acted in good faith to use the certified EHR technology to support the appropriate exchange and use of electronic health information. This includes, for example, taking reasonable steps to respond to requests to access or exchange information, provided that such access or exchange is appropriate, and not unreasonably discriminating on the basis of the requestor's affiliation, technology vendor, or other characteristics, as described in the statement.

    We provide further discussion and analysis of the foregoing statements and their application in our responses to the specific comments summarized in the remainder of this section. We believe that these statements, taken together, provide a clear and appropriately detailed description of a health care provider's obligations under section 106(b)(2) of the MACRA, will enable them to demonstrate compliance to the satisfaction of the Secretary, and will promote fair and consistent application of program requirements across all attesting health care providers.

    Comment: Several commenters asked us to identify the specific actions and circumstances that would support a finding that a health care provider has knowingly and willfully limited or restricted the compatibility or interoperability of certified EHR technology. Some commenters inquired whether this determination would turn on a health care provider's individual circumstances or other case-by-case considerations, such as a health care provider's practice size, setting, specialty, and level of technology adoption. Commenters also asked whether other circumstances could justify limitations or restrictions on the compatibility or interoperability of certified EHR technology. For example, a commenter asked whether an office-based clinic that periodically turns its computer network off overnight to perform system maintenance would be deemed to have limited the interoperability of its certified EHR technology on the basis that other health care providers might be unable to request and retrieve records during that time. Commenters gave other potential justifications for blocking access to or the exchange of information, such as privacy or security concerns or the need to temporarily block the disclosure of sensitive test results to allow clinicians who order tests an opportunity to discuss the results with their patients prior to sharing the results with other health care providers.

    One commenter suggested that we approach this question in the manner described in the Information Blocking Report, which focuses on whether actions that interfere with the exchange or use of electronic health information have any objectively reasonable justification.

    Response: The compatibility or interoperability of certified EHR technology may be limited or restricted in ways that are too numerous and varied to catalog. While section 106(b)(2) of the MACRA specifically mentions actions to disable the functionality of certified EHR technology, other actions that are likely to interfere with the exchange or use of electronic health information could limit or restrict compatibility or interoperability. For example, the Information Blocking Report describes certain categories of business, technical, and organizational practices that are inherently likely to interfere with the exchange or use of electronic health information.11 These practices include but are not limited to:

    11 ONC, Report to Congress on Health Information Blocking (April 10, 2015) at 13, available at https://www.healthit.gov/sites/default/files/reports/info_blocking_040915.pdf.

    • Contract terms, policies, or other business or organizational practices that restrict individuals' access to their electronic health information or restrict the exchange or use of that information for treatment and other permitted purposes.

    • Charging prices or fees that make exchanging and using electronic health information cost prohibitive.

    • Implementing certified EHR technology in non-standard ways that are likely to substantially increase the costs, complexity, or burden of sharing electronic health information (especially when relevant interoperability standards have been adopted by the Secretary).

    • Implementing certified EHR technology in ways that are likely to “lock in” users or electronic health information (including using certified EHR technology to inappropriately limit or steer referrals).

    Such actions would be contrary to section 106(b)(2) only when engaged in “knowingly and willfully.” We believe the purpose of this requirement is to ensure that health care providers are not penalized for actions that are inadvertent or beyond their control.

    To illustrate these concepts, we consider several hypothetical scenarios raised by the commenters. First, we consider the situation suggested by one commenter in which a health care provider disables its computer network overnight to perform system maintenance. In this situation, the health care provider knows that the natural and probable consequence of its actions will be to prevent access to information in the certified EHR technology and in this way limit and restrict the interoperability of the technology. However, we recognize that health IT requires maintenance to ensure that capabilities function properly, including in accordance with applicable standards and law. We also appreciate that in many cases it may not be practicable to implement redundant capabilities and systems for all functionality within certified EHR technology, especially for physician practices and other health care providers with comparatively fewer health IT resources and less expertise. Assuming that a health care provider acts in good faith to disable functionality for the purpose of performing system maintenance, it is unlikely that the health care provider would knowingly and willfully limit or restrict the compatibility or interoperability of the certified EHR technology. We note that our assumption that the health care provider acted in good faith presupposes that it did not disable functionality except to the extent and for the duration necessary to ensure the proper maintenance of its certified EHR technology, and that it took reasonable steps to minimize the impact of such maintenance on the ability of patients and other health care providers to appropriately access and exchange information, such as by scheduling maintenance overnight and responding to any requests for access or exchange once the maintenance has been completed and it is otherwise practicable to do so.

    Next, we consider the situation in which a health care provider blocks access to information in its certified EHR technology due to concerns related to the security of the information. Depending on the circumstances, certain access restrictions may be reasonable and necessary to protect the security of information maintained in certified EHR technology. In contrast, restrictions that are unnecessary or unreasonably broad could constitute a knowing and willful restriction of the compatibility or interoperability of the certified EHR technology. Because of the complexity of these issues, determining whether a health care provider's actions were reasonable would require additional information about the health care provider's actions and the circumstances in which they took place.

    As a final example, we consider whether it would be permissible for a health care provider to restrict access to a patient's sensitive test results until the clinician who ordered the tests, or another designated health care professional, has had an opportunity to review and appropriately communicate the results to the patient. We assume for purposes of this example that, consistent with the HIPAA Privacy Rule, the restriction does not apply to the patient herself or to the patient's request in writing to send this information to any other person the patient designates. With that assumption and under the circumstances we have described, it is likely that the health care provider is knowingly restricting interoperability. We believe that the restriction may be reasonable so long as the health care provider reasonably believes, based on its relationship with the particular patient and its best clinical judgment, that the restriction is necessary to protect the health or wellbeing of the patient. We note that our analysis would be different if the restriction were not based on a health care provider's individualized assessment of the patient's best interests and instead reflected a blanket policy to block access to test results until released by the ordering physician. Similarly, while clinical judgment and the health care provider-patient relationship are entitled to substantial deference, they may not be used as a pretext for limiting or restricting the compatibility or interoperability of certified EHR technology.

    The examples provided in this section of the final rule with comment period are intended to be illustrative. We reiterate the need to consider the unique facts and circumstances in each case in order to determine whether a health care provider knowingly and willfully limited or restricted the compatibility or interoperability of certified EHR technology.

    Comment: One commenter asked whether the requirement that certified EHR technology comply with federal standards precludes the use of other standards for the exchange of electronic health information.

    Response: In general, while certified EHR technology must be connected in accordance with applicable federal standards, this requirement does not preclude the use of other standards or capabilities, provided the use of such standards or capabilities does not limit or restrict the compatibility or interoperability of the certified EHR technology.

    Comment: Several commenters requested that we clarify our expectations for timeliness of access to or exchange of information.

    Response: As we have explained, whether a health care provider has knowingly and willfully limited or restricted the interoperability of certified EHR technology will depend on the relevant facts and circumstances. While for this reason we decline to adopt any bright-line rules, we reiterate that a health care provider must attest that it responded in good faith and in a timely manner to requests to retrieve or exchange electronic health information. What will be “timely” will of course vary based on relevant factors such as a health care provider's level of technology adoption and the types of information requested. For requests from patients, we note that while the HIPAA Privacy Rule provides that a covered entity may take up to 30 days to respond to a patient's written request for access to his or her PHI maintained by the covered entity, it is expected that the use of technology will enable the covered entity to fulfill the individual's request in far fewer than 30 days.12 Where information requested or directed by a patient can be readily provided using the capabilities of certified EHR technology, access should in most cases be immediate and in all cases as expeditious as is practicable under the circumstances.

    12 HHS Office for Civil Rights, Individuals' Right under HIPAA to Access their Health Information 45 CFR 164.524, http://www.hhs.gov/hipaa/for-professionals/privacy/guidance/access/index.html (last accessed Sept. 6, 2016).

    Comment: Many commenters stated that health care professionals and organizations should not be held responsible for adherence to health IT certification standards or other technical details of health IT implementation that are beyond their expertise or control. According to these commenters, requiring health care providers to attest to these technical implementation details would unfairly place them at financial risk for factors that are beyond the scope of their medical training. Additionally, many commenters took the position that EHR vendors are in the best position to ensure that certified EHR technology is connected in accordance with applicable law and compliant with applicable standards, implementation specifications, and certification criteria.

    Response: We reiterate that a health care provider will not be held accountable for factors that it cannot reasonably influence or control, including the actions of EHR vendors. Nor do we expect health care providers themselves to have any special technical expertise or to personally tend to the technical details of their health IT implementations. We do expect, however, that a health care provider will take reasonable steps to verify that the certified EHR technology is connected (that is, implemented and configured) in accordance with applicable standards and law and in a manner that will allow the health care provider to attest to having satisfied the conditions described in the information blocking attestation. In this respect, a health care provider's obligations include communicating these requirements to health IT developers, implementers, and other persons who are responsible for implementing and configuring the health care provider's certified EHR technology. In addition, the health care provider should obtain adequate assurances from these persons to satisfy itself that its certified EHR technology was connected in accordance with applicable standards and law and in a manner that will enable the health care provider to demonstrate that it has not knowingly and willfully taken action to limit or restrict the compatibility or interoperability of certified EHR technology.

    Comment: Several commenters supported the attestation's emphasis on the bi-directional exchange of structured electronic health information. Multiple commenters suggested that this requirement would expand access to relevant information by members of a patient's care team, allowing them to deliver more effective and comprehensive care, enhance health outcomes, and contribute directly to the goals of quality and affordability. As an example, commenters stated that the bi-directional exchange of information among pharmacists and other clinicians can provide important information for comprehensive medication management.

    Other commenters opposed or raised concerns regarding this aspect of our proposal, stating that bi-directional information exchange may not be feasible for many health care providers or may raise a variety of technical and operational challenges and potential privacy or security concerns.

    Some commenters requested that CMS clarify the term “bi-directional exchange” and the actions a health care provider would be expected to take to satisfy this aspect of the attestation. One commenter inquired specifically whether bi-directional exchange could include using a health information exchange or other intermediary to connect disparate certified EHR technology so that users could both send and receive information in an interoperable manner. If so, the commenter asked whether a health care provider would be expected to participate in multiple arrangements of this kind (and, if so, how many). Multiple commenters stated that it is not appropriate to allow bi-directional exchange in all circumstances and that privacy, security, safety, and other considerations require health care providers to restrict the types of information that the certified EHR technology will accept and the persons or other sources of that information.

    Response: We appreciate that bi-directional exchange of information presents challenges, including the need to validate the authenticity, accuracy, and integrity of data received from outside sources, mitigating potential privacy and security risks, and overcoming technical, workflow, and other related challenges. We also acknowledge that accomplishing bi-directional exchange may be challenging for certain health care providers or for certain types of information or use cases. However, a significant number of health care providers are already exchanging some types of electronic health information in a bi-directional manner. Based upon data collected in 2014, approximately one-fifth of non-federal acute care hospitals electronically sent, received, found (queried), and were able to easily integrate summary of care records into their EHRs.13 We also note that meaningful EHR users are required to use certified EHR technology that has the capacity to “exchange electronic health information with, and to integrate such information from other sources,” as required by the 2014 and 2015 Edition Base EHR definitions at 45 CFR 170.102 and corresponding certification criteria, such as the transitions of care criteria (45 CFR 170.314(b)(1) and (2) (2014 Edition) and 45 CFR 170.315(b)(2) (2015 Edition)).

    13 Charles D, Swain M, Patel V. (August 2015). Interoperability among U.S. Non-federal Acute Care Hospitals. ONC Data Brief, No. 25. ONC: Washington DC. https://www.healthit.gov/sites/default/files/briefs/onc_databrief25_interoperabilityv16final_081115.pdf. Similar data for office-based physicians will be available in 2016. ONC, Request for Information Regarding Assessing Interoperability for MACRA, 81 FR 20651 (April 8, 2016).

    We expect these trends to increase as standards and technologies improve and as health care providers, especially those participating in Advanced APMs, seek to obtain more complete and accurate information about their patients with which to coordinate care, manage population health, and engage in other efforts to improve quality and value.

    We clarify that bi-directional exchange may include using certified EHR technology with a health information exchange or other intermediary to connect disparate certified EHR technology so that users could both send and receive information in an interoperable manner. Whether a health care provider could participate in arrangements of this kind, or multiple arrangements, would depend on its particular circumstances, including its technological capabilities and sophistication, its financial resources, its role within the local health care community, and the availability of state or regional health information exchange infrastructure, among other relevant factors. A health care provider is not obligated to participate in every information sharing arrangement or to accommodate every request to connect via a custom interface. On the other hand, a health care provider with substantial resources that refuses to participate in any health information exchange efforts might invite scrutiny if, combined with other relevant facts and circumstances, there were reason to suspect that the health care provider's refusal to participate in certain health information exchange efforts were part of a larger pattern of behavior or a course of conduct to knowingly and willfully limit the compatibility or interoperability of the certified EHR technology.

    Comment: Several commenters were concerned about the requirement to respond to requests to retrieve or exchange electronic health information. Commenters stated that health care providers may have difficulty responding to requests from unaffiliated health care providers or from EHR vendors with whom they do not have a business associate agreement.

    A few commenters were concerned that health care providers may be penalized for limiting or restricting access to information despite not knowing whether an unaffiliated health care provider or EHR vendor is authorized or permitted to access a patient's PHI. Another commenter noted that some state laws require written patient consent before certain types of health information may be exchanged electronically. Some commenters contested the technical feasibility of exchanging information with unaffiliated health care providers and across disparate certified EHR technologies, explaining that federally-adopted standards such as the Direct standard do not support such robust information sharing. In particular, there is no widely-accepted and standardized method to encode requests in Direct messages, which means that a receiving system will often be unable to understand what information is being requested.

    Response: The ability to exchange and use information across multiple systems and health care organizations is integral to the concept of interoperability and, consequently, to a health care provider's demonstration under section 106(b)(2) of the MACRA. Consistent with its attestation, a health care provider must implement technologies, standards, policies, practices, and agreements reasonably calculated to ensure, to the greatest extent practicable and permitted by law, that the certified EHR technology was, at all relevant times, implemented in a manner that allowed for timely access by patients to their electronic health information (including the ability to view, download, and transmit this information) and implemented in a manner that allowed for the timely, secure, and trusted bi-directional exchange of structured electronic health information with other health care providers, including unaffiliated providers, and with disparate certified EHR technology and vendors.

    We recognize that technical, legal, and other practical constraints may prevent a health care provider from responding to some requests to access, exchange, or use electronic health information in a health care provider's certified EHR technology, even when the requester has permission or the right to access and use the information. We reiterate that in these circumstances a health care provider probably would not have knowingly and willfully limited or restricted the compatibility or interoperability of the certified EHR technology. We expect that these technical and other challenges will become less significant over time and that health care providers will be able to respond to requests from an increasing range of health care providers and health IT systems.

    In response to the concerns regarding the disclosure of PHI without a business associate agreement, we remind commenters that the HIPAA Privacy Rule expressly permits covered entities to disclose PHI for treatment, payment, and operations. We refer commenters to numerous guidance documents and fact sheets issued by the HHS Office for Civil Rights and ONC on this subject.14 We also caution that mischaracterizing or misapplying the HIPAA Privacy Rule or other legal requirements in ways that are likely to limit or restrict the compatibility or interoperability of certified EHR technology might be inconsistent with the requirements of section 106(b)(2) of the MACRA and a health care provider's information blocking attestation. As an example, a health system that maintains a policy or practice of refusing to share PHI with unaffiliated health care providers on the basis of generalized and unarticulated “HIPAA compliance concerns” could be acting contrary to section 106(b)(2) and the information blocking attestation. The same would be true were a health care provider to inform a patient that it is unable to share information electronically with the patient's other health care professionals “due to HIPAA.”

    14See, e.g., HHS Office for Civil Rights, Understanding Some of HIPAA's Permitted Uses and Disclosures, http://www.hhs.gov/hipaa/for-professionals/privacy/guidance/permitted-uses/index.html (last accessed Sept. 1, 2016); see also Lucia Savage and Aja Brooks, The Real HIPAA Supports Interoperability, Health IT Buzz Blog, https://www.healthit.gov/buzz-blog/electronic-health-and-medical-records/interoperability-electronic-health-and-medical-records/the-real-hipaa-supports-interoperability/ (last accessed Sept. 1, 2016).

    Comment: A small number of commenters, primarily health IT developers, recommended that any requirements to exchange information be limited to the use of certified health IT capabilities required by the 2015 Edition health IT certification criteria or 2014 Edition EHR certification criteria (45 CFR 170.102), as applicable. In contrast, a commenter stated that a significant amount of health information is exchanged through means other than the standards and capabilities supported by ONC's certification criteria for health IT. The commenter cited as an example the widespread use of health information exchanges (HIEs) and network-to-network exchanges, which may or may not incorporate the use of certified health IT capabilities. The commenter insisted that these approaches should not be regarded as information blocking and should be treated as evidence that a health care provider is supporting and participating in efforts to exchange electronic health information. Another commenter stated that the requirement to respond to requests to retrieve or exchange electronic health information should be satisfied by connecting certified EHR technology to a network that can be accessed by other health care providers.

    Response: We decline to limit the attestation to the use of certified health IT capabilities or to give special weight to any particular form or method of exchange. As observed by the commenters, certified EHR technology may be implemented and used in many different ways that support the exchange and use of electronic health information. A health care provider's use of these forms and methods of exchange may be relevant to determining whether it acted in good faith to implement and use its certified EHR technology in a manner that supported and did not limit or restrict the compatibility or interoperability of the technology. As an example, certified EHR technology may come bundled with a health information service provider (HISP) that limits the ability to send and receive Direct messages to certain health care providers, such as those whose EHR vendor participates in a particular trust network. To overcome this or other technical limitations, a health care provider may participate in a variety of other health information sharing arrangements, whether to expand the reach of its Direct messaging capabilities or to enable other methods of exchanging and using electronic health information in its certified EHR technology. We believe that these and similar actions may be relevant to and should not be excluded from the consideration of the health care provider's overall actions to enable the interoperability of its certified EHR technology and to respond in good faith to requests to access or exchange electronic health information.

    Comment: Some commenters recommended that we revise the language of the attestation in whole or in part. Most of these commenters suggested removing certain language or statements, or combining them, to make the requirements of the attestation easier to understand or comply with. One commenter suggested that we abandon the proposed language and adopt the commenter's alternative language, which would require health care providers to attest that they established a workflow for responding to requests to retrieve or exchange electronic health information and did not knowingly or willfully limit or restrict the compatibility or interoperability of certified EHR technology during the development or implementation of the workflow, or in any subsequent actions related to the workflow.

    Response: We appreciate commenters' suggestions, but for the reasons we have explained, we do not believe it is appropriate to remove or to further simplify the language of the attestation. Although we do not adopt the alternative language suggested by one commenter, we observe that the actions the commenter describes are consistent with our expectation that health care providers implement certified EHR technology in a manner reasonably calculated to facilitate interoperability, to the greatest extent practicable, and respond in good faith to requests to retrieve or exchange information.

    Comment: Several commenters claimed that the proposed attestation is not necessary because most health care providers are not knowingly or willfully engaging in actions to limit or restrict the interoperability or compatibility of certified EHR technology, or to otherwise interfere with the exchange or use of electronic health information. Some of these commenters, while acknowledging that some health care providers may be engaging in actions that could limit or restrict the interoperability or compatibility of certified EHR technology, maintained that such actions are justified or are beyond a health care provider's control. Some commenters supported an attestation for hospitals or health systems but not for physicians, on the basis that the majority of individual EHR users are not engaging in information blocking.

    Response: The belief that health care providers do not engage in information blocking is contradicted by an increasing body of evidence and research, by the experience of CMS and ONC, and by many of the comments on this proposal.15 It is also inconsistent with section 106(b)(2) of the MACRA, which is entitled “Preventing Blocking The Sharing Of Information” and expressly requires health care providers to demonstrate that they did not knowingly and willfully take action to limit or restrict the interoperability of certified EHR technology.

    15See, for example, Julia Adler-Milstein and Eric Pfeifer, et al. referenced in this final rule with comment period.

    We need not contemplate whether health systems or any other class of health care provider is more predisposed to engage in information blocking, because the attestation we are finalizing implements section 106(b)(2) of the MACRA, which extends to all MIPS eligible clinicians, eligible clinicians part of an APM Entity, EPs, eligible hospitals, and CAHs.

    Comment: Some commenters suggested that, in lieu of an attestation, CMS allow health care providers to demonstrate compliance with section 106(b)(2) by reporting on objectives and measures under the Medicare and Medicaid EHR Incentive Programs or the advancing care information performance category of MIPS. Commenters noted that health care providers participating in these programs must utilize CEHRT, including application programming interfaces (APIs) that provide access to patient data, and that participation in these programs should itself provide an adequate assurance that health care providers are not knowingly and willfully limiting or restricting the compatibility or interoperability of certified EHR technology.

    Response: We do not believe that a health care provider's reporting of objectives and measures can provide the demonstration required by section 106(b)(2) of the MACRA. The compatibility or interoperability of certified EHR technology may be limited or restricted in numerous and varied ways that are difficult to anticipate and that may not be reflected in objectives and measures under the EHR Incentive Programs and MIPS, which address a broad range of aspects related to the use of certified health IT. It is therefore entirely possible that a health care provider could implement and use certified EHR technology and meet relevant objectives and measures while still engaging in many actions that limit or restrict compatibility or interoperability. While in theory we could specify additional objectives and measures specifically related to the prevention of health information blocking, at this time we believe a less burdensome and more effective way to obtain adequate assurances that health care providers have not engaged in these prohibited practices is through the information blocking attestation we proposed and are finalizing.

    Comment: Many commenters stated that EHR vendors, not health care providers, are the primary cause of existing barriers to interoperability and information exchange. Many of these commenters stated that EHR vendors are engaging in information blocking, with some commenters alleging that EHR vendors are routinely engaging in these practices. Commenters alleged that EHR vendors are unwilling to share data in certain circumstances or charge fees that make such sharing cost-prohibitive for most physicians, which poses a significant barrier to interoperability and the efficient exchange of electronic health information.

    For these reasons, many commenters suggested that CMS or ONC require EHR vendors and other health IT developers to submit an information blocking attestation or impose other requirements and penalties on developers to deter them from limiting or restricting the interoperability of certified EHR technology and to encourage them to proactively facilitate the sharing of electronic health information. For example, commenters supported the decertification of EHR vendors that charge excessive fees or engage in other practices that may constitute information blocking.

    Response: We agree that eligible clinicians, EPs, eligible hospitals, and CAHs are by no means the only persons or entities that may engage in information blocking. However, requirements for EHR vendors or other health IT developers are beyond the scope of section 106(b)(2) of the MACRA and this rulemaking.

    We note that a series of legislative proposals included in the President's Fiscal Year 2017 Budget would prohibit information blocking by health IT developers and others and would provide civil monetary penalties and other remedies to deter this behavior.16 In addition, ONC has taken a number of immediate actions to expose and discourage information blocking by health IT developers, including requiring developers to disclose material information about limitations and types of costs associated with their certified health IT (see 45 CFR 170.523(k)(1); see also 80 FR 62719) and requiring ONC-ACBs to conduct more extensive and more stringent surveillance of certified health IT, including surveillance of certified health IT “in the field” (see 45 CFR 170.556; see also 80 FR 62707). ONC has also published resources, including a new guide to EHR contracts that can assist health care providers to compare EHR vendors and products and negotiate appropriate contract terms that do not block access to data or otherwise impair the use of certified EHR technology.17

    16See ONC, FY 2017: Justification of Estimates for Appropriations Committee, https://www.healthit.gov/sites/default/files/final_onc_cj_fy_2017_clean.pdf (2016), Appendix I (explaining that current law does not directly prohibit or provide an effective means to investigate and address information blocking by EHR vendors, health care providers, and other persons and entities, and proposing that Congress prohibit and prescribe appropriate penalties for these practices, including civil monetary penalties and program exclusion).

    17 ONC, EHR Contracts Untangled: Selecting Wisely, Negotiating Terms, and Understanding the Fine Print (Sept. 2016), available at https://www.healthit.gov/sites/default/files/EHR_Contracts_Untangled.pdf.

    Comment: Several commenters requested clarification regarding the documentation that would be required to demonstrate compliance with the terms of the attestation so that health care providers could both better understand and prepare for an audit of this requirement. Among other topics, commenters requested guidance on expected documentation requirements related to particular technologies or capabilities as well as a health care provider's responsiveness to requests to exchange information.

    Response: We acknowledge commenters' concerns about required documentation in the event of an audit. To alleviate those concerns, we clarify that we will provide guidance to auditors relating to the final policy and the attestation process. This guidance should include requiring auditors to work closely with health care providers on the supporting documentation needed for the health care provider's individual case. We further stress that audit determinations are made on a case-by-case basis, which allows us to give individual consideration to each health care provider. We believe that such case-by-case review will allow us to adequately account for the varied circumstances that may be relevant to assessing compliance.

    Comment: Some commenters stated that it would be inappropriate for ONC or an ONC-ACB to perform surveillance of a health care provider's certified EHR technology to determine whether the health care provider is limiting or restricting interoperability.

    Response: The scope of ONC-ACB surveillance or, if finalized, ONC's review of a health care provider's certified EHR technology is limited to determining whether the technology continues to perform in accordance with the requirements of the ONC Health IT Certification Program. Because this oversight focuses on the performance of the technology itself, not on the actions of health care providers or users of the technology, we do not anticipate that information obtained in the course of such ONC-ACB surveillance or ONC review would be used to audit a health care provider's compliance with its information blocking attestation. As a caveat, we acknowledge that if ONC became aware that a health care provider had submitted a false attestation or engaged in other actions in violation of federal law or requirements, ONC could share that information with relevant federal entities.

    Comment: Some commenters asked how often attestations would be required (for example, once per year). Commenters also stated that the information blocking attestation should apply prospectively, possibly beginning with reporting periods commencing in 2017, to provide reasonable notice to affected parties.

    Response: MIPS eligible clinicians, eligible clinicians part of an APM Entity, EPs, eligible hospitals, and CAHs must submit an information blocking attestation covering each reporting period during which they seek to demonstrate that they were a meaningful EHR user or for which they seek to report on the advancing care information performance category. We agree that the attestation requirements should apply only to actions occurring after the effective date of this final rule with comment period. For this reason and to promote alignment with other reporting requirements, we are finalizing the information blocking attestation for attestations covering EHR reporting periods and MIPS performance periods beginning on or after January 1, 2017.

    After review and consideration of public comment, we are finalizing the attestation requirement as proposed. We are finalizing this requirement for EPs, eligible hospitals, and CAHs under the Medicare and Medicaid EHR Incentive Programs and for eligible clinicians under the advancing care information performance category in MIPS, including eligible clinicians who report on the advancing care information performance category as part of an APM Entity group under the APM scoring standard. We are finalizing this requirement for attestations covering EHR reporting periods and MIPS performance periods beginning on or after January 1, 2017.

    We have revised and are finalizing the proposed regulation text accordingly. Specifically, we are finalizing the revisions to the definition of a meaningful EHR user at § 495.4 and we are adding the same to the definition of a meaningful EHR user for MIPS at § 414.1305. We are finalizing the attestation requirements at § 495.40(a)(2)(i)(I) and (b)(2)(i)(I) to require such an attestation from EPs, eligible hospitals, and CAHs as part of their demonstration of meaningful EHR use under the Medicare and Medicaid EHR Incentive Programs. We are also finalizing § 414.1375(b)(3) to require this attestation from all eligible clinicians under the advancing care information performance category of MIPS, including eligible clinicians who report on the advancing care information performance category as part of an APM Entity group under the APM scoring standard as discussed in section II.E.5.h. of this final rule with comment period.

    D. Definitions

    At § 414.1305, in subpart O, we proposed definitions for the following terms:

    • Additional performance threshold.

    • Advanced Alternative Payment Model (Advanced APM).

    • Advanced APM Entity.

    • Affiliated practitioner.

    • Affiliated practitioner list.

    • Alternative Payment Model (APM).

    • APM Entity.

    • APM Entity group.

    • APM Incentive Payment.

    • Attestation.

    • Attributed beneficiary.

    • Attribution-eligible beneficiary.

    • Certified Electronic Health Record Technology (CEHRT).

    • CMS-approved survey vendor.

    • CMS Web Interface.

    • Covered professional services.

    • Eligible clinician.

    • Episode payment model.

    • Estimated aggregate payment amounts.

    • Final score.

    • Group.

    • Health Professional Shortage Areas (HPSA).

    • High priority measure.

    • Hospital-based MIPS eligible clinician.

    • Improvement activities.

    • Incentive payment base period.

    • Low-volume threshold.

    • Meaningful EHR user for MIPS.

    • Measure benchmark.

    • Medicaid APM.

    • Medical Home Model.

    • Medicaid Medical Home Model.

    • Merit-based Incentive Payment System (MIPS).

    • MIPS APM.

    • MIPS eligible clinician.

    • MIPS payment year.

    • New Medicare-Enrolled MIPS eligible clinician.

    • Non-patient facing MIPS eligible clinician.

    • Other Payer Advanced APM.

    • Other payer arrangement.

    • Partial Qualifying APM Participant (Partial QP).

    • Partial QP patient count threshold.

    • Partial QP payment amount threshold.

    • Participation List.

    • Performance category score.

    • Performance standards.

    • Performance threshold.

    • Qualified Clinical Data Registry (QCDR).

    • Qualified registry.

    • QP patient count threshold.

    • QP payment amount threshold.

    • QP Performance Period.

    • Qualifying APM Participant (QP).

    • Rural areas.

    • Small practices.

    • Threshold Score.

    • Topped out non-process measure.

    • Topped out process measure.

    Some of these terms are new in conjunction with MIPS and APMs, while others are used in existing CMS programs. For the new terms and definitions, we note that some of them have been developed alongside policies of this regulation while others are defined by statute. Specifically, the following terms and definitions were established by the MACRA: APM, Eligible Alternative Payment Entity (which we refer to as an Advanced APM Entity), Composite Performance Score (which we refer to as final score), Eligible professional or EP (which we refer to as an eligible clinician), MIPS Eligible professional or MIPS EP (which we refer to as a MIPS eligible clinician), MIPS adjustment factor (which we refer to as a MIPS payment adjustment factor), additional positive MIPS payment adjustment factor (which we refer to as additional MIPS payment adjustment factor), Qualifying APM Participant, and Partial Qualifying APM Participant.

    These terms and definitions are discussed in detail in relevant sections of this final rule with comment period.

    E. MIPS Program Details 1. MIPS Eligible Clinicians

    We believe a successful MIPS program fully equips clinicians identified as MIPS eligible clinicians with the tools and incentives to focus on improving health care quality, efficiency, and patient safety for all their patients. Under MIPS, MIPS eligible clinicians are incentivized to engage in proven improvement measures and activities that impact patient health and safety and are relevant for their patient population. One of our strategic goals in developing the MIPS program is to advance a program that is meaningful, understandable, and flexible for participating MIPS eligible clinicians. One way we believe this will be accomplished is by minimizing MIPS eligible clinicians' burden. We have made an effort to focus on policies that remove as much administrative burden as possible from MIPS eligible clinicians and their practices while still providing meaningful incentives for high-quality, efficient care. In addition, we hope to balance practice diversity with flexibility to address varied MIPS eligible clinicians' practices. Examples of this flexibility include special consideration for non-patient facing MIPS eligible clinicians, an exclusion from MIPS for eligible clinicians who do not exceed the low-volume threshold, and other proposals discussed below.

    a. Definition of a MIPS Eligible Clinician

    Section 1848(q)(1)(C)(i) of the Act, as added by section 101(c)(1) of the MACRA, outlines the general definition of a MIPS eligible clinician for the MIPS program. Specifically, for the first and second year for which MIPS applies to payments (and the performance period for such years), a MIPS eligible clinician is defined as a physician (as defined in section 1861(r) of the Act), a physician assistant, nurse practitioner, clinical nurse specialist (as such terms are defined in section 1861(aa)(5) of the Act), a certified registered nurse anesthetist (as defined in section 1861(bb)(2) of the Act), and a group that includes such professionals. The statute also provides flexibility to specify additional eligible clinicians (as defined in section 1848(k)(3)(B) of the Act) as MIPS eligible clinicians in the third and subsequent years of MIPS. As discussed in the proposed rule (81 FR 28177 through 28178), section 1848(q)(1)(C)(ii) and (v) of the Act specifies several exclusions from the definition of a MIPS eligible clinician, which apply to clinicians who are determined to be new Medicare-enrolled eligible clinicians, QPs and Partial QPs, or who do not exceed the low-volume threshold pertaining to the dollar value of billed Medicare Part B allowed charges or Part B-enrolled beneficiary count. In addition, section 1848(q)(1)(A) of the Act requires the Secretary to permit any eligible clinician (as defined in section 1848(k)(3)(B) of the Act) who is not a MIPS eligible clinician the option to volunteer to report on applicable measures and activities under MIPS. Section 1848(q)(1)(C)(vi) of the Act clarifies that a MIPS payment adjustment factor (or additional MIPS payment adjustment factor) will not be applied to an individual who is not a MIPS eligible clinician for a year, even if such individual voluntarily reports measures under MIPS. For purposes of this section of the final rule with comment period, we use the term “MIPS payment adjustment” to refer to the MIPS payment adjustment factor (or additional MIPS payment adjustment factor) as specified in section 1848(q)(1)(C)(vi) of the Act.

    To implement the MIPS program we must first establish and define a MIPS eligible clinician in accordance with the statutory definition. We proposed to define a MIPS eligible clinician at § 414.1305 as a physician (as defined in section 1861(r) of the Act), a physician assistant, nurse practitioner, and clinical nurse specialist (as such terms are defined in section 1861(aa)(5) of the Act), a certified registered nurse anesthetist (as defined in section 1861(bb)(2) of the Act), and a group that includes such professionals. In addition, we proposed that QPs and Partial QPs who do not report data under MIPS, low-volume threshold eligible clinicians, and new Medicare-enrolled eligible clinicians as defined at § 414.1305 would be excluded from this definition per the statutory exclusions defined in section 1848(q)(1)(C)(ii) and (v) of the Act. We intend to consider using our authority under section 1848(q)(1)(C)(i)(II) of the Act to expand the definition of a MIPS eligible clinician to include additional eligible clinicians (as defined in section 1848(k)(3)(B) of the Act) through rulemaking in future years.

    Additionally, in accordance with section 1848(q)(1)(A) and (q)(1)(C)(vi) of the Act, we proposed to allow eligible clinicians who are not MIPS eligible clinicians, as defined at proposed § 414.1305, the option to voluntarily report measures and activities for MIPS. We proposed at § 414.1310(d) that those eligible clinicians who are not MIPS eligible clinicians, but who voluntarily report on applicable measures and activities specified under MIPS, would not receive an adjustment under MIPS; however, they would have the opportunity to gain experience in the MIPS program. We were particularly interested in public comments regarding the feasibility and advisability of voluntary reporting in the MIPS program for entities such as RHCs and/or FQHCs, including comments regarding the specific technical issues associated with reporting that are unique to these health care providers. We anticipate some eligible clinicians that will not be MIPS eligible clinicians during the first 2 years of MIPS, such as physical and occupational therapists, clinical social workers, and others that have been reporting quality measures under the PQRS for a number of years, will want to have the ability to continue to report and gain experience under MIPS. We requested comments on these proposals.

    The following is a summary of the comments we received regarding our proposed definition of the term MIPS eligible clinician and our proposal to allow eligible clinicians who are not MIPS eligible clinicians the option to voluntarily report measures and activities for MIPS.

    Comment: Commenters supported the option for RHCs and FQHCs to voluntarily report, but noted that RHCs and FQHCs may not have experience using EHR technology or the resources to invest in CEHRT and requested that CMS adjust for the social determinants of health status.

    Response: We appreciate the feedback on the role of socioeconomic status in quality measurement. We continue to evaluate the potential impact of social risk factors on measure performance. One of our core objectives is to improve beneficiary outcomes, and we want to ensure that complex patients as well as those with social risk factors receive excellent care.

    Comment: Several commenters expressed support for the proposed definition of a MIPS eligible clinician and the proposal to allow eligible clinicians who are not MIPS eligible to voluntarily report, which encourages interdisciplinary and team-based services necessary to address the full spectrum of patient and family needs and quality of life concerns throughout the care continuum and across health system and community-based care settings. One commenter expressed appreciation for CMS using practitioner-neutral language and including nurse practitioners.

    Response: We appreciate the support from commenters.

    Comment: In regard to the definition of a MIPS eligible clinician, one commenter recommended that certified registered nurse anesthetists be removed from the list of MIPS eligible clinicians because there are no applicable measures for their job duties and they do not treat diseases. Another commenter requested that CMS align the definition of an eligible clinician in both the Medicare and Medicaid programs because nurse practitioners do not qualify for the Medicare EHR Incentive Program for Eligible Professionals, but do qualify for the Medicaid EHR Incentive Program for Eligible Professionals. One commenter expressed concern with the inclusion of nurse practitioners and physician assistants in the definition of a MIPS eligible clinician because such providers would need to purchase and implement an EHR system in a short timeframe, and requested that CMS postpone the inclusion of nurse practitioners and physician assistants.

    Response: We appreciate the recommendations from the commenters and note that section 1848(q)(1)(C)(i) of the Act defines a MIPS eligible clinician, for the first and second MIPS payment years, as a physician (as defined in section 1861(r) of the Act), a physician assistant, nurse practitioner, clinical nurse specialist (as such terms are defined in section 1861(aa)(5) of the Act), a certified registered nurse anesthetist (as defined in section 1861(bb)(2) of the Act), and a group that includes such professionals. We do not have discretion under the statute to amend the definition of a MIPS eligible clinician by excluding clinician types that the statute expressly includes, such as certified registered nurse anesthetists, nurse practitioners, and physician assistants. We note, however, that several policies may alleviate the concerns of commenters regarding the availability of applicable measures and activities, and health IT implementation costs. For example, as discussed in section II.E.3.c. of this final rule with comment period, we are finalizing a higher low-volume threshold to ensure that MIPS eligible clinicians who do not exceed $30,000 of billed Medicare Part B allowed charges or 100 Part B-enrolled Medicare beneficiaries are excluded from MIPS. Also, we note that while non-patient facing MIPS eligible clinicians are not exempt from participating in MIPS or a performance category entirely, as discussed in section II.E.1.b. of this final rule with comment period, we are establishing a process that applies, to the extent feasible and appropriate, alternative measures or activities for non-patient facing MIPS eligible clinicians that fulfill the goals of the applicable performance category. In addition, as discussed in section II.E.6.b.(2) of this final rule with comment period, we may re-weight performance categories if there are not sufficient measures applicable and available to each MIPS eligible clinician to ensure that MIPS eligible clinicians, including those who are non-patient facing, who do not have sufficient alternative measures and activities that are applicable and available in a performance category are scored appropriately.
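    For illustration only, the low-volume threshold exclusion described in this response can be expressed as a simple determination based on a clinician's billed Medicare Part B allowed charges and Part B-enrolled beneficiary count. The following sketch reflects our reading of the finalized threshold stated above; the function and constant names are hypothetical and do not reflect any CMS system.

```python
# Hedged sketch of the finalized low-volume threshold exclusion, as described
# in the response above: a clinician who does not exceed $30,000 of billed
# Medicare Part B allowed charges, or who does not care for more than 100
# Part B-enrolled Medicare beneficiaries, is excluded from MIPS.
# All names here are illustrative placeholders.

ALLOWED_CHARGES_THRESHOLD = 30_000  # dollars of billed Part B allowed charges
BENEFICIARY_THRESHOLD = 100         # Part B-enrolled Medicare beneficiaries


def exceeds_low_volume_threshold(allowed_charges, beneficiary_count):
    """True only if the clinician exceeds BOTH components of the threshold."""
    return (allowed_charges > ALLOWED_CHARGES_THRESHOLD
            and beneficiary_count > BENEFICIARY_THRESHOLD)


def excluded_as_low_volume(allowed_charges, beneficiary_count):
    """A clinician who does not exceed the threshold is excluded from MIPS."""
    return not exceeds_low_volume_threshold(allowed_charges, beneficiary_count)


# Example: $25,000 in allowed charges and 150 beneficiaries -> excluded,
# because the charges component of the threshold is not exceeded.
assert excluded_as_low_volume(25_000, 150)
```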

    In addition, we recognize that under MIPS, there will be more eligible clinicians subject to the requirements of EHR reporting than were previously eligible under the Medicare and/or Medicaid EHR Incentive Program, including hospital-based MIPS eligible clinicians, nurse practitioners, physician assistants, clinical nurse specialists, and certified registered nurse anesthetists. Since many of these non-physician clinicians are not eligible to participate in the Medicare and/or Medicaid EHR Incentive Program, we have little evidence as to whether there are sufficient measures applicable and available to these types of MIPS eligible clinicians under our proposals for the advancing care information performance category. As a result, we have provided additional flexibilities to mitigate negative adjustments for the first performance year (CY 2017) in order to allow hospital-based MIPS eligible clinicians, nurse practitioners, physician assistants, clinical nurse specialists, certified registered nurse anesthetists, and other MIPS eligible clinicians to familiarize themselves with the MIPS program. Section II.E.5.g.(8) of this final rule with comment period describes our final policies regarding the re-weighting of the advancing care information performance category within the final score, in which we would assign a weight of zero when there are not sufficient measures applicable and available.

    Comment: One commenter requested that suppliers of portable x-ray and independent diagnostic testing facility services be excluded from the definition of a MIPS eligible clinician and recommended that CMS create an alternate pathway allowing for adequate payment updates to reflect the rising cost of care.

    Response: We note that the MIPS payment adjustment applies only to the amount otherwise paid under Part B with respect to items and services furnished by a MIPS eligible clinician during a year. As discussed in section II.E.7. of this final rule with comment period, we will apply the MIPS adjustment at the TIN/NPI level. In regard to suppliers of portable x-ray and independent diagnostic testing facility services, we note that such suppliers are not themselves included in the definition of a MIPS eligible clinician. However, there may be circumstances in which a MIPS eligible clinician would furnish the professional component of a Part B covered service that is billed by such a supplier. For example, a radiologist who is a MIPS eligible clinician could furnish the interpretation and report (professional component) for an x-ray service, and the portable x-ray supplier could bill for the global x-ray service (combined technical and professional component) or bill separately for the professional component of the x-ray service. In that case, the professional component (billed either on its own or as part of the global service) could be considered a service for which payment is made under Part B and furnished by a MIPS eligible clinician. Those services could be subject to MIPS adjustment based on the MIPS eligible clinician's performance during the applicable performance period. Because, however, those services are billed by suppliers that are not MIPS eligible clinicians, it is not operationally feasible for us at this time to associate those billed allowed charges with a MIPS eligible clinician at an NPI level in order to include them for purposes of applying any MIPS payment adjustment.

    Comment: One commenter indicated that the status of pathologists working in independent laboratories is unclear with regard to the definition of a MIPS eligible clinician and requested clarification as to whether or not they would be included given that they were considered EPs under PQRS.

    Response: We note that pathologists, including pathologists practicing in independent laboratories, are considered MIPS eligible clinicians and are thus required to participate in MIPS and are subject to the MIPS payment adjustment. The MIPS payment adjustment applies only to the amount otherwise paid under Part B with respect to items and services furnished by a MIPS eligible clinician during a year, and we will apply the MIPS adjustment at the TIN/NPI level (see section II.E.7. of this final rule with comment period). For items and services furnished by a pathologist practicing in an independent laboratory that are billed by the laboratory, such items and services may be subject to MIPS adjustment based on the MIPS eligible clinician's performance during the applicable performance period. For those billed Medicare Part B allowed charges we are able to associate with a MIPS eligible clinician at an NPI level, such items and services furnished by such a pathologist would be included for purposes of applying any MIPS payment adjustment.

    Comment: A few commenters encouraged CMS to expand the list of MIPS eligible clinicians further to promote integrated care. One commenter suggested that we include certified nurse midwives as MIPS eligible clinicians. Another commenter encouraged CMS to ensure that specialists can successfully participate in the MIPS. One commenter indicated that MIPS accommodates the masses of physicians, but falls short in including consulted clinicians. A few commenters requested that we expand the definition of a MIPS eligible clinician to include therapists, dieticians, social workers, and other Medicare Part B suppliers as soon as possible in order for such clinicians to earn positive MIPS payment adjustments. One commenter recommended that the definition of MIPS eligible clinician be expanded to include all Medicare supplier types, including ambulatory services.

    Response: We appreciate the suggestions from the commenters and will take them into account as we consider expanding the definition of a MIPS eligible clinician for year 3 in future rulemaking. We interpret the comment regarding consulted clinicians to refer to locum tenens and clinicians contracted by a practice. We note that contracted clinicians who meet the definition of a MIPS eligible clinician are required to participate in MIPS. In regard to locum tenens clinicians, they bill for the items and services they furnish using the NPI of the clinician for whom they are substituting and, as such, do not bill Medicare in their own right for the items and services they furnish. As such, locum tenens clinicians are not MIPS eligible clinicians when they practice in that capacity.

    Comment: One commenter indicated that it is feasible to include physical therapists in the expanded definition of a MIPS eligible clinician given that physical therapists have been included in PQRS since 2007. The commenter noted that there will be a negative impact on the quality reporting rates of physical therapists if they are excluded from MIPS in 2017 and 2018. Another commenter recommended that CMS define provisions for physical therapists, occupational therapists, and speech language pathologists as soon as possible in order to provide sufficient time for building new systems for operation in year 3 of MIPS. A few commenters requested clarification on how MIPS will apply to physical therapists, occupational therapists, and speech language pathologists working with Medicare beneficiaries. One commenter suggested that therapists participating in MIPS should be scored using the same scoring weights for the quality and cost performance categories that apply to MIPS eligible clinicians in the first 2 years. The commenter noted that the same transition scoring would be fair and could mitigate severe penalties for clinicians new to MIPS.

    Response: We appreciate the concerns and recommendations from the commenters. In regard to expanding the definition of a MIPS eligible clinician for year 3, we will consider the suggestions from the commenters. We anticipate that some eligible clinicians who will not be included in the definition of a MIPS eligible clinician during the first 2 years of MIPS, such as physical and occupational therapists, clinical social workers, and others that have been reporting quality measures under the PQRS for a number of years, will want to have the ability to continue to report and gain experience under MIPS. We note that eligible clinicians who are not included in the definition of a MIPS eligible clinician during the first 2 years of MIPS (or any subsequent year) may voluntarily report on measures and activities under MIPS, but will not be subject to the MIPS payment adjustment. We do intend however to provide informative performance feedback to clinicians who voluntarily report to MIPS, which would include the same performance category and final score rules that apply to all MIPS eligible clinicians. We believe this informational performance feedback will help prepare those clinicians who voluntarily report to MIPS.

    Comment: Some commenters requested that CMS allow facility-based clinicians who provide outpatient services, such as physical therapists, occupational therapists, and speech language pathologists, to participate in MIPS and earn MIPS payment adjustments by the third year of the program. One commenter expressed concern that without inclusion in the Quality Payment Program, these facility-based clinicians would be disadvantaged. Another commenter expressed concern that the criteria for including non-physician clinicians later in MIPS are not clear and recommended that clarity be provided, including performance categories that are specific to each specialty and type of practice.

    Response: We appreciate the concerns and recommendations from the commenters, and will take them into account as we consider expanding the definition of a MIPS eligible clinician for year 3 in future rulemaking.

    Comment: One commenter did not support expanding the definition of a MIPS eligible clinician in year 3. The commenter noted that none of their physical therapists currently use CEHRT and that switching in year 3 would require significant capital and personnel. The commenter recommended postponing any expansion until year 4 or 5.

    Response: We appreciate the commenter expressing concerns and recognize that eligible clinicians and MIPS eligible clinicians will have a spectrum of experiences with using EHR technology. As we consider expanding the definition of a MIPS eligible clinician to include additional eligible clinicians in year 3, we will consider how such eligible clinicians would be scored for each performance category in future rulemaking.

    Comment: One commenter recommended that CMS convene a technical expert panel of eligible clinicians who will not be included in the definition of a MIPS eligible clinician during the first 2 years of MIPS to help adapt the Quality Payment Program to their needs.

    Response: We thank the commenter for the suggestion and will consider the recommendation as we consider expanding the definition of a MIPS eligible clinician to include additional eligible clinicians for year 3 in future rulemaking and prepare for the operationalization of the expanded definition. We are committed to continuously engaging stakeholders as we implement MIPS and establish and operationalize future policies.

    Comment: One commenter expressed concern about the difficulties hospital-based clinicians have had reporting under PQRS and recommended offering hospital-based clinicians more flexibility in adopting MIPS.

    Response: As previously noted, we recognize that there may not be sufficient measures applicable and available for certain performance categories for hospital-based MIPS eligible clinicians participating in MIPS. In section II.E.5.g.(8)(a)(i) of this final rule with comment period, we describe the re-weighting of the advancing care information performance category when there are not sufficient measures applicable and available for hospital-based MIPS eligible clinicians.

    Comment: A few commenters expressed concerns that our MIPS proposals focused on clinicians in large groups or who are hospital-based and did not include non-physician clinicians. One commenter requested that non-physician clinicians be recognized for their critical role in the health delivery system and providing high quality, low cost health care to the Medicare population.

    Response: We disagree with the commenters and note that the definition of a MIPS eligible clinician includes non-physician clinicians such as physician assistants, nurse practitioners, clinical nurse specialists, and certified registered nurse anesthetists. As previously noted, in future rulemaking, we will consider expanding the definition of a MIPS eligible clinician to include additional eligible clinicians starting in year 3.

    Comment: A few commenters requested clarification regarding whether or not Doctors of Chiropractic would be able to participate in MIPS. Another commenter appreciated that Doctors of Chiropractic are included as MIPS eligible clinicians, but believed that chiropractors would be put at a severe disadvantage in participating in MIPS or APMs due to CMS' restrictions on chiropractic coverage. The commenter encouraged CMS to expand the billing codes for Doctors of Chiropractic to cover the full scope of licensure.

    Response: We note that chiropractors are included in the definition of “physician” under section 1861(r) of the Act, and therefore, are MIPS eligible clinicians. In regard to the comment pertaining to the expansion of billing codes for chiropractors, we note that such comment is out-of-scope given that we did not propose any billing code policies in the proposed rule.

    Comment: One commenter requested clarification on whether or not participation in MIPS is mandatory.

    Response: We note that clinicians who are included in the definition of a MIPS eligible clinician as defined in section II.E.1.a. of this final rule with comment period are required to participate in MIPS unless they are excluded from the definition of a MIPS eligible clinician based on one of the three exclusions described in sections II.E.3.a., II.E.3.b., and II.E.3.c. of this final rule with comment period.

    Comment: One commenter requested clarification on how CMS will treat hospitalist services under MIPS, specifically, what measures they will report, whether the hospital's PFS payment amount for the hospitalists' services will be subject to the MIPS payment adjustment, and how hospitalists should report data given that they do not have an office practice or an EHR with which to participate.

    Response: We note that hospitalists are required to participate in MIPS unless otherwise excluded. As discussed in section II.E.6.b.(2) of this final rule with comment period, we may re-weight performance categories if there are not sufficient measures applicable and available to each MIPS eligible clinician to ensure that MIPS eligible clinicians, including hospitalists, who do not have sufficient alternative measures and activities that are applicable and available in a performance category are scored appropriately. For hospitalists who meet the definition of a hospital-based MIPS eligible clinician, section II.E.5.g.(8)(a)(i) of this final rule with comment period describes the re-weighting of the advancing care information performance category within the final score, in which we would assign a weight of zero when there are not sufficient measures applicable and available for hospital-based MIPS eligible clinicians. In section II.E.5.b.(5) of the proposed rule (81 FR 28192), we sought comment on the application of additional system measures, which would directly impact hospitalists, and intend to address such policies in future rulemaking. Also, we note that the MIPS payment adjustment would be applied to the Medicare Part B payments for items and services furnished by a hospital-based MIPS eligible clinician.
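    As an illustration of the re-weighting described in this response, the sketch below assigns a weight of zero to a performance category and shifts the freed weight to another category so that the final-score weights still total one. The starting weights and the choice of receiving category shown here are hypothetical placeholders; the finalized weights and re-weighting rules are specified elsewhere in this final rule with comment period.

```python
# Illustrative sketch only: assigning a weight of zero to a performance
# category (for example, advancing care information for a hospital-based MIPS
# eligible clinician) when there are not sufficient measures applicable and
# available, and shifting the freed weight to another category. The weights
# and the receiving category below are hypothetical examples, not the rule's
# definitive values.

def reweight_to_zero(weights, zeroed_category, receiving_category):
    """Return a new weight map with `zeroed_category` set to 0.0 and its
    former weight added to `receiving_category`; the total stays at 1.0."""
    reweighted = dict(weights)
    reweighted[receiving_category] += reweighted[zeroed_category]
    reweighted[zeroed_category] = 0.0
    return reweighted


# Hypothetical example weights (for illustration only).
example = {"quality": 0.60, "cost": 0.00,
           "improvement_activities": 0.15, "advancing_care_information": 0.25}
result = reweight_to_zero(example, "advancing_care_information", "quality")
# quality absorbs the freed 0.25 (becoming 0.85);
# advancing_care_information becomes 0.0.
print(result)
```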

    Comment: Some commenters expressed concern regarding the exclusion of pharmacists under MIPS and APMs, and indicated that the payment models would prevent program goals from being met unless all practitioners, including pharmacists, are effectively integrated into team-based care. A few commenters noted that pharmacists are medication-use experts in the health care system, and directly contribute toward many of the quality measures under both MIPS and Advanced APMs. Because pharmacists are neither MIPS eligible clinicians nor required practitioners under APMs, pharmacist expertise and contributions may be underutilized and/or unavailable to certain patients. A few commenters recommended that the definition of a MIPS eligible clinician include pharmacists given that they are a critical part of a patient care team, in which they can provide a broad array of services to patients and have a role in optimizing patient health outcomes as the number and complexity of medications continues to rise. One commenter recommended that the Quality Payment Program include metrics and payment methodologies that recognize services provided by pharmacists and align with other CMS and CDC programs.

    Response: We appreciate the suggestions from the commenters. We note that we do not have discretion under the statute to include clinicians who do not meet the definition of a MIPS eligible clinician. Thus, pharmacists would not be able to participate in MIPS.

    Comment: One commenter requested that CMS clarify whether or not MIPS requirements would apply to clinicians who are not Medicare-enrolled eligible clinicians. Another commenter expressed concern that the proposed rule did not address how MIPS payment adjustments would be applied for clinicians who are not Medicare-enrolled eligible clinicians.

    Response: We note that clinicians who are included in the definition of a MIPS eligible clinician and not otherwise excluded are required to report under MIPS. However, a clinician who is not included in the definition of a MIPS eligible clinician can voluntarily report under MIPS and would not be subject to the MIPS payment adjustment. Also, we note that eligible clinicians who are not Medicare-enrolled eligible clinicians are not required to participate in MIPS, and would not be subject to the MIPS payment adjustment given that the MIPS payment adjustment is applied to Medicare Part B payments for items and services furnished by a MIPS eligible clinician.

    Comment: One commenter requested information on how locum tenens clinicians will be assessed under MIPS.

    Response: As previously noted, locum tenens clinicians bill for the items and services they furnish using the NPI of the clinician for whom they are substituting and, as such, do not bill Medicare in their own right for the items and services they furnish. As such, locum tenens clinicians are not MIPS eligible clinicians when they practice in that capacity.

    Comment: One commenter noted that facility-based clinicians in California face unique challenges under state law and recommended that rather than automatically using an eligible clinician's facility's performance as a proxy for the quality and cost performance categories as proposed, CMS should develop a voluntary option to allow eligible clinicians who meet criteria to be considered a facility-based clinician.

    Response: We appreciate the suggestions from the commenter and will consider them as we develop policies for applying a facility's performance to a MIPS eligible clinician or group.

    Comment: One commenter suggested that the types of eligible clinicians who are not included in the definition of a MIPS eligible clinician in 2017 and who have been submitting PQRS measures for years, should be allowed to voluntarily participate in 2017 and earn MIPS payment adjustments if they complete a successful attestation.

    Response: We thank the commenter for their suggestion and note that clinicians not included in the definition of a MIPS eligible clinician have the option to voluntarily report on applicable measures and activities under MIPS. However, the statute does not permit such clinicians to be subject to the MIPS payment adjustment. Should we expand the definition of a MIPS eligible clinician in future rulemaking, such clinicians may be able to earn MIPS payment adjustments beginning as early as the 2021 payment year.

    Comment: A few commenters recommended that certified anesthesiologist assistants be included in the definition of a MIPS eligible clinician. One commenter stated that such inclusion would provide the clarification that certified anesthesiologist assistants are health care providers, increase the amount of quality reporting under MIPS, and ensure certified anesthesiologist assistant participation in APMs. The commenter noted that if certified anesthesiologist assistants are not included in the definition of a MIPS eligible clinician, patient access to care would be restricted. Another commenter requested clarification regarding whether or not anesthesiologist assistants would be excluded from MIPS reporting in 2017.

    Response: We appreciate the suggestion from the commenters and note that section 1861(bb)(2) of the Act specifies that the term “certified registered nurse anesthetist” includes an anesthesiologist assistant. Thus, anesthesiologist assistants are considered eligible for MIPS beginning with the CY 2017 performance period.

    Comment: One commenter requested that audiologists remain active stakeholders in the MIPS implementation process, although they may not be included in the program until year 3.

    Response: We appreciate the recommendation from the commenter and note that we are committed to actively engaging with all stakeholders during the development and implementation of MIPS.

    Comment: One commenter suggested that CPC+ clinicians should be waived from MIPS if the group TIN is participating in CPC+.

    Response: We appreciate the suggestion from the commenter, but note that the exclusions in this final rule with comment period only pertain to new Medicare-enrolled eligible clinicians, QPs and Partial QPs who do not report on applicable MIPS measures and activities, and eligible clinicians who do not exceed the low-volume threshold. We refer readers to section II.E.5.h. of this final rule with comment period, which describes the APM scoring standard for MIPS eligible clinicians participating in MIPS APMs; such provisions are applicable to MIPS eligible clinicians participating in CPC+.

    Comment: One commenter requested that CMS allow psychiatrists who participate in ACOs or who work at least 30 percent of their time in eligible integrated care settings to opt out of the reporting requirements to avoid a negative MIPS payment adjustment. Another commenter recommended that CMS exempt from the definition of a MIPS eligible clinician those clinicians participating in all Alternative Payment Models defined in Category 3 of the HCPLAN Alternative Payment Models Framework. The commenter indicated that the exemption should include all upside-gain sharing only models defined in the Framework, including patient-centered medical home models, bundled payment models, and episode of care models.

    Response: We note that the statute only allows for certain exclusions from MIPS, two of which are for QPs and Partial QPs. Participating in an APM or other innovative payment model is not in itself sufficient for an eligible clinician to become a QP or Partial QP. As described in section II.F. of this final rule with comment period, only eligible clinicians who are identified on CMS-maintained lists as participants in Advanced APMs and meet the relevant QP or Partial QP threshold may become QPs or Partial QPs.

    After consideration of the public comments we received, we are finalizing the following policies. We are finalizing the definition at § 414.1305 of a MIPS eligible clinician, as identified by a unique billing TIN and NPI combination used to assess performance, as any of the following (excluding those identified at § 414.1310(b)): A physician (as defined in section 1861(r) of the Act), a physician assistant, nurse practitioner, and clinical nurse specialist (as such terms are defined in section 1861(aa)(5) of the Act), a certified registered nurse anesthetist (as defined in section 1861(bb)(2) of the Act), and a group that includes such clinicians. We are finalizing our proposed policies at § 414.1310(b) and (c) that QPs, Partial QPs who do not report on applicable measures and activities that are required to be reported under MIPS for any given performance period in a year, low-volume threshold eligible clinicians, and new Medicare-enrolled eligible clinicians as defined at § 414.1305 are excluded from this definition per the statutory exclusions defined in section 1848(q)(1)(C)(ii) and (v) of the Act. In accordance with section 1848(q)(1)(A) and (q)(1)(C)(vi) of the Act, we are finalizing our proposal at § 414.1310(b)(2) to allow eligible clinicians (as defined at § 414.1305) who are not MIPS eligible clinicians the option to voluntarily report measures and activities for MIPS. Additionally, we are finalizing our proposal at § 414.1310(d) that in no case will a MIPS payment adjustment apply to the items and services furnished during a year by individual eligible clinicians, as described in paragraphs (b) and (c) of this section, who are not MIPS eligible clinicians including eligible clinicians who are not MIPS eligible clinicians, but who voluntarily report on applicable measures and activities specified under MIPS.
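    For illustration only, the finalized eligibility logic summarized above (the included clinician types, the statutory exclusions, and the treatment of voluntary reporters) can be sketched as follows. The data structures, field names, and function names are hypothetical and do not correspond to any CMS system or regulation text.

```python
# Hedged sketch of the finalized definition and exclusions summarized in the
# paragraph above. A clinician of an included type is a MIPS eligible
# clinician unless a statutory exclusion applies; other clinicians may
# voluntarily report but are not subject to the MIPS payment adjustment.
# All names are illustrative placeholders.

from dataclasses import dataclass

MIPS_ELIGIBLE_TYPES = {
    "physician",                                # section 1861(r) of the Act
    "physician assistant",
    "nurse practitioner",
    "clinical nurse specialist",
    "certified registered nurse anesthetist",   # section 1861(bb)(2) of the Act
}


@dataclass
class Clinician:
    clinician_type: str
    is_qp: bool = False                      # Qualifying APM Participant
    is_partial_qp_not_reporting: bool = False
    is_new_medicare_enrolled: bool = False
    exceeds_low_volume_threshold: bool = True


def is_mips_eligible(c):
    """Return True if the clinician meets the definition of a MIPS eligible
    clinician and none of the statutory exclusions applies."""
    if c.clinician_type not in MIPS_ELIGIBLE_TYPES:
        return False  # may voluntarily report, but no MIPS payment adjustment
    excluded = (c.is_qp
                or c.is_partial_qp_not_reporting
                or c.is_new_medicare_enrolled
                or not c.exceeds_low_volume_threshold)
    return not excluded
```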

    b. Non-Patient Facing MIPS Eligible Clinicians

    Section 1848(q)(2)(C)(iv) of the Act requires the Secretary, in specifying measures and activities for a performance category, to give consideration to the circumstances of professional types (or subcategories of those types determined by practice characteristics) who typically furnish services that do not involve face-to-face interaction with a patient. To the extent feasible and appropriate, the Secretary may take those circumstances into account and apply alternative measures or activities that fulfill the goals of the applicable performance category to such non-patient facing MIPS eligible clinicians. In carrying out these provisions, we are required to consult with non-patient facing MIPS eligible clinicians.

    In addition, section 1848(q)(5)(F) of the Act allows the Secretary to re-weight MIPS performance categories if there are not sufficient measures and activities applicable and available to each type of MIPS eligible clinician. We assume many non-patient facing MIPS eligible clinicians will not have sufficient measures and activities applicable and available to report under the performance categories under MIPS. We refer readers to section II.E.6.b.(2) of this final rule with comment period for the discussion regarding how we addressed performance categories weighting for MIPS eligible clinicians for whom no measures exist in a given category.

    To establish policies surrounding non-patient facing MIPS eligible clinicians, we must first define the term “non-patient facing.” Currently, the PQRS, VM, and Medicare EHR Incentive Program include two existing policies for considering whether an EP is providing patient-facing services. To determine, for purposes of PQRS, whether an EP had a “face-to-face” encounter with Medicare patients, we assess whether the EP billed for services under the PFS that are associated with face-to-face encounters, such as whether an EP billed general office visit codes, outpatient visits, and surgical procedures. Under PQRS, if an EP bills for at least one service under the PFS during the performance period that is associated with face-to-face encounters and reports quality measures via claims or registries, then the EP is required to report at least one “cross-cutting” measure. EPs who do not meet these criteria are not required to report a cross-cutting measure. For the purposes of PQRS, telehealth services have not historically been included in the definition of face-to-face encounters. For more information, please see the CY 2016 PFS final rule for these discussions (80 FR 71140).

    In the Stage 2 final rule (77 FR 54098 through 54099), the Medicare EHR Incentive Program established a significant hardship exception from the meaningful use payment adjustment under section 1848(a)(7)(A) of the Act for EPs who lack face-to-face interactions with patients and those who lack the need to follow up with patients. EPs with a primary specialty of anesthesiology, pathology, or radiology listed in the Provider Enrollment, Chain, and Ownership System (PECOS) as of 6 months prior to the first day of the payment adjustment year automatically receive this hardship exemption (77 FR 54100). Specialty codes associated with these specialties include 05-Anesthesiology, 22-Pathology, 30-Diagnostic Radiology, 36-Nuclear Medicine, and 94-Interventional Radiology. EPs with a different specialty are also able to request this hardship exception through the hardship application process. However, telehealth services could be counted by EPs who choose to include these services within the definition of "seen by the EP" for the purposes of calculating patient encounters under the EHR Incentive Program (77 FR 53982).
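    To make the automatic hardship exemption mechanics concrete, the following is a minimal sketch, in Python, of the specialty-code check described above. The specialty codes are the ones listed in the preceding paragraph; the function name and record layout are illustrative assumptions, not part of the Medicare EHR Incentive Program's actual systems.

        AUTO_EXEMPT_SPECIALTY_CODES = {
            "05",  # Anesthesiology
            "22",  # Pathology
            "30",  # Diagnostic Radiology
            "36",  # Nuclear Medicine
            "94",  # Interventional Radiology
        }

        def automatically_exempt(pecos_primary_specialty_code: str) -> bool:
            # Hypothetical check: an EP whose PECOS primary specialty (as of 6 months
            # before the payment adjustment year) is one of the listed codes receives
            # the meaningful use hardship exemption without filing an application.
            return pecos_primary_specialty_code.strip() in AUTO_EXEMPT_SPECIALTY_CODES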

    In the MIPS and APMs RFI (80 FR 63484), we sought comments on MIPS eligible clinicians that should be considered non-patient facing MIPS eligible clinicians and the criteria we should use to identify these MIPS eligible clinicians. Commenters were split when it came to defining and identifying non-patient facing MIPS eligible clinicians. Many took a specialty-driven approach. Commenters generally did not support use of specialty codes alone, which is the approach used by the Medicare EHR Incentive Program. Commenters indicated that these codes do not necessarily distinguish, within the same specialty, between specialists who may or may not have patient-facing interaction. One example is cardiologists who specialize in cardiovascular imaging, which is also coded as cardiology. On the other hand, as one commenter mentioned, physicians with specialty codes other than “cardiology” (for example, internal medicine) may perform cardiovascular imaging services. Therefore, using the specialty code for cardiology to identify clinicians who typically do not provide patient-facing services would be both over-inclusive and under-inclusive. Other commenters identified specialty types that they believe should be considered non-patient facing MIPS eligible clinicians. Specific specialty types included radiologists, anesthesiologists, nuclear cardiology or nuclear medicine physicians, and pathologists. Others pointed out that certain MIPS eligible clinicians may be primarily non-patient facing MIPS eligible clinicians even though they practice within a traditionally patient-facing specialty. The MIPS and APMs RFI comments and listening sessions with medical societies representing non-patient facing MIPS eligible clinicians specified radiology/imaging, anesthesiology, nuclear cardiology and oncology, and pathology as inclusive of non-patient facing MIPS eligible clinicians. Commenters noted that roles within specific types of specialties may need to be further delineated between patient-facing and non-patient facing MIPS eligible clinicians. An illustrative list of specific types of clinicians within the non-patient facing spectrum includes:

    • Pathologists who may be primarily dedicated to working with local hospitals to identify early indicators related to evolving infectious diseases;

    • Radiologists who primarily provide consultative support back to a referring physician or provide image interpretation and diagnosis versus therapy;

    • Nuclear medicine physicians who play an indirect role in patient care, for example as a consultant to another physician in proper dose administration; or

    • Anesthesiologists who primarily provide supervisory oversight to Certified Registered Nurse Anesthetists.

    After reviewing current policies, we proposed to define a non-patient facing MIPS eligible clinician for MIPS at § 414.1305 as an individual MIPS eligible clinician or group that bills 25 or fewer patient-facing encounters during a performance period. We considered a patient-facing encounter as an instance in which the MIPS eligible clinician or group billed for services such as general office visits, outpatient visits, and procedure codes under the PFS. We intend to publish the list of patient-facing encounter codes on a CMS Web site similar to the way we currently publish the list of face-to-face encounter codes for PQRS. This proposal differs from the current PQRS policy in two ways. First, it creates a minimum threshold for the quantity of patient-facing encounters that MIPS eligible clinicians or groups would need to furnish to be considered patient-facing, rather than classifying MIPS eligible clinicians as patient-facing based on a single patient-facing encounter. Second, this proposal includes telehealth services in the definition of patient-facing encounters.
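    As an illustration of the proposed classification logic, the following Python sketch counts billed patient-facing encounters per TIN/NPI combination and applies the proposed 25-encounter test. The claims record layout and the encounter-code set are placeholders; the authoritative list of patient-facing encounter codes is the one we intend to publish on the CMS Web site.

        from collections import Counter
        from typing import Iterable, Set, Tuple

        PROPOSED_THRESHOLD = 25  # proposed: 25 or fewer billed patient-facing encounters

        def count_patient_facing_encounters(
            claims: Iterable[Tuple[str, str, str]],   # hypothetical (tin, npi, hcpcs_code) records
            patient_facing_codes: Set[str],           # published encounter codes, including telehealth
        ) -> Counter:
            # Count billed patient-facing encounters for each TIN/NPI combination.
            counts = Counter()
            for tin, npi, hcpcs in claims:
                if hcpcs in patient_facing_codes:
                    counts[(tin, npi)] += 1
            return counts

        def is_non_patient_facing(encounter_count: int, threshold: int = PROPOSED_THRESHOLD) -> bool:
            # Non-patient facing if the clinician bills the threshold number of
            # patient-facing encounters or fewer during the performance period.
            return encounter_count <= threshold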

    We believed that setting the non-patient facing MIPS eligible clinician threshold for an individual MIPS eligible clinician or group at 25 or fewer billed patient-facing encounters during a performance period was appropriate. We selected this threshold based on an analysis of non-patient facing Healthcare Common Procedure Coding System (HCPCS) codes billed by MIPS eligible clinicians. Using these codes and this threshold, we identified approximately one quarter of MIPS eligible clinicians as non-patient facing before MIPS exclusions, such as low-volume and newly-enrolled eligible clinician policies, were applied. The majority of clinicians enrolled in Medicare with specialties such as anesthesiology, nuclear medicine, and pathology were identified as non-patient facing in this analysis. The addition of telehealth to the analysis did not affect the outcome, as it created a less than 0.01 percent change in MIPS eligible clinicians categorized as non-patient facing.

    Therefore, the proposed approach allows the definition of non-patient facing MIPS eligible clinicians to include both MIPS eligible clinicians who practice within specialties traditionally considered non-patient facing and MIPS eligible clinicians who provide occasional patient-facing services that do not represent the bulk of their practices. This definition is also consistent with the statutory requirement that refers to professional types who typically furnish services that do not involve face-to-face interaction with a patient.

    In response to the MIPS and APMs RFI, some commenters believed that MIPS eligible clinicians should be defined as non-patient facing MIPS eligible clinicians based on whether their billing indicates they provide face-to-face services. Commenters indicated that the use of specific HCPCS codes in combination with specialty codes may be a more appropriate way to identify MIPS eligible clinicians that have no patient interaction.

    We also proposed to include telehealth services in the definition of patient-facing encounters. Various MIPS eligible clinicians use telehealth services as an innovative way to deliver care to beneficiaries and we believe these services, while not furnished in-person, should be recognized as patient-facing. In addition, Medicare eligible telehealth services substitute for an in-person encounter and meet other site requirements under the PFS as defined at § 410.78.

    The proposed addition of the encounter threshold for patient-facing MIPS eligible clinicians was intended to minimize concerns that a MIPS eligible clinician could be misclassified as patient-facing as a result of providing occasional telehealth services that do not represent the bulk of their practice. Finally, we believed that this proposed definition of a non-patient facing MIPS eligible clinician for MIPS could be consistently used throughout the MIPS program to identify those MIPS eligible clinicians for whom certain proposed requirements for patient-facing MIPS eligible clinicians (such as reporting cross-cutting measures) may not be meaningful.

    We weighed several options when considering the appropriate definition of non-patient facing MIPS eligible clinicians for MIPS, and some options were similar to those we considered in implementing the Medicare EHR Incentive Program. One option we considered was basing the definition of a non-patient facing MIPS eligible clinician on a set percentage of patient-facing encounters, such as 5 to 10 percent, that was tied to the same list of patient-facing encounter codes discussed in this section of this final rule with comment period. Another option we considered was the identification of non-patient facing MIPS eligible clinicians for MIPS only by specialty, which might be a simpler approach. However, we did not consider this approach sufficient for identifying all the possible non-patient facing MIPS eligible clinicians, as some non-patient facing MIPS eligible clinicians practice in multi-specialty practices alongside patient-facing MIPS eligible clinicians with different specialties. We would likely have had to develop a separate process to identify non-patient facing MIPS eligible clinicians in other specialties, whereas maintaining a single definition that is aligned across performance categories is simpler. Many comments from the MIPS and APMs RFI discouraged use of specialty codes alone. Additionally, we believed our proposal would allow us to more accurately identify MIPS eligible clinicians who are non-patient facing by applying a threshold to recognize that a MIPS eligible clinician who furnishes almost exclusively non-patient facing services should be treated as a non-patient facing MIPS eligible clinician despite furnishing a small number of patient-facing services.

    In the MIPS and APMs RFI (80 FR 63484), we also requested comments on what types of measures and/or improvement activities (new or from other payment systems) we should use to assess non-patient facing MIPS eligible clinicians' performance and how we should apply the MIPS performance categories to non-patient facing MIPS eligible clinicians. Commenters were split on these subjects. A number of commenters stated that non-patient facing MIPS eligible clinicians should be exempt from specific performance categories under MIPS or should be exempt from MIPS as a whole. Commenters who did not favor exemptions generally suggested that we focus on process measures and work with specialty societies to develop new, more clinically relevant measures for non-patient facing MIPS eligible clinicians.

    We took these stakeholder comments into consideration. We note that section 1848(q)(2)(C)(iv) of the Act does not grant the Secretary discretion to exempt non-patient facing MIPS eligible clinicians from a performance category entirely, but rather to apply, to the extent feasible and appropriate, alternative measures or activities that fulfill the goals of the applicable performance category. However, we have put safeguards in place to ensure that MIPS eligible clinicians, including those who are non-patient facing, who do not have sufficient alternative measures that are applicable and available in a performance category are scored appropriately. We proposed to apply the Secretary's authority under section 1848(q)(5)(F) of the Act to re-weight a performance category score to zero if there is no performance category score, or to lower the weight of the quality performance category score if there are not at least three scored measures. Please refer to section II.E.6.b.(2)(b) in the proposed rule for details on the re-weighting proposals. Accordingly, we proposed alternative requirements for non-patient facing MIPS eligible clinicians across the proposed rule (see sections II.E.5.b., II.E.5.e., and II.E.5.f. of the proposed rule for more details). While non-patient facing MIPS eligible clinicians will not be exempt from any performance category under MIPS, we believe these alternative requirements fulfill the goals of the applicable performance categories and are in line with the commenters' desire to ensure that non-patient facing MIPS eligible clinicians are not placed at an unfair disadvantage under the new program. The requirements also build on prior program components in meaningful ways and are meant to help us appropriately assess and incentivize non-patient facing MIPS eligible clinicians. We requested comments on these proposals.

    The following is a summary of the comments we received regarding our proposal that defines non-patient facing MIPS eligible clinicians for MIPS as an individual MIPS eligible clinician or group that bills 25 or fewer patient-facing encounters (including telehealth services) during a performance period.

    Comment: A few commenters supported the proposed definition of non-patient facing MIPS eligible clinicians.

    Response: We appreciate the support from commenters.

    Comment: One commenter requested that pathologists (as identified in PECOS) be automatically identified as non-patient facing MIPS eligible clinicians at the beginning of each year. The commenter noted that it seems reasonable to use PECOS to identify non-patient facing specialties.

    Response: We appreciate the commenter expressing the importance of identifying MIPS eligible clinicians as non-patient facing MIPS eligible clinicians at the beginning of each year. We believe that it would be beneficial for individual MIPS eligible clinicians and groups to know in advance of a performance period whether or not they qualify as a non-patient facing MIPS eligible clinician. For purposes of this section, we are coining the term “non-patient facing determination period” to refer to the timeframe used to assess claims data for making eligibility determinations regarding non-patient facing status. We define the non-patient facing determination period to mean a 24-month assessment period, which includes a two-segment analysis of claims data regarding patient-facing encounters during an initial 12-month period prior to the performance period followed by another 12-month period during the performance period.

    The initial 12-month segment of the non-patient facing determination period would span from the last 4 months of a calendar year 2 years prior to the performance period followed by the first 8 months of the next calendar year and include a 60-day claims run out, which will allow us to inform eligible clinicians and groups of their non-patient facing status during the month (December) prior to the start of the performance period. We believe that the initial non-patient facing determination period enables us to make eligibility determinations based on 12 months of data that is as close to the performance period as possible while informing eligible clinicians of their non-patient facing status prior to the performance period. The second 12-month segment of the non-patient facing determination period would span from the last 4 months of a calendar year 1 year prior to the performance period followed by the first 8 months of the performance period in the next calendar year and include a 60-day claims run out, which will allow us to inform additional eligible clinicians and groups of their non-patient facing status during the performance period.

    Thus, for purposes of the 2019 MIPS payment adjustment, we will initially identify individual eligible clinicians and groups who are considered non-patient facing MIPS eligible clinicians based on 12 months of data starting from September 1, 2015 to August 31, 2016. In order to account for the identification of additional individual eligible clinicians and groups that may qualify as non-patient facing during the 2017 performance period, we will conduct another eligibility determination analysis based on 12 months of data starting from September 1, 2016 to August 31, 2017.
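    The two 12-month segments and the 60-day claims run out can be expressed as simple date arithmetic. The following Python sketch derives the windows from the performance-period year, consistent with the September 1, 2015 to August 31, 2016 and September 1, 2016 to August 31, 2017 segments described above for the 2017 performance period; the function name and output format are illustrative only.

        from datetime import date, timedelta

        CLAIMS_RUN_OUT = timedelta(days=60)

        def determination_windows(performance_year: int):
            # First segment: last 4 months of the year 2 years before the performance
            # period plus the first 8 months of the following year; the second segment
            # is shifted forward by one year and overlaps the performance period itself.
            first = (date(performance_year - 2, 9, 1), date(performance_year - 1, 8, 31))
            second = (date(performance_year - 1, 9, 1), date(performance_year, 8, 31))
            return [(start, end, end + CLAIMS_RUN_OUT) for start, end in (first, second)]

        # Example: the 2017 performance period yields windows ending August 31, 2016 and
        # August 31, 2017, so initial status can be announced in December 2016.
        for start, end, run_out_end in determination_windows(2017):
            print(start, end, run_out_end)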

    Comment: One commenter requested that CMS consider allowing physicians in other specialties to declare by exception that they deserve a similar exemption as those that are identified in the proposed rule as non-patient facing MIPS eligible clinicians, which can be confirmed by CMS through coding analysis.

    Response: We disagree with the approach described by the commenter because the statute does not provide discretion in establishing exclusions other than the three exclusions specified in section II.E.3. of this final rule with comment period. Also, we note that non-patient facing MIPS eligible clinicians are identified based on an analysis we conduct using claims data to determine such status; this is not a status that clinicians make an election for purposes of MIPS.

    Comment: Many commenters expressed concerns that the threshold set forth in the proposed definition of a non-patient facing MIPS eligible clinician (for example, an individual MIPS eligible clinician or group that bills 25 or fewer patient-facing encounters during a performance period) was too low. The commenters believed that many clinicians in certain specialties would be classified as patient-facing even though clinicians in those specialties are predominately non-patient facing. One commenter stated that MIPS eligible clinicians with such a low number of patient-facing encounters may not realize they would be considered patient-facing and subject to additional reporting requirements. Many commenters recommended alternative options for establishing a threshold relating to the billing of patient-facing encounters, including the following: A threshold of 50 or fewer patient-facing encounters; a threshold of 100 or fewer patient-facing encounters, which would represent a somewhat larger portion of the MIPS eligible clinician's practice, averaging approximately two patient-facing encounters per week; and a threshold of 150 or fewer billed Medicare patient-facing encounters. Other commenters suggested that CMS consider automatically designating certain specialties, such as anesthesiology or radiology, as non-patient facing unless a clinician in such specialty bills more than 100 patient-facing encounters. One commenter suggested that CMS base the threshold on a percentage of patients seen (for example, 80 percent of services furnished are determined to be non-patient facing) or claims or allowed charges (for example, 85 percent of claims or charges are for non-patient facing services), or a combination of the two percentage-based options.

    Response: We thank the commenters for expressing their concerns and recommendations regarding the proposed threshold used to define a non-patient facing MIPS eligible clinician. Based on the comments indicating that the proposed threshold would misclassify certain specialties that are predominately non-patient facing, and in order to more accurately identify MIPS eligible clinicians who are non-patient facing, we are modifying our proposal and increasing the threshold used to determine when a MIPS eligible clinician is considered non-patient facing. Therefore, we are finalizing a modification to our proposal to define a non-patient facing MIPS eligible clinician as an individual MIPS eligible clinician that bills 100 or fewer patient-facing encounters (including Medicare telehealth services defined in section 1834(m) of the Act) during the non-patient facing determination period, and a group provided that more than 75 percent of the NPIs billing under the group's TIN meet the definition of a non-patient facing individual MIPS eligible clinician during the non-patient facing determination period. We believe that a threshold of 100 or fewer billed patient-facing encounters more accurately differentiates, based on annual patient-facing encounters, between MIPS eligible clinicians who furnish a majority of patient-facing services and are considered patient-facing and MIPS eligible clinicians who provide occasional patient-facing services that do not reflect the bulk of services provided by the practice or who would traditionally be considered non-patient facing. This modified threshold, which applies at the individual level, would reduce the risk of identifying individual MIPS eligible clinicians as patient-facing who would otherwise be considered non-patient facing. Similarly, the modified threshold that applies at the group level, as previously noted, would reduce the risk of identifying groups as patient-facing that would otherwise be considered non-patient facing. Also, we considered increasing the threshold based on different approaches. As previously described, one option was basing the definition of a non-patient facing MIPS eligible clinician on a set percentage of patient-facing encounters, such as 5 to 10 percent, that was tied to the same list of patient-facing encounter codes discussed in this section of the final rule with comment period. We did not pursue this approach because a percentage would not apply consistently, which could miscategorize MIPS eligible clinicians who would otherwise be considered patient-facing. Another option we considered was the identification of non-patient facing MIPS eligible clinicians only by specialty, which might be a simpler approach. However, we did not consider this approach sufficient for identifying all the possible non-patient facing MIPS eligible clinicians, as some non-patient facing MIPS eligible clinicians practice in multi-specialty practices alongside patient-facing MIPS eligible clinicians with different specialties. We would likely have had to develop a separate process to identify non-patient facing MIPS eligible clinicians in other specialties, whereas maintaining a single definition that is aligned across performance categories is simpler. Thus, we did not modify our approach along these lines.

    Comment: In regard to the illustrative list of specific types of clinicians within the non-patient facing spectrum outlined in the proposed rule, one commenter requested that CMS remove the reference to anesthesiologist supervision and ensure that the Quality Payment Program would not impose any unnecessary supervision. The commenter noted that physician supervision of nurse anesthetists did not improve care outcomes and was therefore unnecessary. Another commenter stated that most anesthesiologists should be designated as non-patient facing and recommended that CMS reconsider the non-patient facing determination criteria while another commenter requested that CMS ensure the equal treatment of certified registered nurse anesthetists and anesthesiologists when determining who qualifies as a non-patient facing MIPS eligible clinician. One commenter suggested that CMS publish the list of patient-facing services as quickly as possible in order for anesthesiologists to determine if they are considered non-patient facing MIPS eligible clinicians. The commenter requested that CMS provide details on how it estimated that a majority of anesthesiologists would qualify as non-patient facing.

    Response: We appreciate the suggestions from commenters regarding the types of MIPS eligible clinicians to be considered non-patient facing. We want to clarify that our proposed definition of a non-patient facing MIPS eligible clinician did not include the identification of any specific type of physician or clinician specialty, and note that the statutory definition of an anesthesiologist does not specify supervision as a requirement. However, our proposed definition of a non-patient facing MIPS eligible clinician is based on a methodology that would allow us to more accurately identify MIPS eligible clinicians who are non-patient facing by applying a threshold to recognize that a MIPS eligible clinician who furnishes almost exclusively non-patient facing services should be treated as a non-patient facing MIPS eligible clinician despite furnishing a small number of patient-facing services. Our methodology used to identify non-patient facing MIPS eligible clinicians included a quantitative, comparative analysis of claims and HCPCS code data. Contrary to the commenter's belief, we believe that our proposed definition of a non-patient facing clinician would not capture the majority of MIPS eligible clinicians or groups within specialties such as anesthesiology, pathology, radiology, and nuclear medicine who may provide a small portion of services that would be considered patient-facing, but would otherwise be considered non-patient facing MIPS eligible clinicians. As a result, we are finalizing a modification to our proposed definition of a non-patient facing MIPS eligible clinician. As previously noted, we will identify MIPS eligible clinicians who are considered non-patient facing in advance of the performance period.

    Comment: One commenter requested that MIPS eligible clinicians within the interventional pain management specialty be exempt from negative, but not positive, MIPS payment adjustments. The commenter noted that MIPS will destroy independent practices and increase the costs of Medicare, making Medicare insolvent even sooner than expected.

    Response: We thank the commenter for the suggestion. We note that the statute does not grant the Secretary discretion to exclude non-patient facing MIPS eligible clinicians from the requirement to participate in MIPS. However, non-patient facing MIPS eligible clinicians will benefit from other policies that we are finalizing throughout this final rule with comment period, such as reduced performance requirements and a lower performance threshold. Accordingly, we describe alternative requirements for non-patient facing MIPS eligible clinicians across this final rule with comment period (see sections II.E.5.b., II.E.5.e., and II.E.5.f. of this final rule with comment period for more details). We disagree with the comment regarding MIPS negatively impacting independent practices. We believe that independent practices will benefit from other policies that we are finalizing throughout this final rule with comment period, such as reduced performance requirements and a lower performance threshold.

    Comment: One commenter requested that CMS abandon the term “non-patient facing” in reference to MIPS eligible clinicians or physician specialties. The commenter indicated that the patient-facing/non-patient facing terminology is appropriate for describing Current Procedural Terminology (CPT) codes, but not appropriate for describing a clinician relative to quality improvement. Another commenter recommended that CMS consider an alternative term to “non-patient facing” as it applies to anesthesiologists. One commenter expressed concern that the term non-patient facing diminishes the importance of specialists.

    Response: We appreciate the commenters expressing their concerns regarding the use of the term “non-patient facing.” As a result of these concerns, we are interested in obtaining further input from stakeholders regarding potential terms that could be used to describe “non-patient facing” under MIPS. Therefore, we are seeking additional comment, for future consideration, on modifying the terminology used to reference “non-patient facing” MIPS eligible clinicians. What alternative terms could be used to describe “non-patient facing”?

    Comment: One commenter indicated that the proposed definition of non-patient facing clinicians is overly stringent and does not recognize a number of “hybrid” physicians, such as nuclear cardiologists, who split time between patient-facing and non-patient facing activity. The commenter requested an alternative pathway for “hybrid” physicians in order for nuclear cardiologists and others to successfully participate in MIPS, which is important for medical specialists with no alternative payment models. As an interim solution, the commenter requested that the reporting period be shortened and that there be flexibility for MIPS eligible clinicians to select the reporting period within the applicable calendar year.

    Response: We thank the commenter for expressing concerns and recognize that MIPS eligible clinicians in certain specialties may not have a majority of their services categorized as non-patient facing. We want to ensure that MIPS eligible clinicians, including non-patient facing MIPS eligible clinicians are able to participate in MIPS successfully and thus, in this final rule with comment period, we not only establish requirements for MIPS eligible clinicians in each performance category, but we apply, to the extent feasible and appropriate, alternative measures or activities that fulfill the goals of each performance category. In sections II.E.5.b., II.E.5.e., and II.E.5.f. of this final rule with comment period, we describe the alternative requirements for non-patient facing MIPS eligible clinicians. Also, as described in section II.E.4. of this final rule with comment period, we are finalizing a modification to the MIPS performance period to be a minimum of one continuous 90-day period within CY 2017.

    Comment: Several commenters indicated that the definition of a non-patient facing MIPS eligible clinician is inadequate since the definition is dependent on the codes that define patient-facing encounters, which are not yet available. The commenters requested that CMS provide the applicable CPT codes as soon as possible in order for affected MIPS eligible clinicians to have sufficient time to assess the alignment of the codes. One commenter recommended that only evaluation and management services (the denominators of the cross-cutting measures as specified in Table C: Proposed Individual Quality Cross-Cutting Measures for the MIPS to Be Available to Meet the Reporting Criteria Via Claims, Registry, and EHR Beginning in 2017 of the proposed rule (81 FR 28447 through 28449)) be considered when determining whether a MIPS eligible clinician provides face-to-face services. The commenter indicated that the inclusion of other services, particularly 000 global codes, will inappropriately classify many radiologists as patient-facing and put small and rural practices at a distinct disadvantage.

    Response: We thank the commenters for their support and for expressing their concerns. While we did not propose specific patient-facing encounter codes in the proposed rule, we considered a patient-facing encounter to be an instance in which the MIPS eligible clinician or group billed for items and services furnished, such as general office visits, outpatient visits, and procedure codes under the PFS. We agree with the commenters that a non-patient facing MIPS eligible clinician is identified based on evaluation and management services, which are reflected in the list of patient-facing encounter codes. We note that the codes used for determining the non-patient facing status of MIPS eligible clinicians are the same as the denominators of the cross-cutting measures, as specified in Table C of the proposed rule. Based on our experience with PQRS, we believe that the use of patient-facing encounter codes is the most appropriate approach for determining whether or not MIPS eligible clinicians are non-patient facing. We intend to publish a list of patient-facing encounters on the CMS Web site located at QualityPaymentProgram.cms.gov.

    In regard to the comment pertaining to misclassification, we note that the definition of non-patient facing MIPS eligible clinicians creates a minimum threshold for the quantity of patient-facing encounters that MIPS eligible clinicians or groups would need to furnish to be considered patient-facing, rather than classifying MIPS eligible clinicians as patient-facing based on a single patient-facing encounter. This approach allows for the definition of non-patient facing MIPS eligible clinicians to include both MIPS eligible clinicians who practice within specialties traditionally considered non-patient facing as well as MIPS eligible clinicians who provide occasional patient-facing services that do not represent the bulk of their practices. We believe our modified policy will allow us to more accurately identify MIPS eligible clinicians who are non-patient facing by applying a threshold in recognition of the fact that a MIPS eligible clinician who furnishes almost exclusively non-patient facing services should be treated as a non-patient facing MIPS eligible clinician despite furnishing a small number of patient-facing services.

    Comment: One commenter requested clarification on whether or not the definition of a patient-facing encounter includes procedures such as peripheral nerve blocks (64400-64530) and epidural injections (62310-62319).

    Response: We intend to publish the list of patient-facing encounters on the CMS Web site located at QualityPaymentProgram.cms.gov, which will include procedures such as peripheral nerve blocks (64400-64530) and epidural injections (62310-62319).

    Comment: One commenter requested that CMS justify how 25 or fewer patient-facing encounters was determined as the threshold for non-patient facing MIPS eligible clinicians.

    Response: As previously noted, we believed that setting the non-patient facing MIPS eligible clinician threshold for an individual MIPS eligible clinician or group at 25 or fewer billed patient-facing encounters during a performance period was appropriate. We selected this threshold based on an analysis of non-patient facing HCPCS codes billed by MIPS eligible clinicians. Using these codes and this threshold, we determined that approximately one quarter of MIPS eligible clinicians would be identified as non-patient facing before MIPS exclusions, such as the low-volume threshold and new Medicare-enrolled eligible clinician policies, were applied. Based on our analysis, a significant portion of clinicians enrolled in Medicare with specialties such as anesthesiology, nuclear medicine, and pathology were identified as non-patient facing. We believe that our approach allows the definition of non-patient facing MIPS eligible clinicians to include both MIPS eligible clinicians who practice within specialties traditionally considered non-patient facing and MIPS eligible clinicians who provide occasional patient-facing services that do not represent the bulk of their practices.

    However, as discussed above, we are finalizing a modification to our proposal to define a non-patient facing MIPS eligible clinician as an individual MIPS eligible clinician that bills 100 or fewer patient-facing encounters (including Medicare telehealth services defined in section 1834(m) of the Act) during the non-patient facing determination period, and a group provided that more than 75 percent of the NPIs billing under the group's TIN meet the definition of a non-patient facing individual MIPS eligible clinician during the non-patient facing determination period. When we applied our prior methodology to make determinations at the group level, the percentage of MIPS eligible clinicians classified as non-patient facing at the group level was higher because, at the group level, MIPS eligible clinicians with fewer than 100 encounters who would otherwise be considered patient-facing (for example, pediatricians) are included in the group-level calculation for the non-patient facing determination. Thus, there would be more specialists classified as non-patient facing when we make determinations at the group level, particularly when the percentage of specialists identified as non-patient facing at the group level is compared to the overall percentage of individual MIPS eligible clinicians. We note that the increase in the number of non-patient facing determinations occurs because individual MIPS eligible clinicians in groups who have fewer than 100 encounters, and who would otherwise be considered patient-facing, would be classified as non-patient facing.

    Comment: Several commenters disagreed with CMS's proposal to apply the same billing threshold for patient-facing encounters to both individual MIPS eligible clinicians and groups. One commenter noted that such a policy would require groups of non-patient facing MIPS eligible clinicians to report on inapplicable outcome and cross-cutting measures if several individuals' rare face-to-face patient encounters are summed as a group (for example, a group of 10 physicians with 2 to 3 face-to-face patient encounters per year per MIPS eligible clinician). Another commenter specifically indicated that if the proposed non-patient facing threshold is applied at a group level, specialties such as diagnostic radiology, pathology, nuclear medicine, and anesthesiology would be considered patient-facing even though practices in these specialties could be considered non-patient facing if evaluated individually.

    A few commenters indicated that when the proposed threshold is applied to groups without scaling the threshold by the number of clinicians in a group, a single individual clinician could push the entire group into the patient-facing category, even if the other individual clinicians in the group would, otherwise, be considered non-patient facing. One commenter indicated that the proposed definition of a non-patient facing MIPS eligible clinician would impact small and rural practices whose general radiologists perform more interventional procedures even though such patient-facing encounters represent only a very small fraction of the group's total Medicare services.

    Several commenters provided alternative options for determining how the definition of non-patient facing MIPS eligible clinicians could be applied to groups. One commenter suggested scaling the patient-facing encounter threshold by the number of clinicians in a group practice while another commenter suggested doing so by patient-facing encounter codes. A few other commenters recommended one or more of the following alternatives: (1) Apply a patient-facing encounter threshold that is proportional to the group size, and, for non-patient facing MIPS eligible clinicians who meet the definition, identify such MIPS eligible clinicians at the beginning of the performance year; (2) classify groups based on whether the majority of individual MIPS eligible clinicians meet the threshold; (3) compare a group's average number of patient-facing encounters to the threshold, where a group's average would be defined by the total number of patient-facing encounters billed by the group divided by the number of MIPS eligible clinicians in the group and as a result, would not be skewed by a few MIPS eligible clinicians; or (4) redefine a non-patient facing MIPS eligible clinician by using the threshold of 50 or fewer patient-facing encounters per individual such that, if 51 percent or more members of the group individually fall below the threshold, then the entire group is considered non-patient facing.

    Response: We thank the commenters for expressing their concerns regarding the proposed definition of a non-patient facing MIPS eligible clinician. Based on the comments received, we recognize that having a similar threshold applied at the individual and group levels would inadvertently identify groups composed of certain specialties or multi-specialties as patient-facing that would traditionally be considered non-patient facing or provide occasional patient-facing services that do not represent the bulk of their group. Thus, we are modifying our proposed definition of a non-patient facing MIPS eligible clinician to establish two separate thresholds that apply at the individual and group level.

    Specifically, we are modifying our proposal to define a non-patient facing MIPS eligible clinician for MIPS as an individual MIPS eligible clinician that bills 100 or fewer patient-facing encounters (including Medicare telehealth services defined in section 1834(m) of the Act) during the non-patient facing determination period, and a group provided that more than 75 percent of the NPIs billing under the group's TIN meet the definition of a non-patient facing individual MIPS eligible clinician during the non-patient facing determination period.
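    Expressed as pseudocode, the finalized individual- and group-level tests reduce to a count comparison and a percentage comparison. The following Python sketch assumes, for illustration only, a mapping of each NPI billing under a TIN to that NPI's count of patient-facing encounters during the non-patient facing determination period; the identifiers and data shapes are not CMS specifications.

        from typing import Dict

        INDIVIDUAL_THRESHOLD = 100   # 100 or fewer patient-facing encounters (including telehealth)
        GROUP_NPI_FRACTION = 0.75    # more than 75 percent of NPIs billing under the TIN

        def individual_is_non_patient_facing(encounter_count: int) -> bool:
            return encounter_count <= INDIVIDUAL_THRESHOLD

        def group_is_non_patient_facing(npi_encounter_counts: Dict[str, int]) -> bool:
            # A group (TIN) is non-patient facing if more than 75 percent of the NPIs
            # billing under the TIN individually meet the non-patient facing definition.
            if not npi_encounter_counts:
                return False
            qualifying = sum(
                1 for count in npi_encounter_counts.values()
                if individual_is_non_patient_facing(count)
            )
            return qualifying / len(npi_encounter_counts) > GROUP_NPI_FRACTION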

    In regard to the threshold applying at the group level, we recognize that groups vary in size and composition and thus, we believe that a percentage-based approach applies such a threshold equally across all types of groups. Also, we believe that a percentage-based threshold for groups is a more appropriate and accurate approach for distinguishing between groups composed of certain specialty or multi-specialty practices that should be considered non-patient facing. We are establishing a percentage-based threshold pertaining to groups above 75 percent in order to succinctly identify whether or not the majority of services furnished by groups are non-patient facing. We are specifying that more than 75 percent of the NPIs billing under the group's TIN would need to meet the definition of a non-patient facing individual MIPS eligible clinician in order for the group to be considered non-patient facing because such a threshold is applicable to any group size and composition and clearly delineates which groups furnish primarily non-patient facing services while remaining consistent with the individual-level threshold. For purposes of defining a non-patient facing MIPS eligible clinician as it relates to groups, we believe that more than 75 percent is an adequate percentage threshold. Based on the comments received regarding the establishment of a separate non-patient facing threshold for groups, we are seeking additional comment on our modified policy for future consideration, which determines that a group would be considered non-patient facing if more than 75 percent of the NPIs billing under the group's TIN meet the definition of a non-patient facing individual MIPS eligible clinician during the non-patient facing determination period.

    Comment: One commenter indicated that clarification is needed on how the requirements for each performance category would apply to clinicians who do not have face-to-face encounters with patients.

    Response: We refer readers to sections II.E.5.b., II.E.5.e., and II.E.5.f. of this final rule with comment period, which describe the requirements for each performance category pertaining to non-patient facing MIPS eligible clinicians.

    Comment: One commenter inquired about whether or not CMS would be able to distinguish claims for patient-facing encounters from claims for non-patient facing encounters to ensure that Part B claims for non-patient facing encounters are not subject to the MIPS payment adjustment.

    Response: The statute makes it clear that the MIPS payment adjustment applies to the amount otherwise paid under Medicare Part B for items and services furnished by a MIPS eligible clinician during a year. We note that there is no carve-out for amounts paid for claims for non-patient facing services, given that the statute does not grant the Secretary discretion to establish such a carve-out through rulemaking.

    Comment: One commenter requested that CMS include safeguards that prevent unintended consequences of scoring newly introduced quality measures. Specifically, the commenter indicated that the three proposed population-based measures have rarely, if ever, been reported by physician anesthesiologists. The three measures are the Acute Conditions Composite (Bacterial Pneumonia, Urinary Tract Infection, and Dehydration), the Chronic Conditions Composite (Diabetes, Chronic Obstructive Pulmonary Disease or Asthma, and Heart Failure), and the All-cause Hospital Readmission measure; the commenter stated that the physician anesthesiologist would have little control over these measures, especially since they are calculated by CMS using administrative claims data. The commenter indicated that the use of these measures would place anesthesiology at a disadvantage relative to other MIPS eligible clinicians. The commenter expressed concern that attribution of these measures to individual physician anesthesiologists may prove to be equally or less transparent than current measures under VM.

    Response: We appreciate the commenter's concerns and note that, as discussed in section II.E.5.b.(4) of this final rule with comment period, we are establishing alternative requirements under the quality performance category for non-patient facing MIPS eligible clinicians. As discussed in section II.E.6.b.(2) of this final rule with comment period, we may re-weight performance categories if there are not sufficient measures applicable and available for each MIPS eligible clinician in order to ensure that all MIPS eligible clinicians, including those who are non-patient facing, are scored appropriately. Lastly, as discussed in section II.E.5.b.(6) of this final rule with comment period, we note that 2 of the 3 proposed population measures are not being finalized. In section II.E.8.e. of this final rule with comment period, we describe a validation process for claims and registry submissions to validate whether MIPS eligible clinicians have submitted all applicable measures when MIPS eligible clinicians submit fewer than six measures.

    Comment: One commenter requested clarification on how MIPS incentives or penalties would be applied when facilities (for example, hospitals) bill and collect the Medicare Part B payments through reassignment from their hospital-based MIPS eligible clinicians. The commenter indicated that as hospitals continue to employ primary care clinicians and specialists and bill payers on their behalf, hospitals are concerned that their Medicare Part B payments will be subject to MIPS payment adjustments for poor final scores. The commenter inquired about whether a hospital-based clinician would be required to participate in MIPS. The commenter recommended that CMS consider the consequences of applying a MIPS payment adjustment factor that may adversely affect financially vulnerable hospitals, such as safety net hospitals.

    Response: We appreciate the commenter expressing concerns. We note that the requirements described in this final rule with comment period apply to MIPS eligible clinicians participating in MIPS as individual MIPS eligible clinicians or groups and do not apply to hospitals directly. In regard to the commenter's concern about the MIPS payment adjustment affecting financially vulnerable hospitals and safety net hospitals, section 1848(q)(6)(E) of the Act provides that the MIPS payment adjustment is applied to the amount otherwise paid under Part B for the items and services furnished by a MIPS eligible clinician during a year (beginning with 2019). Thus, the MIPS payment adjustment would apply to payments made for items and services furnished by MIPS eligible clinicians for Medicare Part B charges billed such as those under the PFS, but it would not apply to the facility payment to the hospital itself under the inpatient prospective payment system (IPPS) or other facility-based payment methodology. We refer readers to sections II.E.1.c. and II.E.1.d. of this final rule with comment period, which address MIPS eligible clinicians who practice in Method I CAHs, Method II CAHs, RHCs, and FQHCs.

    Comment: A commenter suggested that CMS focus on inpatient care, rather than outpatient care, because savings are more achievable in the inpatient setting (particularly in the last 6 months of life). The commenter noted that the MIPS program should track hospitals, rather than clinicians.

    Response: We appreciate the suggestions from the commenter and will take them into consideration in future rulemaking.

    Comment: Several commenters supported the inclusion of telehealth services as patient-facing encounters. A few commenters described the potential benefits of telehealth, including: Increasing access to health care services that otherwise may not be available to many patients, reducing avoidable hospitalizations for nursing facility residents who otherwise may not receive early enough treatment, and providing an option to help address clinician shortages. Another commenter expressed concern that telehealth would become common and is not a viable substitute for face-to-face patient care.

    A few commenters discussed the definition of telehealth. One commenter recommended a revision to the current Medicare telehealth definition to reflect simple, plain language for MIPS reporting and suggested the following, “Telehealth means a health care service provided to a patient from a provider at other location.” Another commenter requested that CMS define and adopt a technology neutral definition of telehealth that would allow MIPS eligible clinicians to report the full range of evidence-based telehealth services they provide, rather than limiting MIPS telehealth reporting to be “Medicare eligible telehealth services” as defined at 42 CFR 410.78. One commenter requested that CMS expand the definition, use, and reporting of telehealth services, and clearly distinguish between MIPS eligible clinicians who are and are not patient-facing (for example, radiology, physician-to-physician consult). Another commenter suggested that CMS publish, at the beginning of a performance year, a comprehensive list of each telehealth service cross-mapped to whether it is determined to be patient-facing or non-patient facing.

    Also, a few commenters recommended that telehealth services should be restricted to true direct patient encounters (which would count toward a threshold of patient-facing encounters) and exclude the use of telehealth services by clinicians to consult with one another. One commenter disagreed with the eligibility criteria for telehealth services in contributing towards the scoring of the four performance categories and recommended that CMS treat telehealth services the same as all other in-person services for purposes of calculating MIPS program requirements.

    Response: We appreciate the support from commenters regarding our proposal to include telehealth services in the definition of patient-facing encounters. We note that telehealth services means the Medicare telehealth services defined in section 1834(m) of the Act. Under the PFS and for purposes of this final rule with comment period, Medicare telehealth services that are evaluation and management services (the denominators for the cross-cutting measures) are considered patient-facing encounters, which will be made available at QualityPaymentProgram.cms.gov. The list of all Medicare telehealth services is located on the CMS Web site at https://www.cms.gov/Medicare/Medicare-General-Information/Telehealth/Telehealth-Codes.html. For eligible telehealth services, the use of telecommunications technology (real-time audio and video communication) substitutes for an in-person encounter. Services furnished with the use of telecommunications technology that do not use a real-time interactive communication between a patient and clinician are not considered telehealth services. Such services encompass circumstances in which a clinician would be able to assess an aspect of a patient's condition without the presence of the patient or without the interposition of another clinician. In regard to the recommendation from commenters requesting CMS to modify the definition of telehealth, we note that section 1834(m) of the Act defines Medicare telehealth services and we believe this is the appropriate definition for purposes of delineating the scope of patient-facing encounters.

    Comment: One commenter requested that the registration process for non-patient facing MIPS eligible clinicians be very clear, and noted that it is difficult to register in more than one place with multiple logins and passwords. The commenter requested that CMS make sure that the personnel handling the Quality Payment Program Service Center have knowledge of areas such as pathology and radiology. The commenter also recommended that CMS reach out to the specialty clinician community in order for specialists to know that they need to register.

    Response: We did not propose a registration process for non-patient facing MIPS eligible clinicians. All MIPS eligible clinicians who meet the definition of a non-patient facing MIPS eligible clinician will be considered non-patient facing for the duration of a performance period. In order for non-patient facing MIPS eligible clinicians to know in advance of a performance period whether or not they qualify as a non-patient facing MIPS eligible clinician, we will identify non-patient facing individual MIPS eligible clinicians and groups based on the 24-month non-patient facing determination period. The non-patient facing determination period has an initial 12-month segment that would span from the last 4 months of a calendar year 2 years prior to the performance period followed by the first 8 months of the next calendar year and include a 60-day claims run out, which will allow us to inform MIPS eligible clinicians and groups of their non-patient facing status during the month (December) prior to the start of the performance period.

    For purposes of the 2019 MIPS payment adjustment, we will initially identify individual MIPS eligible clinicians and groups who are considered non-patient facing MIPS eligible clinicians based on 12 months of data starting from September 1, 2015 to August 31, 2016. In order to account for the identification of additional individual MIPS eligible clinicians and groups that may qualify as non-patient facing during the 2017 performance period, we will conduct another eligibility determination analysis based on 12 months of data starting from September 1, 2016 to August 31, 2017. In regard to the suggestion regarding the Quality Payment Program Service Center, we strive to ensure that any MIPS eligible clinician or group that seeks assistance through the Quality Payment Program Service Center will be provided with adequate and consistent information pertaining to the various components of MIPS.

    After consideration of the public comments we received, we are finalizing a modification to our proposal to define a non-patient facing MIPS eligible clinician for MIPS at § 414.1305 as an individual MIPS eligible clinician that bills 100 or fewer patient-facing encounters (including Medicare telehealth services defined in section 1834(m) of the Act) during the non-patient facing determination period, and a group provided that more than 75 percent of the NPIs billing under the group's TIN meet the definition of a non-patient facing individual MIPS eligible clinician during the non-patient facing determination period. As noted above, we believe that it would be beneficial for individual MIPS eligible clinicians and groups to know in advance of a performance period whether or not they qualify as a non-patient facing MIPS eligible clinician.

    We establish the non-patient facing determination period for purposes of identifying non-patient facing MIPS eligible clinicians in advance of the performance period using historical claims data. This eligibility determination process will allow us to identify non-patient facing MIPS eligible clinicians prior to or shortly after the start of the performance period. In order to conduct an analysis of the data prior to the performance period, we are establishing an initial non-patient facing determination period consisting of 12 months. The initial 12-month segment of the non-patient facing determination period would span from the last 4 months of a calendar year 2 years prior to the performance period followed by the first 8 months of the next calendar year and include a 60-day claims run out, which will allow us to inform MIPS eligible clinicians and groups of their non-patient facing status during the month (December) prior to the start of the performance period. The second 12-month segment of the non-patient facing determination period would span from the last 4 months of a calendar year 1 year prior to the performance period followed by the first 8 months of the performance period in the next calendar year and include a 60-day claims run out, which will allow us to inform additional eligible clinicians and groups of their non-patient facing status during the performance period.

    Thus, for purposes of the 2019 MIPS payment adjustment, we will initially identify individual MIPS eligible clinicians and groups who are considered non-patient facing MIPS eligible clinicians based on 12 months of data starting from September 1, 2015 to August 31, 2016. In order to account for the identification of additional individual MIPS eligible clinicians and groups that may qualify as non-patient facing during the 2017 performance period, we will conduct another eligibility determination analysis based on 12 months of data starting from September 1, 2016 to August 31, 2017.

    Similarly, for future years, we will conduct an initial eligibility determination analysis based on 12 months of data (consisting of the last 4 months of the calendar year 2 years prior to the performance period and the first 8 months of the calendar year prior to the performance period) to determine the non-patient facing status of individual MIPS eligible clinicians and groups, and conduct another eligibility determination analysis based on 12 months of data (consisting of the last 4 months of the calendar year prior to the performance period and the first 8 months of the performance period) to determine the non-patient facing status of additional individual MIPS eligible clinicians and groups. We will not change the non-patient facing status of any individual MIPS eligible clinician or group identified as non-patient facing during the first eligibility determination analysis based on the second eligibility determination analysis. Thus, an individual MIPS eligible clinician or group that is identified as non-patient facing during the first eligibility determination analysis will continue to be considered non-patient facing for the duration of the performance period regardless of the results of the second eligibility determination analysis. We will conduct the second eligibility determination analysis to account for the identification of additional, previously unidentified individual MIPS eligible clinicians and groups that are considered non-patient facing.
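
    As a worked illustration of these date windows, the following sketch computes the two 12-month segments of the non-patient facing determination period for a given performance year (illustration only; it assumes the performance period is the full calendar year, and the 60-day claims run out that follows each segment is represented as a separate end date):

        # Illustrative sketch only. Each 12-month segment runs from September 1
        # through August 31 and is followed by a 60-day claims run out.
        from datetime import date, timedelta

        CLAIMS_RUN_OUT_DAYS = 60


        def determination_segments(performance_year: int):
            """Return the two 12-month segments as (start, end, run_out_end) tuples:
            the first starting 2 years prior to the performance period and the
            second starting 1 year prior."""
            segments = []
            for years_prior in (2, 1):
                start = date(performance_year - years_prior, 9, 1)
                end = date(performance_year - years_prior + 1, 8, 31)
                segments.append((start, end, end + timedelta(days=CLAIMS_RUN_OUT_DAYS)))
            return segments


        # For the 2017 performance period (2019 MIPS payment adjustment), this yields
        # September 1, 2015-August 31, 2016 and September 1, 2016-August 31, 2017.
        first_segment, second_segment = determination_segments(2017)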

    In addition, we consider a patient-facing encounter as the evaluation and management services (the denominators for the cross-cutting measures). Lastly, as noted above, we are finalizing our proposal to include Medicare telehealth services (as defined in section 1834(m) of the Act) in the definition of patient-facing encounters. We intend to publish a list of patient-facing encounters on the CMS Web site located at QualityPaymentProgram.cms.gov.

    c. MIPS Eligible Clinicians Who Practice in Critical Access Hospitals Billing Under Method II (Method II CAHs)

    Section 1848(q)(6)(E) of the Act provides that the MIPS payment adjustment is applied to the amount otherwise paid under Part B for the items and services furnished by a MIPS eligible clinician during a year (beginning with 2019). In the case of MIPS eligible clinicians who practice in CAHs that bill under Method I (“Method I CAHs”), the MIPS payment adjustment would apply to payments made for items and services billed by MIPS eligible clinicians under the PFS, but it would not apply to the facility payment to the CAH itself. In the case of MIPS eligible clinicians who practice in Method II CAHs and have not assigned their billing rights to the CAH, the MIPS payment adjustment would apply in the same manner as for MIPS eligible clinicians who bill for items and services in Method I CAHs.

    Under section 1834(g)(2) of the Act, a Method II CAH bills and is paid for facility services at 101 percent of its reasonable costs and for professional services at 115 percent of such amounts as would otherwise be paid under Part B if such services were not included in outpatient CAH services. In the case of MIPS eligible clinicians who practice in Method II CAHs and have assigned their billing rights to the CAHs, those professional services would constitute “covered professional services” under section 1848(k)(3)(A) of the Act because they are furnished by an eligible clinician and payment is “based on” the PFS. Moreover, this is consistent with the precedent CMS has established by applying the PQRS and meaningful use payment adjustments to Method II CAH payments. Therefore, we proposed that the MIPS payment adjustment does apply to Method II CAH payments under section 1834(g)(2)(B) of the Act when MIPS eligible clinicians who practice in Method II CAHs have assigned their billing rights to the CAH. We requested comments on this proposal.
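
    As a simplified arithmetic sketch of how these amounts relate (illustration only; it assumes the MIPS adjustment factor composes multiplicatively with the 115 percent professional amount and ignores coinsurance, sequestration, and other claim-level adjustments):

        # Illustrative sketch only; not a payment algorithm.

        def method_ii_professional_payment(part_b_amount: float,
                                           mips_adjustment: float = 0.0) -> float:
            """Professional services billed by a Method II CAH (billing rights
            assigned): 115 percent of the amount that would otherwise be paid under
            Part B, with the MIPS payment adjustment assumed to apply to that amount."""
            return 1.15 * part_b_amount * (1.0 + mips_adjustment)


        def method_ii_facility_payment(reasonable_costs: float) -> float:
            """Facility services: 101 percent of reasonable costs; the MIPS payment
            adjustment does not apply to the facility payment."""
            return 1.01 * reasonable_costs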

    The following is a summary of the comments we received regarding our proposal that the MIPS payment adjustment does apply to Method II CAH payments under section 1834(g)(2)(B) of the Act when MIPS eligible clinicians who practice in Method II CAHs have assigned their billing rights to the CAH.

    Comment: One commenter requested clarification regarding whether or not clinicians who are part of a CAH would be considered a group and required to participate in MIPS.

    Response: We note that clinicians meeting the definition of a MIPS eligible clinician, unless eligible for an exclusion, are generally required to participate in MIPS. For MIPS eligible clinicians who practice in Method I CAHs, the MIPS payment adjustment would apply to payments made for items and services that are Medicare Part B charges billed by MIPS eligible clinicians, but it would not apply to the facility payment to the CAH itself. For MIPS eligible clinicians who practice in Method II CAHs and have not assigned their billing rights to the CAH, the MIPS payment adjustment would apply in the same manner as for MIPS eligible clinicians who bill for items and services in Method I CAHs. Moreover, in this final rule with comment period, we are finalizing our proposal that the MIPS payment adjustment does apply to Method II CAH payments under section 1834(g)(2)(B) of the Act when MIPS eligible clinicians who practice in Method II CAHs have assigned their billing rights to the CAH. We note that if a CAH is reporting as a group, then the MIPS eligible clinicians who are part of the CAH would be considered a group as defined at § 414.1305.

    Comment: Several commenters stated that CMS must address the problems with Method II Critical Access Hospital reporting prior to Quality Payment Program implementation, particularly relating to the attribution methodology and data capture issues. For example, commenters suggested that CMS examine whether there are mechanisms for better capturing information on MIPS eligible clinicians from the CMS-1450 form. Another commenter expressed concerns that Method II CAH participation in PQRS did not work as planned and that the same issues may affect Method II CAH participation in the Quality Payment Program; for example, attribution issues may arise when any portion of the items and services furnished by eligible clinicians is excluded from Medicare's claims database. The commenter believed that cost and quality measures are skewed because most patients attributed to Method II CAH facilities are institutionalized, causing them to appear to have much higher costs and lower quality than the average, and because not all CAH services are reported on CMS-1500 claim forms. Specifically, commenters indicated that Method II CAHs see only a small portion of their services reimbursed under Medicare Part B, including hospital inpatient, swing bed, nursing home, psychiatric and rehabilitation inpatient, and hospital outpatient services rendered in non-CAH settings. Services rendered for outpatients in the CAH setting (for example, provider-based clinic, observation, emergency room, surgery, etc.) are reimbursed through Part A and are exempt from the Quality Payment Program. The commenters noted that this results in beneficiaries who are less acute and low cost to the Medicare program (those seen in clinic settings and those who have avoided inpatient and post-acute care settings) being excluded from the Quality Payment Program attribution, with only potentially high-cost beneficiaries being counted. Therefore, while a CAH-based eligible clinician may have a substantial portion of his or her patient population in a low-cost category, the use of the PQRS attribution methodology for MIPS could still easily result in the MIPS eligible clinician being reported as high-cost if only high-cost patients are included in the Quality Payment Program attribution. The commenters recommended that all Method II CAH ambulatory services be included in the attribution methodology of the Quality Payment Program.

    For Method II claims, this would involve scrubbing outpatient claims for services reported with professional revenue codes (96X, 97X, and 98X) matched with the applicable CPT codes. Commenters recommended an alternative, in which the Method II CAHs could be benchmarked only against themselves. Commenters indicated that the penalties would be relatively small, given that Method II CAHs bill primarily under Part A, but that the publishing of these negative scores on Physician Compare would cause patients to seek care elsewhere, further destabilizing the rural delivery system.
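
    To make the commenters' suggested claim filter concrete, the following is a minimal sketch of the kind of revenue-code match they describe (this illustrates the commenters' recommendation only, not a finalized CMS methodology; the claim-line field names are hypothetical):

        # Sketch of the commenters' suggestion only; not CMS policy. Each Method II
        # outpatient claim line is assumed to carry a revenue code and a CPT code.

        PROFESSIONAL_REVENUE_PREFIXES = ("96", "97", "98")   # 96X, 97X, 98X


        def professional_claim_lines(claim_lines, applicable_cpt_codes):
            """Keep outpatient claim lines reported with professional revenue codes
            (96X, 97X, 98X) whose CPT codes appear in the applicable set."""
            return [
                line for line in claim_lines
                if line["revenue_code"].startswith(PROFESSIONAL_REVENUE_PREFIXES)
                and line["cpt_code"] in applicable_cpt_codes
            ]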

    Response: We appreciate the commenters expressing their concerns and note that MIPS eligible clinicians who practice in Method II CAHs may be eligible for the low-volume threshold exclusion, under which eligible clinicians who do not exceed $30,000 in billed Medicare Part B allowed charges or do not furnish care to more than 100 Part B-enrolled Medicare beneficiaries would be excluded from MIPS. We believe this exclusion will benefit eligible clinicians who practice in Method II CAHs. We refer readers to section II.E.10. of this final rule with comment period for final policies regarding public reporting on Physician Compare.
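
    As a minimal sketch of that exclusion test (illustration only; the allowed-charge and beneficiary totals are assumed to have been computed from claims for the low-volume determination period, and the Python form is not part of any CMS system):

        # Illustrative sketch only; not CMS code.

        LOW_VOLUME_ALLOWED_CHARGES = 30_000   # $30,000 of billed Part B allowed charges
        LOW_VOLUME_BENEFICIARY_COUNT = 100    # 100 Part B-enrolled beneficiaries


        def excluded_by_low_volume_threshold(allowed_charges: float,
                                             beneficiary_count: int) -> bool:
            """Excluded from MIPS if the clinician or group does not exceed $30,000
            in billed Medicare Part B allowed charges or does not furnish care to
            more than 100 Part B-enrolled Medicare beneficiaries."""
            return (allowed_charges <= LOW_VOLUME_ALLOWED_CHARGES
                    or beneficiary_count <= LOW_VOLUME_BENEFICIARY_COUNT)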

    Comment: One commenter suggested that CMS delay the start of the MIPS program for MIPS eligible clinicians who practice in Method II CAHs and have assigned their billing rights to the CAH.

    Response: We appreciate the suggestion from the commenter. However, we do not deem it necessary or justifiable to delay the participation of MIPS eligible clinicians who provide services in Method II CAHs and have assigned their billing rights to the CAH given that Method II CAHs were required to participate in PQRS and the Medicare EHR Incentive Program.

    Comment: One commenter indicated that many clinicians who practice in Method II CAHs would provide their clinical care in RHCs/FQHCs, and as such, their only qualifying Part B charges would be documented in the CAH's inpatient CEHRT. The commenter noted that while PQRS was mandated for these clinicians, facilities face difficulty creating quality PQRS reports based on extremely limited encounters. The commenter also indicated that it is overly burdensome to require these low-volume “inpatient only” CAH providers to participate in the MIPS program until inpatient CEHRT software is required through the certification process to produce NQF measure reports (on a clinician by clinician basis) relevant to any and all CMS quality programs. The commenter recommended that all clinicians who practice in Method II CAHs be exempt from reporting under MIPS, similar to the provisions established under the EHR Incentive Program that exempt hospital-based EPs from the application of the meaningful use payment adjustment.

    Response: We appreciate the concerns expressed by the commenter regarding MIPS eligible clinicians who practice in Method II CAHs and note that clinicians meeting the definition of a MIPS eligible clinician, unless eligible for an exclusion, are generally required to participate in MIPS (section II.E.3. of this final rule with comment period describes the provisions pertaining to the exclusions from MIPS participation). For MIPS eligible clinicians who practice in Method II CAHs and have not assigned their billing rights to the CAH, the MIPS payment adjustment would apply to payments made for items and services billed by MIPS eligible clinicians under the PFS, but it would not apply to the facility payment to the CAH itself. However, for MIPS eligible clinicians who practice in Method II CAHs and have assigned their billing rights to the CAH, the MIPS payment adjustment applies to Method II CAH payments under section 1834(g)(2)(B) of the Act.

    In section II.E.5.g.(8)(a)(i) of this final rule with comment period, we noted that CAHs (and eligible hospitals) are subject to meaningful use requirements under sections 1814(l) and 1886(b)(3)(B) and (n) of the Act, respectively, which were not affected by the enactment of the MACRA. CAHs (and eligible hospitals) are required to report on objectives and measures of meaningful use under the EHR Incentive Program, as outlined in the 2015 EHR Incentive Programs final rule. The objectives and measures of the EHR Incentive Programs for CAHs (and eligible hospitals) are specific to these facilities, and are more applicable to and better represent the EHR technology available in these settings. Section 1848(a)(7)(D) of the Act exempts hospital-based EPs from the application of the payment adjustment under the EHR Incentive Program, and section 1848(a)(7)(B) of the Act provides the authority to exempt an EP who is not a meaningful EHR user from the application of the payment adjustment if it is determined that compliance with the meaningful EHR user requirements would result in a significant hardship, such as in the case of an EP who practices in a rural area without sufficient internet access. The MACRA did not maintain these statutory exceptions for the advancing care information performance category under MIPS. Thus, the exceptions under sections 1848(a)(7)(B) and (D) of the Act are limited to the meaningful use payment adjustment under section 1848(a)(7)(A) of the Act and do not apply in the context of the MIPS program.

    Section 1848(q)(5)(F) of the Act provides the authority to assign different scoring weights (including a weight of zero) for each performance category if there are not sufficient measures and activities applicable and available to each type of MIPS eligible clinician, including hospital-based clinicians. Accordingly, as described in section II.E.5.g.(8)(a)(i) of this final rule with comment period, we may assign a weight of zero percent to the advancing care information performance category for hospital-based MIPS eligible clinicians. Under MIPS, we define a hospital-based MIPS eligible clinician as a MIPS eligible clinician who furnishes 75 percent or more of his or her covered professional services in sites of service identified by the Place of Service (POS) codes 21, 22, and 23, used in the HIPAA standard transaction to indicate an inpatient hospital, on-campus outpatient hospital, or emergency room setting, in the year preceding the performance period. Consistent with the EHR Incentive Program, we will determine which MIPS eligible clinicians qualify as "hospital-based" for a MIPS payment year.
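
    A minimal sketch of that hospital-based determination follows (illustration only; it assumes covered professional services have been tallied by POS code from claims for the year preceding the performance period, and the Python form is not part of any CMS system):

        # Illustrative sketch only; not CMS code.

        HOSPITAL_POS_CODES = {"21", "22", "23"}   # inpatient, on-campus outpatient, ER
        HOSPITAL_BASED_SHARE = 0.75               # 75 percent or more of services


        def is_hospital_based(service_counts_by_pos: dict) -> bool:
            """Hospital-based MIPS eligible clinician: 75 percent or more of covered
            professional services furnished in POS 21, 22, or 23."""
            total = sum(service_counts_by_pos.values())
            if total == 0:
                return False
            hospital_services = sum(
                count for pos, count in service_counts_by_pos.items()
                if pos in HOSPITAL_POS_CODES
            )
            return hospital_services / total >= HOSPITAL_BASED_SHARE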

    Comment: One commenter requested that CMS address data capture issues for CAHs that may be required to participate in MIPS and examine whether there are mechanisms for better capturing information on eligible clinicians from the CMS-1450 form. Some CAHs have reported issues with capturing full information about eligible clinicians from the institutional billing form used by CAHs (UB-04/CMS-1450). Under existing billing rules, CAHs may bill one CMS-1450 per day, with claims from multiple providers combined into one submission.

    Response: We appreciate the commenter expressing these concerns and intend to address operational and system-infrastructure issues experienced under previously established CMS programs and ensure that MIPS eligible clinicians have an improved experience when participating in the MIPS program.

    After consideration of the public comments we received, we are finalizing our proposal that the MIPS payment adjustment will apply to Method II CAH payments under section 1834(g)(2)(B) of the Act when MIPS eligible clinicians who practice in Method II CAHs have assigned their billing rights to the CAH.

    d. MIPS Eligible Clinicians Who Practice in Rural Health Clinics (RHCs) and/or Federally Qualified Health Centers (FQHCs)

    As noted in section II.E.1.d. of the proposed rule (81 FR 28176), section 1848(q)(6)(E) of the Act provides that the MIPS payment adjustment is applied to the amount otherwise paid under Part B with respect to the items and services furnished by a MIPS eligible clinician during a year. Some eligible clinicians may not receive MIPS payment adjustments due to their billing methodologies. If a MIPS eligible clinician furnishes items and services in an RHC and/or FQHC and the RHC and/or FQHC bills for those items and services under the RHC's or FQHC's all-inclusive payment methodology, the MIPS adjustment would not apply to the facility payment to the RHC or FQHC itself. However, if a MIPS eligible clinician furnishes other items and services in an RHC and/or FQHC and bills for those items and services under the PFS, the MIPS adjustment would apply to payments made for those items and services. We note that eligible clinicians providing services for an RHC or FQHC as employees or contractors are paid by the RHC or FQHC, not under the PFS. When a MIPS eligible clinician furnishes professional services in an RHC and/or FQHC, the RHC bills for those services under the RHC's all-inclusive rate methodology and the FQHC bills for those services under the FQHC prospective payment system methodology, and the MIPS payment adjustment would not apply to the RHC or FQHC payment. Therefore, we proposed that services rendered by an eligible clinician that are payable under the RHC or FQHC methodology would not be subject to the MIPS payment adjustments. However, these eligible clinicians have the option to voluntarily report on applicable measures and activities for MIPS, in which case the data received would not be used to assess their performance for the purpose of the MIPS payment adjustment. We requested comments on this proposal.

    The following is a summary of the comments we received regarding our proposal that services rendered by an eligible clinician that are payable under the RHC or FQHC methodology would not be subject to the MIPS payment adjustments.

    Comment: Several commenters supported CMS' proposal that items and services furnished by a MIPS eligible clinician that are payable under the RHC or FQHC methodology would not be subject to the MIPS payment adjustments.

    Response: We appreciate the support from commenters.

    Comment: One commenter noted that it is unclear what the participation requirements are for MIPS eligible clinicians who practice in FQHCs.

    Response: In this final rule with comment period, we note that items and services furnished by a MIPS eligible clinician that are payable under the RHC or FQHC methodology would not be subject to the MIPS payment adjustment. These MIPS eligible clinicians have the option to voluntarily report on applicable measures and activities for MIPS. If such MIPS eligible clinicians voluntarily participate in MIPS, they would follow the requirements established for each performance category. We note that the data received from such MIPS eligible clinicians would not be used to assess their performance for the purpose of the MIPS payment adjustment. However, items and services furnished by a MIPS eligible clinician that are billed as Medicare Part B charges by the MIPS eligible clinician would be subject to the MIPS payment adjustment. Also, we note that such MIPS eligible clinicians may be excluded from the requirement to participate in MIPS if the items and services they furnish that are billed as Medicare Part B allowed charges do not exceed the low-volume threshold as described in section II.E.3.c. of this final rule with comment period.

    Comment: Several commenters agreed with voluntary reporting of MIPS data for FQHC and RHC clinicians as described in the proposed rule, and recommended that quality reporting requirements should be matched with HRSA measures. Commenters noted that drawing conclusions from the initial data could be problematic based upon coding and documentation differences compared to other clinicians reporting MIPS data. One commenter requested that CMS not request FQHCs and RHCs to voluntarily submit data. The commenter indicated such organizations have neither the IT support nor administrative staff to submit extended data.

    Response: We thank the commenters for expressing their concerns regarding the comparability of data submitted by MIPS eligible clinicians who practice in RHCs and FQHCs. We want to reiterate that such MIPS eligible clinicians have the option to decide whether or not they voluntarily participate in MIPS.

    Comment: A few commenters requested CMS to ensure that FQHC clinicians are not subject to MIPS for the limited number of FQHC-related claims submitted under the PFS. Alternatively, one commenter requested that fee-for-service claims for non-specialty services furnished by clinicians practicing in FQHCs or RHCs not be counted when determining eligibility for the low-volume threshold.

    Response: We appreciate the concern expressed by the commenter and note that section 1848(q)(6)(E) of the Act provides that the MIPS payment adjustment is applied to the amount otherwise paid under Part B with respect to the items and services furnished by a MIPS eligible clinician during a year. With respect to the comment regarding the low-volume threshold, we refer readers to section II.E.3.c. of this final rule with comment period, in which we establish a low-volume threshold to identify MIPS eligible clinicians excluded from participating in MIPS. We disagree with the recommendation that the fee-for-service claims for non-specialty items and services furnished by clinicians practicing in FQHCs or RHCs should be excluded from the low-volume threshold eligibility determination. We believe that the low-volume threshold established in this final rule with comment period retains as MIPS eligible clinicians those who treat relatively few beneficiaries but practice in resource-intensive specialties, as well as those who treat many beneficiaries with relatively low-priced services, which allows us to meaningfully measure performance and drive quality improvement across the broadest range of MIPS eligible clinician types and specialties. Conversely, it excludes MIPS eligible clinicians who do not have a substantial quantity of interactions with Medicare beneficiaries or furnish high-cost services. Clinicians practicing in an RHC or FQHC who do not exceed the low-volume threshold would be excluded from the MIPS requirements.

    Comment: Several commenters indicated that RHCs should be incentivized to participate and report quality data under the Quality Payment Program. One commenter indicated that the voluntary participation option is unlikely to be used without an incentive. Another commenter recommended that CMS conduct a survey of RHCs before it makes the effort to set up a voluntary reporting program that no one is likely to use. The commenter's own survey found that without incentives or penalties, very few RHCs would voluntarily participate in MIPS, and that an incentive payment of $10,000 per clinic per year would prompt about half of RHCs to report under MIPS. A few commenters suggested that CMS include RHCs in MIPS, as RHCs are the only primary care delivery system left in the country with no tie to value.

    Response: We appreciate the suggestions from commenters and will consider them as we assess the volume of voluntary reporting under MIPS.

    Comment: One commenter expressed concern that under CMS' proposal to exclude RHCs from MIPS, RHCs' patients will fail to benefit from the rigorous quality measurement that comparable practices under the MIPS program will experience. The commenter expressed concern about the growing disparities in quality and life expectancy between rural and urban patients, and noted that the number of RHCs has grown from 400 in 1990 to more than 4,000 today, with new conversions continuing as more rural providers realize they can be paid more under this model than under FFS.

    Response: We thank the commenter for expressing concerns and note that MIPS eligible clinicians who practice in RHCs and furnish items and services that are payable under the RHC methodology have the option to voluntarily report on applicable measures and activities for MIPS.

    Comment: A few commenters requested that consideration be given to phasing in FQHC voluntary reporting to allow time for the development of social determinants of health status measure adjustments.

    Response: We appreciate the feedback on the role of socioeconomic status in quality measurement. We continue to evaluate the potential impact of social risk factors on measure performance. One of our core objectives is to improve beneficiary outcomes, and we want to ensure that complex patients as well as those with social risk factors receive excellent care.

    Comment: A few commenters supported CMS' proposal to be inclusive of rural practices, but encouraged CMS to have special conditions for such rural clinicians that have not participated in PQRS, VM, or the Medicare EHR Incentive Program for EPs in the past and suggested a phased approach for full participation that protects safety net clinicians from downside risk.

    Response: We appreciate the support from commenters and note that MIPS eligible clinicians who practice in RHCs and furnish items and services that are payable under the RHC methodology would not be subject to the MIPS payment adjustments for such items and services, but would have the option to voluntarily report on applicable measures and activities for MIPS. For such MIPS eligible clinicians who voluntarily participate in MIPS, the data submitted to CMS would not be used to assess their performance for the purpose of the MIPS payment adjustment.

    Comment: One commenter recommended that CMS create a system permitting the voluntary reporting of performance information by excluded clinicians, and that the data reported be used to help define rural-specific measures and standards for these clinicians and for all rural clinicians. Under this system, data would be released only on an aggregate basis, protecting the privacy of individual entities reporting.

    Response: We thank the commenter for the suggestions and will consider them as we establish policies pertaining to MIPS eligible clinicians who practice in RHCs and FQHCs in future rulemaking.

    Comment: One commenter noted that in certain communities, clinical services are delivered in RHCs, small independent practices, and community health centers, in which hospital-based services billed under the PFS may represent only a small portion of total care provided. The commenter requested that CMS develop a method for rural clinicians, such as those practicing in RHCs and FQHCs, to have a meaningful avenue to participate in the Quality Payment Program. Another commenter indicated that RHCs, CAHs, and FQHCs were created to assure the availability of health care services to remote and underserved populations, and that while a majority of clinicians who practice in RHCs, CAHs, and FQHCs bill under Medicare Part A, they may have a limited number of encounters for which services are billed under Medicare Part B. Thus, such clinicians may exceed the low-volume threshold and therefore be subject to the MIPS payment adjustment. The commenter expressed concerns that RHCs, CAHs, and FQHCs would be negatively impacted by having their resources stretched even further if required to meet the requirements under MIPS or be subject to a negative MIPS payment adjustment. The commenter also noted that many RHCs and FQHCs have not implemented EHR technology due to the lack of available resources and struggle to recruit qualified clinicians and staff, and as a result, such clinicians and staff are disproportionately older than the average health care workforce. If RHCs and FQHCs are required to participate in MIPS and meet all requirements or be subject to a negative MIPS payment adjustment, the fiscal resources lost to either a MIPS payment adjustment or an investment in EHR technology would significantly reduce the availability of services to remote and underserved populations. The commenter recommended that CMS consider permanent exclusions for clinicians practicing in RHCs and FQHCs from the requirement to participate in the MIPS program. One commenter noted that CMS should provide exemptions from entire performance categories, not just individual measures and activities, consider the feasibility of shorter reporting timeframes, and ensure that there are free or low-cost reporting options within each MIPS performance category.

    Response: We appreciate the commenters expressing their concerns and providing recommendations. We will take into consideration the suggestions from commenters in future rulemaking. We note that the MIPS payment adjustment applies only to items and services furnished by MIPS eligible clinicians that are billed as Medicare Part B charges, such as those billed under the PFS. We note that MIPS eligible clinicians practicing in RHCs and FQHCs will benefit from other policies that we are finalizing throughout this final rule with comment period, such as the higher low-volume threshold, lower reporting requirements, and lower performance threshold.

    Comment: One commenter requested clarification on how CMS would define rural areas and suggested that CMS adopt a consistent definition for the term “small practices” across all CMS programs. The commenter suggested that a small practice be defined as having 25 or fewer clinicians. Another commenter recommended that the low-volume threshold be set at an even higher level for rural and underserved areas to ensure that MIPS does not endanger the financial stability of rural safety net practices or reduce access to services for rural Medicare beneficiaries.

    Response: We note that we define rural areas as zip codes designated as rural, using the most recent HRSA Area Health Resource File data set available, as described in section II.E.5.f.(5) of this final rule with comment period. Also, in section II.E.5.f.(5) of this final rule with comment period, we define small practices as practices consisting of 15 or fewer clinicians. We are finalizing our proposed definition of small practices because the statute provides special considerations for small practices consisting of 15 or fewer clinicians. In regard to the commenter's suggestion pertaining to the low-volume threshold, we are finalizing a modification to our proposal, which establishes a higher low-volume threshold as described in section II.E.3.c. of this final rule with comment period.

    Comment: Some commenters recommended that CMS follow the recommendations of the NQF Report on Performance Measurement for Rural Low-Volume Providers and establish rural peer groups and rural-specific standards for assessment of rural provider performance in all domains. Commenters noted that the NQF developed specific recommendations for how pay-for-performance mechanisms should be implemented for rural providers. The NQF Report on Performance Measurement for Rural Low-Volume Providers sets out both overarching and specific approaches for how rural provider performance measurement should be handled. The NQF Report on Performance Measurement for Rural Low-Volume Providers also makes recommendations about rural performance measures of domains other than quality, including cost. One commenter noted that as rural-specific quality measures are developed, such measures should be both mandatory core measures and elective supplementary measures.

    Response: We appreciate the recommendations provided by the commenters and will take them into consideration for future rulemaking.

    Comment: One commenter agreed with the goals of the proposed rule, but believed that the proposed rule had one thematic deficiency arising from its quality reporting constructs, which implied a dichotomy of "primary care" versus "specialist," with the corollary implication that all specialists and specialties affect the value of current health care similarly (and generally adversely), and which marginalized specialties as leaders in care quality and efficiency improvement. The commenter recommended that CMS create specialty-specific quality and efficiency targets that incentivize specialists caring for high-risk, high-cost chronically ill patients to provide the best long-term care and coordinate care with primary care physicians (including chronic care subspecialists practicing across multiple health systems rather than as part of a larger provider entity), with each specialty having specific quality goals and efficiency targets.

    Response: We appreciate the feedback from the commenter, but disagree with the commenter's assessment that our policies marginalize specialists. We will take into consideration the recommendations provided by the commenter for future rulemaking.

    Comment: Due to the complexity of the proposed rule and the extremely short projected turnaround time before the start of the 2017 performance period, a few commenters recommended that Frontier Health Professional Shortage Area (HPSA) clinicians should be exempt from mandatory MIPS/APM participation until 2019, when the program has had a chance to evaluate its successes and failures with respect to larger, more economically stable participants. The commenters suggested that Frontier HPSA clinicians should be allowed to voluntarily participate if they want to, but that they should not be penalized due to the low-income, low-population challenges faced in extremely rural areas until payment year 2021 or later.

    Response: We note that the statute does not grant the Secretary discretion to establish exclusions other than the three exclusions described in section II.E.3. of this final rule with comment period. Thus, Frontier HPSA clinicians who are MIPS eligible clinicians are required to participate in MIPS. However, we believe that Frontier HPSA clinicians will benefit from other policies that we are finalizing throughout this final rule with comment period such as the higher low-volume threshold, lower reporting requirements, and lower performance threshold.

    After consideration of the public comments we received, we are finalizing our proposal that services rendered by an eligible clinician that are payable under the RHC or FQHC methodology will not be subject to the MIPS payment adjustments. However, these eligible clinicians have the option to voluntarily report on applicable measures and activities for MIPS, in which case the data received will not be used to assess their performance for the purpose of the MIPS payment adjustment.

    e. Group Practice (Group)

    Section 1848(q)(1)(D) of the Act requires the Secretary to establish and apply a process that includes features of the PQRS group practice reporting option (GPRO) established under section 1848(m)(3)(C) of the Act for MIPS eligible clinicians in a group for purposes of assessing performance in the quality performance category. In addition, it gives the Secretary the discretion to do so for the other three performance categories. Additionally, we will assess performance either for individual MIPS eligible clinicians or for groups. As discussed in section II.E.2.b. of the proposed rule (81 FR 28177), we proposed to define a group at § 414.1305 as a single Taxpayer Identification Number (TIN) with two or more MIPS eligible clinicians, as identified by their individual National Provider Identifier (NPI), who have reassigned their Medicare billing rights to the TIN. Also, as outlined in section II.E.2.c. of the proposed rule (81 FR 28177), we proposed to define an APM Entity group at § 414.1305 as a group identified by a unique APM participant identifier. However, we are finalizing a modification to the definition of a group as described in section II.E.2.b. of this final rule with comment period and finalizing the definition of an APM Entity group as described in section II.E.2.c. of this final rule with comment period.

    2. MIPS Eligible Clinician Identifier

    To support MIPS eligible clinicians reporting to a single comprehensive and cohesive MIPS program, we need to align the technical reporting requirements from PQRS, VM, and EHR-MU into one program. This requires an appropriate MIPS eligible clinician identifier. We currently use a variety of identifiers to assess an individual eligible clinician or group under different programs. For example, under the PQRS for individual reporting, CMS uses a combination of TIN and NPI to assess eligibility and participation, where each unique TIN and NPI combination is treated as a distinct eligible clinician and is separately assessed for purposes of the program. Under the PQRS GPRO, eligibility and participation are assessed at the TIN level. Under the Medicare EHR Incentive Program, we utilize the NPI to assess eligibility and participation. And under the VM, performance and payment adjustments are assessed at the TIN level. Additionally, for APMs such as the Pioneer Accountable Care Organization (ACO) Model, we also assign a program-specific identifier (in the case of the Pioneer ACO Model, an ACO ID) to the organization(s), and associate that identifier with individual eligible clinicians who are, in turn, identified through a combination of a TIN and an NPI.

    In the MIPS and APMs RFI (80 FR 63484), we sought comments on which specific identifier(s) should be used to identify a MIPS eligible clinician for purposes of determining eligibility, participation, and performance under the MIPS performance categories. In addition, we requested comments pertaining to what safeguards should be in place to ensure that MIPS eligible clinicians do not switch identifiers to avoid being considered “poor-performing” and comments on what safeguards should be in place to address any unintended consequences, if the MIPS eligible clinician identifier were a unique TIN/NPI combination, to ensure an appropriate assessment of the MIPS eligible clinician's performance. In the MIPS and APMs RFI (80 FR 63484), we sought comment on using a MIPS eligible clinician's TIN, NPI, or TIN/NPI combination as potential MIPS eligible clinician identifiers, or creating a unique MIPS eligible clinician identifier. The commenters did not demonstrate a consensus on a single best identifier.

    Commenters favoring the use of the MIPS eligible clinician's TIN recommended that MIPS eligible clinicians should be associated with the TIN used to receive payment on CMS claims. They further commented that this approach would deter MIPS eligible clinicians from "gaming" the system by switching to a higher performing group. Under this approach, commenters suggested that MIPS eligible clinicians who bill under more than one TIN could be assigned the performance score and MIPS payment adjustment of their primary practice, based upon the majority of the dollar amount of claims or encounters from the prior year.

    Other commenters supported using unique TIN and NPI combinations to identify MIPS eligible clinicians. Commenters suggested that many eligible clinicians are familiar with using TIN and NPI together from PQRS and other CMS programs. Commenters also noted that this approach can be used to calculate performance for multiple unique TIN/NPI combinations for those MIPS eligible clinicians who practice under more than one TIN. Commenters who supported the TIN/NPI also believed this approach enables greater accountability for individual MIPS eligible clinicians beyond what might be achieved when using the TIN as an identifier and would provide a safeguard against MIPS eligible clinicians changing their identifier to avoid payment penalties.

    Some commenters supported the use of only the NPI as the MIPS identifier. They believed this approach would best provide for individual accountability for quality in MIPS while minimizing potential confusion because providers do not generally change their NPI over time. Supporters of using the NPI only as the MIPS identifier also commented that this approach would be simplest for administrative purposes. These commenters also noted that the continuity inherent in the NPI would address the safeguard issue of providers attempting to change their identifier for MIPS performance purposes.

    In the MIPS and APMs RFI (80 FR 63484), we also solicited feedback on the potential for creating a new MIPS identifier for the purposes of identifying MIPS eligible clinicians within the MIPS program. In response, many commenters indicated they would not support a new MIPS identifier. Commenters generally expressed concern that a new identifier for MIPS would only add to administrative burden, create confusion for MIPS eligible clinicians and increase reporting errors.

    After reviewing the comments, we did not propose to create a new MIPS eligible clinician identifier. However, we recognized the various ways a MIPS eligible clinician may engage with MIPS, either individually or through a group. Therefore, we proposed to use multiple identifiers that allow MIPS eligible clinicians to be measured as an individual or collectively through a group's performance. We also proposed that the same identifier be used for all four performance categories; for example, if a group is submitting information collectively, then it must be measured collectively for all four MIPS performance categories: Quality, cost, improvement activities, and advancing care information. As discussed in the final score methodology section II.E.6. of the proposed rule (81 FR 28247 through 28248), we proposed to use a single identifier, TIN/NPI, for applying the MIPS payment adjustment, regardless of how the MIPS eligible clinician is assessed. Specifically, if the MIPS eligible clinician is identified for performance only using the TIN, we proposed to use the TIN/NPI when applying the MIPS payment adjustment. We requested comments on these proposals.
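
    As an illustration of how an assessment identifier maps to the TIN/NPI level at which the payment adjustment is applied (a sketch under the assumption that, for group-level reporting, each NPI billing under the TIN receives the adjustment derived from the group's performance; the function and field names are hypothetical):

        # Illustrative sketch only; not CMS code.

        def adjustments_by_tin_npi(tin: str, npis, adjustment_factor: float) -> dict:
            """Map a TIN-level (group) adjustment factor to each TIN/NPI combination
            so the MIPS payment adjustment can be applied to the Part B claims
            billed under that combination."""
            return {(tin, npi): adjustment_factor for npi in npis}


        # Example: a group assessed at the TIN level; the adjustment is applied at
        # the TIN/NPI level for each clinician billing under the TIN.
        example = adjustments_by_tin_npi("123456789", ["1111111111", "2222222222"], 0.02)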

    The following is a summary of the comments we received regarding our proposals to use multiple identifiers that allow MIPS eligible clinicians to be measured as an individual or collectively through a group's performance and use a single identifier, TIN/NPI, for applying the MIPS payment adjustment.

    Comment: Several commenters supported the proposal to have each unique TIN/NPI combination considered a different MIPS eligible clinician and to use the TIN to identify group practices. One commenter noted that using a group's billing TIN to identify a group is consistent with the current CMS approach under PQRS and VM, and is preferable to creating a new MIPS-specific identifier for groups.

    Response: We appreciate the support from commenters.

    Comment: One commenter noted that the proposed MIPS identifiers (combination of TIN/NPI, etc.) would be sufficient for individual, group, and APM reporting to MIPS, but requested that CMS establish an identifier for virtual groups. Another commenter questioned the use of these identifiers beyond their original purposes.

    Response: We appreciate the feedback from the commenters. We did not propose an identifier for virtual groups, but in future rulemaking, we will take into consideration the establishment of a virtual group identifier. As noted in this final rule with comment period, the use of the identifiers enables us to identify individual MIPS eligible clinicians at the TIN/NPI level and groups at the TIN level.

    Comment: A few commenters opposed the approach of creating a new MIPS eligible clinician identifier at the initiation of the Quality Payment Program because it would be premature and cause confusion. One of these commenters further noted that there may be times when a clinician is not MIPS eligible and then becomes MIPS eligible. The commenter also indicated that there is currently no way to report such an identifier on claims.

    Response: We disagree with the commenter and believe that it is essential for us to be able to identify individual MIPS eligible clinicians using a unique identifier because the MIPS payment adjustment would be applied to the Medicare Part B charges billed by individual MIPS eligible clinicians at the TIN/NPI level. We note that we will be able to identify, at the NPI level, individual eligible clinicians who are excluded from the MIPS requirements and not subject to the MIPS payment adjustment for exclusions pertaining to new Medicare-enrolled eligible clinicians and QPs and Partial QPs not participating in MIPS. In our analyses of claims data, we will be able to identify individual MIPS eligible clinicians at the TIN/NPI level given that billing is associated with a TIN or TIN/NPI.

    Comment: One commenter recommended the use of TINs plus alphanumeric codes as identifiers.

    Response: We disagree with the commenter's suggestion to use a TIN with an alphanumeric code because it would add complexity and not facilitate the identification of individual eligible clinicians at the NPI level who are associated with a group at the TIN level. For certain exclusions (for example, new Medicare-enrolled eligible clinicians, and QPs and Partial QPs who are not participating in MIPS), eligibility determinations will be made and applied at the NPI level.

    Comment: Several commenters requested that small physician practices be exempt from MIPS. A few commenters indicated that penalizing small practices would decrease access to care for patients. One commenter indicated that small groups and independent physicians are unfairly penalized and are being forced to integrate into larger hospitals or corporations. Another commenter expressed concern that additional administrative duties will affect patient care and will not improve healthcare. One commenter indicated that the proposed rule was discriminatory toward solo or small group practices. The commenter noted that the financial burden of MACRA will result in the closure of many solo and small group practices.

    Response: We appreciate the concerns expressed by the commenters. We note that the statute does not grant the Secretary discretion to establish exclusions other than the exclusions described in section II.E.3. of this final rule with comment period. However, we believe that small practices will benefit from policies we are finalizing throughout this final rule with comment period, such as the higher low-volume threshold, lower performance requirements, and lower performance threshold.

    Comment: A few commenters requested that CMS determine and state eligibility status for clinicians providing services at independent diagnostic testing facilities (IDTFs) and to provide clear, detailed guidance under what circumstances eligibility would occur under MIPS. The commenter noted that CMS has issued similar guidance under the PQRS system of “eligible but not able to participate”; however, the commenter indicated that the guidance provided in PQRS does not address all variations of billing and coding practices of IDTFs.

    Response: We note that the MIPS payment adjustment applies only to the amount otherwise paid under Part B with respect to items and services furnished by a MIPS eligible clinician during a year. As discussed in section II.E.7. of this final rule with comment period, we will apply the MIPS adjustment at the TIN/NPI level. In regard to suppliers of independent diagnostic testing facility services, we note that such suppliers are not themselves included in the definition of a MIPS eligible clinician. However, there may be circumstances in which a MIPS eligible clinician would furnish the professional component of a Part B covered service that is billed by such a supplier. Those services could be subject to MIPS adjustment based on the MIPS eligible clinician's performance during the applicable performance period. Because, however, those services are billed by suppliers that are not MIPS eligible clinicians, it is not operationally feasible for us at this time to associate those billed allowed charges with a MIPS eligible clinician at an NPI level in order to include them for purposes of applying any MIPS payment adjustment.

    Comment: One commenter expressed concern regarding the definition of a group (unique TIN) because large health systems and hospitals operate large medical groups spanning practices and specialties, and all of them share a TIN and EHRs. The commenter indicated that grouping all clinicians together takes away the advantages of group participation. The commenter noted that CMS should generate another way for group practices to differentiate themselves.

    Response: We thank the commenter for expressing their concern. We disagree with the commenter because we believe that group level reporting is advantageous for groups in that it encourages coordination, teamwork, and shared responsibility. However, we recognize that we are not able to identify groups with eligible clinicians who are excluded from the MIPS requirements at both the individual and group levels, such as new Medicare-enrolled clinicians. We note that we could establish new identifiers to more accurately identify such eligible clinicians. For future consideration, we are seeking additional comment on the identifiers. What are the advantages and disadvantages of identifying new Medicare-enrolled eligible clinicians and eligible clinicians not included in the definition of a MIPS eligible clinician until year 3, such as therapists? What are the possible identifiers that could be established for identifying such eligible clinicians?

    Comment: One commenter requested clarification about how CMS intends to treat group practices participating in MIPS in regard to satisfying the “hospital-based clinician” definition, and questioned if it would evaluate the group as a whole, or each individual within the group. And if the latter, the commenter questioned if CMS would adopt a process for scoring individuals in a group differently than the overall group. Another commenter requested that CMS consider how the definition of a group, and use of a single TIN, could represent facility-based outpatient therapy clinicians. Currently, many facility-based outpatient clinicians operate under the facility's TIN.

    Response: We note that hospital-based MIPS eligible clinicians are considered MIPS eligible clinicians and are required to participate in MIPS. However, section II.E.5.g.(8)(a)(i) of this final rule with comment period describes our final policies regarding the re-weighting of the advancing care information performance category within the final score, in which we would assign a weight of zero when there are not sufficient measures applicable and available for hospital-based MIPS eligible clinicians.

    In regard to how the definition of a group applies to facility-based outpatient clinicians, we note that the MIPS payment adjustment applies only to the amount otherwise paid under Part B with respect to items and services furnished by a MIPS eligible clinician during a year, and we will apply the MIPS adjustment at the TIN/NPI level (see section II.E.7. of this final rule with comment period). For items and services furnished by such clinicians practicing in a facility that are billed by the facility, such items and services may be subject to the MIPS adjustment based on the MIPS eligible clinician's performance during the applicable performance period. For those billed Medicare Part B allowed charges we are able to associate with a MIPS eligible clinician at an NPI level, such items and services furnished by such clinicians would be included for purposes of applying any MIPS payment adjustment.

    Comment: Several commenters recommended that CMS extend groups to include multiple TINs and require that those TINs share and have access to the same EHR. Commenters noted that group reporting would be complicated by clinicians joining the group and by clinicians assigned to multiple TINs using different EHR systems. The commenters also expressed concern about the ability of groups to submit quality data under the group reporting option using different types of EHRs. Commenters also requested the ability to submit multiple specialty-specific data sets and recommended that CMS alter the scoring methodology.

    Response: We appreciate the commenters expressing their concerns and providing their suggestions. We are finalizing the definition of a group as proposed. We disagree with commenters that the definition of a group should be modified in order to account for operational and technical data mapping issues. We believe that the finalized definition of a group provides groups with the opportunity to utilize their performance data in ways that can improve coordination, teamwork, and shared responsibility.

    We do not believe that the definition of a group would create complications for eligible clinicians associated with multiple TINs. We note that individual eligible clinicians would be required to meet the MIPS requirements for each TIN/NPI association unless they are excluded from MIPS based on an exclusion established in section II.E.3. of this final rule with comment period.

    Comment: One commenter requested CMS to ensure that each service provided to a patient is associated with the actual clinician furnishing that service.

    Response: We note that the MIPS payment adjustment for individual MIPS eligible clinicians is applied to the Medicare Part B payments for items and services furnished by each MIPS eligible clinician. For groups reporting at the group level, scoring and the application of the MIPS payment adjustment is applied at the TIN level for Medicare Part B payments for items and services furnished by the eligible clinicians of the group.

    Comment: One commenter supported CMS' proposal for optional group performance tracking and submission, but recommended that CMS provide additional guidelines for clinicians who practice under multiple identifiers. The commenter requested additional clarification on how MIPS payment adjustments would impact clinicians working under multiple identifiers at multiple organizations.

    Response: We appreciate the support from the commenter. As previously noted, individual eligible clinicians who are part of several groups, and thus associated with multiple TINs, would be required to participate in MIPS for each group (TIN) association unless the eligible clinician (NPI) is excluded from MIPS. Section II.E.3.e. of this final rule with comment period describes how the exclusion policies relate to groups with eligible clinicians excluded from MIPS.

    Comment: With many clinicians practicing within multiple TINs, one commenter suggested that even though it is unclear how multiple-TIN clinicians who choose individual reporting would be scored, CMS should use the clinician's highest TIN performance score for each of the four performance categories. Another commenter requested clarification on how the Quality Payment Program rule will apply to clinicians who work under multiple TINs, including the scenario where one TIN is participating in an ACO and another is not.

    Response: We note that groups have the option to report at the individual or group level. For individual eligible clinicians associated with multiple TINs, the individual eligible clinician will either report at the individual level if the group elects individual-level reporting or be included in the group-level reporting if the group elects group-level reporting. As previously noted, individual eligible clinicians who are associated with multiple TINs would be required to participate in MIPS for each group (TIN) association unless the eligible clinician (NPI) is excluded from MIPS.

    Comment: One commenter reminded CMS that using TINs as identifiers has caused problems in the past, such as inaccurate TINs. When TINs are not accurate, performance rates and program metrics may be incorrect. The commenter recommended that CMS establish clear and efficient mechanisms for groups to resolve inconsistencies.

    Response: We appreciate the feedback from the commenter and will take into consideration the commenter's suggestions in future rulemaking.

    Comment: Several commenters supported the proposal to permit clinicians to report either at the individual or group level. However, one commenter expressed concern about limitations on the ability of clinicians, in the context of group-level reporting, to report the most appropriate and meaningful specialty measures. Another commenter indicated that it was not clear how group reporting would allow for specialty-specific reporting, given the lack of a TIN for individual departments within a larger faculty practice plan or physician group. The commenter noted that this could cause thousands of providers to miss out on the best use of MIPS because their facilities chose reporting measures and activities that would not reflect the care they individually provide. Therefore, the commenter suggested that CMS create a reporting option within MIPS that would allow specialty-specific groups to self-designate as “group” under MIPS even if they were part of the TIN for a larger faculty practice plan or physician group. The commenter noted that this would facilitate the comparison of physicians providing a similar mix of procedures for the purpose of assigning a final score. Another commenter recommended that CMS consider the common business model where large hospitals and health systems acquire multiple physician practices.

    Response: We appreciate the support from the commenters. We will consider the recommendations from the commenters in future rulemaking. We note that group-level reporting does not provide the option for groups to report at sub-levels of the group by specialty. We believe that group-level reporting ensures coordination, teamwork, and shared responsibility.

    Comment: A few commenters expressed concern regarding MIPS eligible clinicians moving practices in the middle of a reporting period. One commenter recommended that if a clinician changes TINs during the course of a year, their final composite score should be attributed to their final TIN on December 31 of that year. Another commenter indicated that by using a TIN/NPI combination, CMS could accurately match reporting data to an individual clinician because often the NPI of the clinician will not change, and CMS could match the new TIN to ensure accurate attribution.

    Response: We appreciate the concerns and suggestions from the commenters and note that individual MIPS eligible clinicians may be associated with more than one TIN during the performance period due to a variety of reasons with differing timeframes. In sections II.E.6. and II.E.7. of this final rule with comment period, we describe how individual MIPS eligible clinicians will have their performance assessed and scored and how the MIPS payment adjustment would be applied if a MIPS eligible clinician changes TINs during the performance period.

    Comment: One commenter expressed concern regarding how group size would be calculated, particularly how clinicians that are not subject to MIPS would be included in the size of the group.

    Response: CMS does not make an eligibility determination regarding group size. We note that groups attest to their group size for purposes of using the CMS Web Interface or identifying as a small practice. We also note that, for purposes of groups determining their size, group size is determined before exclusions are applied.

    Comment: One commenter recommended that CMS allow validation or updating of clinicians' identifying information in the PECOS system, and not a separate system.

    Response: We appreciate the suggestion from the commenter and will consider it as we operationalize the use of PECOS for MIPS.

    After consideration of the public comments we received, we are finalizing the use of multiple identifiers that allow MIPS eligible clinicians to be measured as an individual or collectively through a group's performance. Additionally, we are finalizing our proposal that the same identifier be used for all four performance categories. For example, if a group is submitting information collectively, then it must be measured collectively for all four MIPS performance categories: Quality, cost, improvement activities, and advancing care information. While we have multiple identifiers for participation and performance, we are finalizing the use of a single identifier, TIN/NPI, for applying the MIPS payment adjustment, regardless of how the MIPS eligible clinician is assessed (see final score methodology outlined in section II.E.6. of this final rule with comment period). Specifically, if the MIPS eligible clinician is identified for performance only using the TIN, we will use the TIN/NPI when applying the MIPS payment adjustment.

    a. Individual Identifiers

    We proposed to use a combination of billing TIN/NPI as the identifier to assess performance of an individual MIPS eligible clinician. Similar to PQRS, each unique TIN/NPI combination would be considered a different MIPS eligible clinician, and MIPS performance would be assessed separately for each TIN under which an individual bills. While we considered using the NPI only, we believe TIN/NPI is a better approach for MIPS. Both TIN and NPI are needed for payment purposes and using a combination of billing TIN/NPI as the MIPS eligible clinician identifier allows us to match MIPS performance and MIPS payment adjustments with the appropriate practice, particularly for MIPS eligible clinicians that bill under more than one TIN. In addition, using TIN/NPI also provides the flexibility to allow individual MIPS eligible clinician and group reporting, as the proposed group identifiers also include TIN as part of the identifier. We recognize that TIN/NPI is not a static identifier and can change if an individual MIPS eligible clinician changes practices and/or if a group merges with another between the performance period and payment adjustment period. Section II.E.7.a. of the proposed rule describes in more detail how we proposed to match performance in cases where the TIN/NPI changes. We requested comments on this proposal.

    The following is a summary of the comments we received regarding our proposal to use a combination of billing TIN/NPI as the identifier to assess performance of an individual MIPS eligible clinician.

    Comment: One commenter expressed concern that independent physicians would not fare well as a result of the proposed rule.

    Response: We appreciate the concern expressed by the commenter. We believe that independent clinicians will benefit from policies we are finalizing throughout this final rule with comment period such as the higher low-volume threshold, lower performance requirements, and lower performance threshold.

    Comment: One commenter found the MIPS terminology confusing and believed that tracking individual clinicians for reimbursement, as outlined in the proposed rule, would be difficult.

    Response: We appreciate the feedback from the commenter and will consider the ways we can explain the MIPS requirements to ensure that information is clear, understandable, and consistent.

    Comment: Several commenters requested clarification regarding how individual MIPS eligible clinicians who bill to multiple TINs would have their performance assessed. Commenters questioned if they are eligible for MIPS payment adjustment under multiple TINs, if they are expected to perform under all four categories for each TIN where they practice, and how a Partial QP and individual in a group practice would be assessed for purposes of the 2019 MIPS payment adjustment based on the TIN/NPI combination.

    Response: For MIPS eligible clinicians associated with multiple TINs, we note that MIPS eligible clinicians will need to meet the MIPS requirements for each TIN they are associated with unless they are excluded from the MIPS requirements based on one of the three exclusions (as described in section II.E.3. of this final rule with comment period) at the individual and/or group level.

    Comment: One commenter questioned the benefit to clinicians reporting at the TIN/NPI level compared to the NPI level.

    Response: We note that groups have the option to report at the individual (TIN/NPI) level or the group (TIN) level. Depending on the composition of the group, groups may find that reporting at the individual level is more advantageous than reporting at the group level, and vice versa. Individual eligible clinicians who are not part of a group would report at the individual level.

    Comment: To facilitate individual clinician-level information, one commenter recommended that CMS use the NPI identifier throughout the MIPS program. The commenter noted that the NPI is also used by the private sector, promoting greater alignment than would a newly created MIPS clinician identifier.

    Response: We appreciate the suggestion from the commenter, but disagree that we should establish an identifier only at the NPI level. We need to account not only for individual NPIs, but also for eligible clinicians and MIPS eligible clinicians who are associated with a group, given that group-level reporting is an option and scoring and MIPS payment adjustments would need to be applied accordingly. As a result, we are finalizing the individual MIPS eligible clinician identifier using the TIN/NPI combination.

    Comment: One commenter requested clarification on how clinicians using only a TIN will be scored, and then have their payment adjusted based on the TIN/NPI.

    Response: We note that groups reporting at the group level will be assessed and scored at the TIN level and have a MIPS payment adjustment applied at the TIN/NPI level. We note that the MIPS payment adjustment is applied to the MIPS eligible clinicians within the TIN for billed Medicare Part B charges.

    After consideration of the public comments we received, we are finalizing our proposed definition of a MIPS eligible clinician at § 414.1305 to use a combination of unique billing TIN and NPI combination as the identifier to assess performance of an individual MIPS eligible clinician. Each unique TIN/NPI combination will be considered a different MIPS eligible clinician, and MIPS performance will be assessed separately for each TIN under which an individual bills. We recognize that TIN/NPI is not a static identifier and can change if an individual MIPS eligible clinician changes practices and/or if a group merges with another between the performance period and payment adjustment period. We refer readers to section II.E.7.a. of this final rule with comment period, which describes our final policy for matching performance in cases where the TIN/NPI changes.
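    For illustration only, the following minimal Python sketch (which is not part of the regulatory text, and uses hypothetical claim records and field names) shows how each unique billing TIN/NPI combination can be treated as a separate unit of assessment, so that the same NPI billing under two different TINs is assessed separately under each TIN.

        from collections import defaultdict

        # Hypothetical claim records; the field names and values are illustrative only.
        claims = [
            {"tin": "123456789", "npi": "1111111111", "allowed_charges": 250.00},
            {"tin": "123456789", "npi": "2222222222", "allowed_charges": 410.00},
            {"tin": "987654321", "npi": "1111111111", "allowed_charges": 180.00},
        ]

        # Each unique TIN/NPI combination is treated as a distinct unit of assessment,
        # so NPI 1111111111 appears once under each of the two TINs it bills under.
        assessment_units = defaultdict(list)
        for claim in claims:
            assessment_units[(claim["tin"], claim["npi"])].append(claim)

        for (tin, npi), unit_claims in sorted(assessment_units.items()):
            total = sum(c["allowed_charges"] for c in unit_claims)
            print(f"TIN {tin} / NPI {npi}: {len(unit_claims)} claim(s), ${total:.2f} allowed")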

    b. Group Identifiers for Performance

    We proposed the following way a MIPS eligible clinician may have their performance assessed as part of a group under MIPS. We proposed to use a group's billing TIN to identify a group. This approach has been used as a group identifier for both PQRS and VM. The use of the TIN would significantly reduce the participation burden that could be experienced by large groups. Additionally, the utilization of the TIN benefits large and small practices by allowing such entities to submit performance data one time for their group and develop systems to improve performance. Groups that report on quality performance measures through certain data submission methods must register to participate in MIPS as described in section II.E.5.b. of the proposed rule.

    We proposed to codify the definition of a group at § 414.1305 as a group that would consist of a single TIN with two or more MIPS eligible clinicians (as identified by their individual NPI) who have reassigned their billing rights to the TIN. We requested comments on this proposal.

    The following is a summary of the comments we received regarding our proposal establishing the way a MIPS eligible clinician may have their performance assessed as part of a group under MIPS.

    Comment: Several commenters expressed concern regarding the group identifier. Commenters indicated that a group identifier restricts group reporting to TIN-level identification because TINs may represent many different specialties and subspecialists that have elected to join together for non-practice related reasons, such as billing purposes. Commenters recommended that CMS allow TINs to subdivide into smaller groups for the purposes of participating in MIPS. A few commenters recommended that CMS expand the definition of a group to include subsets in a TIN so that groups of specialists or sub-specialists within a TIN can be allowed to group accordingly. One commenter suggested expanding the allowable group identifiers for physician groups to include a group's sub-tax identification numbers based on the Medicare PFS area or the hospital payment area in which they provide care. A few commenters encouraged CMS to consider providing additional flexibility to allow clinicians to submit group rosters of TIN/NPI combinations to CMS to define a MIPS reporting group. The commenters noted that this approach would allow a large, multispecialty group under one TIN to split into clinically-relevant reporting groups, or multiple TINs within a delivery system to group report under a common group. In addition to the options that CMS proposed regarding use of multiple identifiers to assess physician/group performance under MIPS, one commenter recommended that CMS permit groups to “split” TINs for this purpose. Another commenter noted that such flexibility would be a very useful precursor to future APM participation.

    Response: We appreciate the commenters expressing their concerns and providing recommendations. We recognize that groups have varying compositions of eligible clinicians and will consider the suggestions from commenters in future rulemaking. We disagree with commenters regarding their suggested approach for defining a group because multiple sublevel identifiers create more complexity, given that they would require the establishment of numerous identifiers in order to account for all types of group compositions. We note that except for groups that contain APM participants, we are not permitting groups to “split” TINs if they choose to participate in MIPS as a group. We believe it is critical to establish a definition of a group that ensures coordination, teamwork, and shared responsibility at the group level, and our proposed definition achieves this objective. We note that groups have the opportunity to analyze their data in ways that are meaningful to the group, which may include analyses for each segment of a group to promote and enhance the coordination of care and improve the quality of care and health outcomes.

    Comment: Several commenters supported the proposed approach to reduce the participation burden by allowing large groups to report as a group. One commenter requested clarification on how a group's performance and final score would be applied to all NPIs in the TIN, particularly whether CMS would assess each individual across the four performance categories and then cumulatively calculate the final score or whether CMS would assess a group-based collective set of objectives that could be met by any combination of individual clinicians inside the group to calculate the final score.

    Response: In section II.E.3.d. of this final rule with comment period, we note that groups reporting at the group level (TIN) must meet the definition of a group at all times during the performance period for the MIPS payment year. In order for groups to have their performance assessed as a group across all performance categories, individual eligible clinicians and MIPS eligible clinicians within a group must aggregate their performance data across the TIN.

    Comment: One commenter indicated that the scoring methodology for large TINs is ambiguous.

    Response: We note that the scoring methodology for groups, regardless of size, is the same as described in section II.E.6. of this final rule with comment period.

    Comment: One commenter requested further clarification of attribution of eligible activities (for example, improvement activities) for one organization with one TIN that participates in MIPS and multiple APMs.

    Response: For those TINs that have MIPS eligible clinicians that are subject to the APM scoring standard, we refer readers to section II.E.5.h. of this final rule with comment period for our discussion regarding policies pertaining to the APM scoring standard.

    Comment: Several commenters agreed with our proposal to not require an additional identifier for qualified clinicians and instead use a combination of MIPS eligible clinician NPI and group billing TIN. To ease the administrative burden, commenters recommended the following: have attribution of a qualified clinician to a group's billing TIN be done automatically by CMS based on billing PECOS data; do not require individual third party rights for qualified clinicians, but instead let program administrators at each health system register for their groups and automatically have access to qualified clinicians associated with that TIN; and provide for the ability to look up statuses, eligibility, program history and other information by both individual NPI and group TIN.

    Response: We appreciate the recommendations from the commenters and will consider them as we establish subregulatory guidance regarding the voluntary registration process for groups and the registration process for groups electing to use the CMS Web Interface data submission mechanism and/or administer the CAHPS for MIPS survey.

    Comment: Several commenters requested that CMS consistently define “small” practices and consider additional accommodations for such practices. Commenters noted that the proposal may overburden smaller groups. A few commenters indicated that solo or small practices with fewer than 25 clinicians should be exempt from MIPS, while other commenters recommended that group practices of 15 or fewer clinicians be exempt from MIPS. One commenter suggested that CMS review opportunities to provide incentives targeted around quality metrics reflective of the patient population served.

    Response: We note that a small practice is defined as a practice consisting of 15 or fewer eligible clinicians. We note that the statute does not provide the discretion to establish exclusions other than the exclusions pertaining to new Medicare-enrolled eligible clinicians, QPs and Partial QPs who do not participate in MIPS, and eligible clinicians who do not exceed the low-volume threshold. However, small groups may be excluded from MIPS if they do not exceed the low-volume threshold as established in section II.E.3.c. of this final rule with comment period.

    Comment: One commenter requested that post-acute and long-term care practices be considered separately in this proposal. The commenter indicated that grouping them with their specialty peers practicing in a traditional ambulatory setting creates inequities. In particular, the commenter noted that benchmarks and thresholds are not comparable due to the different natures of the types of practice.

    Response: We recognize that groups will have varying compositions and note that groups have the option to report at the individual level or group level. In section II.E.3.c. of this final rule with comment period, we describe the low-volume threshold exclusion which is applied at the individual eligible clinician level or the group level. A group that would not be excluded from MIPS when reporting at a group level may find it advantageous to report at the individual level.

    After consideration of the public comments we received, we are finalizing a modification to our proposal regarding the use of a group's billing TIN to identify a group. Thus, we are codifying the definition of a group at § 414.1305 to mean a group that consists of a single TIN with two or more eligible clinicians (including at least one MIPS eligible clinician), as identified by their individual NPI, who have reassigned their billing rights to the TIN.
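    For illustration only, the following Python sketch (not part of the regulatory text) checks whether a roster of clinicians billing under a single TIN meets the finalized definition of a group, that is, two or more eligible clinicians, including at least one MIPS eligible clinician, who have reassigned their billing rights to the TIN; the data structure and field names are hypothetical.

        # Hypothetical roster of clinicians who have reassigned billing rights to one TIN.
        # Each entry records the clinician's NPI and whether they are a MIPS eligible clinician.
        roster = [
            {"npi": "1111111111", "mips_eligible": True},
            {"npi": "2222222222", "mips_eligible": False},
        ]

        def meets_group_definition(clinicians: list) -> bool:
            """True if the TIN has two or more eligible clinicians, including at
            least one MIPS eligible clinician (the finalized definition of a group)."""
            return len(clinicians) >= 2 and any(c["mips_eligible"] for c in clinicians)

        print(meets_group_definition(roster))   # True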

    c. APM Entity Group Identifier for Performance

    We proposed the following way to identify a group to support APMs (see section II.F.5.b. of this rule). To ensure we have accurately captured all of the eligible clinicians participating in the APM Entity, we proposed that each eligible clinician who is a participant of an APM Entity would be identified by a unique APM participant identifier. The unique APM participant identifier would be a combination of four identifiers: (1) APM Identifier (established by CMS; for example, XXXXXX); (2) APM Entity identifier (established under the APM by CMS; for example, AA00001111); (3) TIN(s) (9 numeric characters; for example, XXXXXXXXX); (4) EP NPI (10 numeric characters; for example, 1111111111). For example, an APM participant identifier could be APM XXXXXX, APM Entity AA00001111, TIN-XXXXXXXXX, NPI-1111111111.

    We proposed to codify the definition of an APM Entity group at § 414.1305 as an APM Entity identified by a unique APM participant identifier. We requested comments on these proposals. See section II.E.5.h. of the proposed rule for proposed policies regarding requirements for APM Entity groups under MIPS.

    The following is a summary of the comments we received regarding our proposal establishing the way each eligible clinician who is a participant of an APM Entity would be identified by a unique APM participant identifier.

    Comment: Several commenters supported the approach to identify APM professionals by a combination of APM identifier, APM entity identifier, TIN and NPI. Commenters requested that CMS make the QP identifiers available via an application program interface (API), which would improve an APM participant's ability to provide accurate and timely reports. However, one commenter recommended that an APM Entity group be defined using a unique APM participant identifier composed of a combination of four, cross-referenced identifiers: APM ID, MIPS ID, TIN, and NPI. The commenter shared that their Shared Savings Program experience with their ACO Identifier has been very positive, and suggested that MIPS adopt a similar definition and use the APM-MIPS ID for day-to-day APM identification, versus the proposed alternative.

    Response: We appreciate the support and suggestions from the commenters. As we operationalize the process for APM Entity identifiers, we will take into consideration the recommendation of making the QP identifier available via an API. In regard to the suggestion regarding the APM Entity group identifier, we do not believe it is necessary to create an additional MIPS ID for the purposes of tracking APM Entities under MIPS. We further note that for all APMs, the APM Entity identifiers are the same identifiers that are currently used by CMS for other purposes. For example, in the case of the Shared Savings Program, since ACOs are the participating APM Entities, the APM Entity identifier would be the same as the ACO Identifier. We believe that tracking APM Entity participation in this way is most consistent with how CMS currently tracks APM Entity participation, and eliminates any unnecessary burden of tracking new, additional identifiers.

    Comment: One commenter requested clarification on the use of the APM participant identifier and whether the APM participant identifier would be a required data element for submission.

    Response: We note that the APM participant identifier will be used to ensure accurate tracking of all APM participants and is comprised of the four already existing identifiers described in this section. In regard to the data elements required for the submission of data via a submission mechanism, the required data elements will depend on the requirements for each data submission mechanism. The submission procedures for each data submission mechanism will be further outlined in subregulatory guidance.

    Comment: One commenter did not support the proposal regarding how an APM Entity group would be defined. The commenter requested clarification as to why an APM participant could not be identified by a combination of TIN/NPI, and a single character prefix or suffix to denote the eligible clinician is part of an APM entity.

    Response: We appreciate the feedback from the commenter. We note that our proposal to use the APM ID, APM Entity Identifier, TIN and NPI is most consistent with how APM participation is currently tracked within our systems. Introducing another method of identification, such as a single character prefix or suffix, would be a deviation from our already existing operational processes, and we do not foresee that such a deviation would add any program efficiencies or facilitate participant tracking.

    Comment: One commenter did not support mandatory reporting and participation, and indicated that ACOs are an example of forcing participation in alternative payment models resulting in the failure to save money and difficulties to retain participants.

    Response: We appreciate the concerns from the commenter and note that participation in MIPS is mandatory, while participation in an ACO (or APM) is voluntary. Based on the results generated to date under the Shared Savings Program, the data suggest that the longer organizations stay in the Shared Savings Program, the more likely they are to achieve savings. Also, the number of organizations participating in the Shared Savings Program is increasing annually.

    Comment: One commenter recommended that CMS take into account the burden placed on certain subspecialties that may not and will not have the flexibility to participate in many current APMs. Another commenter recommended that CMS identify specialties and subspecialties currently unable to participate in Advanced APMs and establish ways to minimize their burden and risk of receiving a penalty under MIPS.

    Response: We thank the commenters for expressing their concerns. As we develop the operational elements of the MIPS program, we strive to establish a process ensuring that participation in MIPS can be successful. Based on the experience and feedback provided by stakeholders regarding previously established CMS programs, we are improving and enhancing the user-experience for MIPS. We will continue to seek stakeholder feedback as we implement the MIPS program.

    After consideration of the public comments we received, we are finalizing our proposal that each eligible clinician who is a participant of an APM Entity will be identified by a unique APM participant identifier. The unique APM participant identifier will be a combination of four identifiers: (1) APM Identifier (established by CMS; for example, XXXXXX); (2) APM Entity identifier (established under the APM by CMS; for example, AA00001111); (3) TIN(s) (9 numeric characters; for example, XXXXXXXXX); (4) EP NPI (10 numeric characters; for example, 1111111111). For example, an APM participant identifier could be APM XXXXXX, APM Entity AA00001111, TIN-XXXXXXXXX, NPI-1111111111. Thus, we are codifying the definition of an APM Entity group at § 414.1305 to mean a group of eligible clinicians participating in an APM Entity, as identified by a combination of the APM identifier, APM Entity identifier, Taxpayer Identification Number (TIN), and National Provider Identifier (NPI) for each participating eligible clinician.
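    As an illustration only (not part of the regulatory text), the following Python sketch assembles the four existing identifiers into a single APM participant identifier string; the function name, the serialization layout, and the sample values are assumptions made for the example, since the rule specifies the components but not a particular string format.

        def apm_participant_identifier(apm_id: str, apm_entity_id: str, tin: str, npi: str) -> str:
            """Combine the four existing identifiers into one APM participant identifier."""
            if len(tin) != 9 or not tin.isdigit():
                raise ValueError("TIN must be 9 numeric characters")
            if len(npi) != 10 or not npi.isdigit():
                raise ValueError("NPI must be 10 numeric characters")
            # The comma-separated layout below mirrors the example in the text,
            # but any serialization that preserves all four components would work.
            return f"APM {apm_id}, APM Entity {apm_entity_id}, TIN-{tin}, NPI-{npi}"

        # Hypothetical values patterned on the placeholders used in this section.
        print(apm_participant_identifier("000123", "AA00001111", "123456789", "1111111111"))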

    3. Exclusions

    a. New Medicare-Enrolled Eligible Clinician

    Section 1848(q)(1)(C)(v) of the Act provides that in the case of a professional who first becomes a Medicare-enrolled eligible clinician during the performance period for a year (and had not previously submitted claims under Medicare either as an individual, an entity, or a part of a physician group or under a different billing number or tax identifier), that the eligible clinician will not be treated as a MIPS eligible clinician until the subsequent year and performance period for that year. In addition, section 1848(q)(1)(C)(vi) of the Act clarifies that individuals who are not deemed MIPS eligible clinicians for a year will not receive a MIPS payment adjustment. Accordingly, we proposed at § 414.1305 that a new Medicare-enrolled eligible clinician be defined as a professional who first becomes a Medicare-enrolled eligible clinician within the PECOS during the performance period for a year and who has not previously submitted claims as a Medicare-enrolled eligible clinician either as an individual, an entity, or a part of a physician group or under a different billing number or tax identifier. These eligible clinicians will not be treated as a MIPS eligible clinician until the subsequent year and the performance period for such subsequent year. As discussed in section II.E.4. of the proposed rule (81 FR 28179 through 28181), we proposed that the MIPS performance period would be the calendar year (January 1 through December 31) 2 years prior to the year in which the MIPS payment adjustment is applied. For example, an eligible clinician who newly enrolls in Medicare within PECOS in 2017 would not be required to participate in MIPS in 2017, and he or she would not receive a MIPS payment adjustment in 2019. The same eligible clinician would be required to participate in MIPS in 2018 and would receive a MIPS payment adjustment in 2020, and so forth. In addition, in the case of items and services furnished during a year by an individual who is not an MIPS eligible clinician, there will not be a MIPS payment adjustment applied for that year. We also proposed at § 414.1310(d) that in no case would a MIPS payment adjustment apply to the items and services furnished by new Medicare-enrolled eligible clinicians. We requested comments on these proposals.
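    For illustration only, the following Python sketch (not part of the regulatory text) walks through the timeline described above under the proposed calendar-year performance period and the 2-year lag before the payment adjustment; the function and field names are hypothetical.

        def new_enrollee_timeline(enrollment_year: int) -> dict:
            """Timeline for a clinician who first enrolls in Medicare during enrollment_year,
            assuming a calendar-year performance period and a payment adjustment applied
            two years after the performance period (as proposed)."""
            first_performance_year = enrollment_year + 1   # excluded from MIPS in the enrollment year
            return {
                "excluded_performance_year": enrollment_year,
                "first_performance_year": first_performance_year,
                "first_payment_adjustment_year": first_performance_year + 2,
            }

        # Example from the text: enroll in 2017, first participate in 2018,
        # first receive a MIPS payment adjustment in 2020.
        print(new_enrollee_timeline(2017))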

    The following is a summary of the comments we received regarding our proposals to define a new Medicare-enrolled eligible clinician as a professional who first becomes a Medicare-enrolled eligible clinician within the PECOS during the performance period for a year and who has not previously submitted claims under Medicare either as an individual, an entity, or a part of a physician group or under a different billing number or tax identifier, that the eligible clinician would not be treated as a MIPS eligible clinician until the subsequent year and performance period for such subsequent year, that a MIPS payment adjustment would not be applied in the case of items and services furnished during a year by an individual who is not an MIPS eligible clinician, and that in no case would a MIPS payment adjustment apply to the items and services furnished by new Medicare-enrolled eligible clinicians.

    Comment: One commenter recommended postponing the implementation of the “new” types of clinicians to a later effective date.

    Response: We appreciate the suggestion from the commenter, but note that we do not find it necessary or justifiable to postpone the implementation of the new Medicare-enrolled eligible clinician provision.

    Comment: One commenter requested clarification on how CMS would require clinicians who are new Medicare-enrolled eligible clinicians to participate in MIPS after their first 12 months of Medicare enrollment passed.

    Response: We note that section 1848(q)(1)(C)(v) of the Act provides that in the case of a professional who first becomes a Medicare-enrolled eligible clinician during the performance period for a year (and had not previously submitted claims under Medicare either as an individual, an entity, or a part of a physician group or under a different billing number or tax identifier), the eligible clinician will not be treated as a MIPS eligible clinician until the subsequent year and performance period for that year. We note that new Medicare-enrolled eligible clinicians are excluded from MIPS during the performance period in which they are identified as new Medicare-enrolled eligible clinicians. For example, if an eligible clinician becomes a new Medicare-enrolled eligible clinician in April of a particular year, such eligible clinician would be excluded from MIPS until the subsequent year and performance period for that year, in which case such eligible clinician would be required to participate in MIPS starting in January of the next year.

    Moreover, section 1848(q)(1)(C)(vi) of the Act clarifies that individuals who are not deemed MIPS eligible clinicians for a year will not receive a MIPS payment adjustment. Accordingly, we define a new Medicare-enrolled eligible clinician as a professional who first becomes a Medicare-enrolled eligible clinician within the PECOS during the performance period for a year and who has not previously submitted claims as a Medicare-enrolled eligible clinician either as an individual, an entity, or a part of a physician group or under a different billing number or tax identifier. These eligible clinicians will not be treated as a MIPS eligible clinician until the subsequent year and the performance period for such subsequent year. Thus, such eligible clinicians would be treated as a MIPS eligible clinician in their subsequent year of being a Medicare-enrolled eligible clinician, required to participate in MIPS, and subject to the MIPS payment adjustment for the performance period of that subsequent year.

    Comment: One commenter requested clarification on clinicians' eligibility under MIPS and their designation on whether they are Medicare or Medicaid-enrolled from year to year.

    Response: In section II.E.1.a. of this final rule with comment period, we define a MIPS eligible clinician. Clinicians meeting the definition of a MIPS eligible clinician are required to participate in MIPS unless eligible for an exclusion as defined in section II.E.3. of this final rule with comment period. For purposes of MIPS, we are able to identify an eligible clinician who first becomes a Medicare-enrolled eligible clinician within the PECOS during the performance period for a year and who has not previously submitted claims as a Medicare-enrolled eligible clinician either as an individual, an entity, or a part of a physician group or under a different billing number or tax identifier.

    Comment: Several commenters supported the exclusion of new Medicare-enrolled eligible clinicians from MIPS; however, commenters indicated that it is unreasonable to require new Medicare-enrolled eligible clinicians to begin participating in MIPS during the next performance period, especially those that become new Medicare-enrolled eligible clinicians later in the year. The commenters recommended giving new Medicare-enrolled eligible clinicians the option of being excluded from MIPS in both the performance period in which they begin treating Medicare patients and in the following performance period. One commenter opposed CMS's proposal that clinicians newly enrolling in Medicare in 2017 would have to participate in MIPS starting January 1, 2018, and requested that CMS instead extend the window so that clinicians enrolling in Medicare in 2017 would not begin participation until January 1, 2019. Another commenter suggested that CMS consider new Medicare-enrolled eligible clinicians ineligible for MIPS until the first performance period following at least 12 months of enrollment in Medicare.

    Response: We thank the commenters for expressing their concerns. While the statute does not give the Secretary discretion to further delay MIPS participation for these eligible clinicians, we note that in the transition year (CY 2017) and performance period for such year in which an eligible clinician is treated as a MIPS eligible clinician, the clinician may qualify for an exclusion under the low-volume threshold. We refer readers to section II.E.3.c. of this final rule with comment period, which further describes the low-volume threshold provision.

    Comment: A few commenters supported CMS' proposal that a new Medicare-enrolled eligible clinician would not be eligible to participate in the MIPS program until the subsequent performance period.

    Response: We appreciate the support from the commenters.

    Comment: A few commenters offered recommendations pertaining to exemptions that CMS should consider. One commenter suggested that medical/surgical practices of 15 professionals or fewer be fully exempt from MIPS; otherwise, many Medicare patients risk losing access to physicians who have cared for them for many years. Another commenter recommended that MIPS eligible clinicians who are designated by a private insurer as Tier 1, part of a Center of Excellence, or a High Quality Provider should be exempt from penalties because they are already a proven benefit to the system and should not be penalized.

    Response: We appreciate the commenters providing their recommendations. We note that the suggestions are outside the scope of the proposals described in the proposed rule (81 FR 28161) and reiterate that the statute only allows for limited exceptions for eligible clinicians to be exempt from the MIPS requirements.

    Comment: One commenter encouraged CMS to only use exceptions and special cases as outlined in the proposed rule when absolutely necessary because the creation of exceptions, exclusions, and multiple performance pathways would introduce unnecessary reporting burden for participating MIPS eligible clinicians.

    Response: We thank the commenter for the suggestion and note that in this final rule with comment period, we are finalizing our proposed exclusions pertaining to new Medicare-enrolled eligible clinicians and QPs and Partial QPs, and modifying our proposed exclusion pertaining to the low-volume threshold, as discussed in sections II.E.3.a., II.E.3.b., and II.E.3.c., of this final rule with comment period.

    After consideration of the public comments we received, we are finalizing the definition of a new Medicare-enrolled eligible clinician at § 414.1305 as a professional who first becomes a Medicare-enrolled eligible clinician within the PECOS during the performance period for a year and who had not previously submitted claims under Medicare either as an individual, an entity, or a part of a physician group or under a different billing number or tax identifier. We are finalizing our proposal at § 414.1310(c) that these eligible clinicians will not be treated as a MIPS eligible clinician until the subsequent year and the performance period for such subsequent year. As outlined in section II.E.4. of this final rule with comment period, we are finalizing a modification to the MIPS performance period to be a minimum of one continuous 90-day period within CY 2017. In the case of items and services furnished during a year by an individual who is not a MIPS eligible clinician during the performance period, there will not be a MIPS payment adjustment applied for that payment adjustment year. Additionally, we are finalizing our proposal at § 414.1310(d) that in no case would a MIPS payment adjustment apply to the items and services furnished during a year by new Medicare-enrolled eligible clinicians for the applicable performance period.

    We believe that it would be beneficial for eligible clinicians to know during the performance period of a calendar year whether or not they are identified as a new Medicare-enrolled eligible clinician. For purposes of this section, we are coining the term “new Medicare-enrolled eligible clinician determination period” and define it to mean the 12 months of a calendar year applicable to the performance period. During the new Medicare-enrolled eligible clinician determination period, we will conduct eligibility determinations on a quarterly basis, to the extent that is technically feasible, in order to identify new Medicare-enrolled eligible clinicians that would be excluded from the requirement to participate in MIPS for the applicable performance period. Given that the performance period is a minimum of one continuous 90-day period within CY 2017, we believe it would be beneficial for such eligible clinicians to be identified as being excluded from MIPS requirements on a quarterly basis in order for individual eligible clinicians or groups to plan and prepare accordingly. For future years of the MIPS program, we will conduct similar eligibility determinations on a quarterly basis during the new Medicare-enrolled eligible clinician determination period, which consists of the 12 months of a calendar year applicable to the performance period, in order to identify throughout the calendar year eligible clinicians who would be excluded from MIPS as a result of first becoming new Medicare-enrolled eligible clinicians during the performance period for a given year.

    b. Qualifying APM Participant (QP) and Partial Qualifying APM Participant (Partial QP)

    Sections 1848(q)(1)(C)(ii)(I) and (II) of the Act provide that the definition of a MIPS eligible clinician does not include, for a year, an eligible clinician who is a Qualifying APM Participant (QP) (as defined in section 1833(z)(2) of the Act) or a Partial Qualifying APM Participant (Partial QP) (as defined in section 1848(q)(1)(C)(iii) of the Act) who does not report on the applicable measures and activities that are required under MIPS. Section II.F.5. of the proposed rule provides detailed information on the determination of QPs and Partial QPs.

    We proposed that the definition of a MIPS eligible clinician at § 414.1310 does not include QPs (defined at § 414.1305) and Partial QPs (defined at § 414.1305) who do not report on applicable measures and activities that are required to be reported under MIPS for any given performance period. Partial QPs will have the option to elect whether or not to report under MIPS, which determines whether or not they will be subject to MIPS payment adjustments. Please refer to the section II.F.5.c. of the proposed rule where this election is discussed in greater detail. We requested comments on this proposal.

    The following is a summary of the comments we received regarding our proposal that the definition of a MIPS eligible clinician does not include QPs (defined at § 414.1305) and Partial QPs (defined at § 414.1305) who do not report on applicable measures and activities that are required to be reported under MIPS for any given performance period, in which Partial QPs will have the option to elect whether or not to report under MIPS.

    Comment: One commenter recommended that CMS consider presumptive QP status in the first performance year, and prospective notification of QP status based on prior year thresholds. Alternatively, if in the year following the performance year CMS determines the Advanced APM Entity has not yet met the required threshold score, the commenter indicated that CMS could either: Assign the entity's participating clinicians a neutral MIPS score without a penalty or reward; or allow them to complete two of the four MIPS performance categories in 2018 and have the results count for 2019 payments.

    Response: We refer readers to section II.F.5 of this final rule with comment period for policies regarding QP and Partial QP determinations.

    After consideration of the public comments we received, we are finalizing our proposal at § 414.1305 that the definition of a MIPS eligible clinician does not include QPs (defined at § 414.1305) and Partial QPs (defined at § 414.1305) who do not report on applicable measures and activities that are required to be reported under MIPS for any given performance period in a year. Also, we are finalizing our proposed policy at § 414.1310(b) that for a year, QPs (defined at § 414.1305) and Partial QPs (defined at § 414.1305) who do not report on applicable measures and activities that are required to be reported under MIPS for any given performance period in a year are excluded from MIPS. Partial QPs will have the option to elect whether or not to report under MIPS, which determines whether or not they will be subject to MIPS payment adjustments.

    c. Low-Volume Threshold

    Section 1848(q)(1)(C)(ii)(III) of the Act provides that the definition of a MIPS eligible clinician does not include MIPS eligible clinicians who are below the low-volume threshold selected by the Secretary under section 1848(q)(1)(C)(iv) of the Act for a given year. Section 1848(q)(1)(C)(iv) of the Act requires the Secretary to select a low-volume threshold to apply for the purposes of this exclusion, which may include one or more of the following: (1) The minimum number, as determined by the Secretary, of Part B-enrolled individuals who are treated by the MIPS eligible clinician for a particular performance period; (2) the minimum number, as determined by the Secretary, of items and services furnished to Part B-enrolled individuals by the MIPS eligible clinician for a particular performance period; and (3) the minimum amount, as determined by the Secretary, of allowed charges billed by the MIPS eligible clinician for a particular performance period.

    We proposed at § 414.1305 to define MIPS eligible clinicians or groups who do not exceed the low-volume threshold as an individual MIPS eligible clinician or group who, during the performance period, has Medicare billing charges less than or equal to $10,000 and provides care for 100 or fewer Part B-enrolled Medicare beneficiaries. We believed this strategy held more merit as it retains as MIPS eligible clinicians those who are treating relatively few beneficiaries but engage in resource-intensive specialties, or those treating many beneficiaries with relatively low-priced services. By requiring both criteria to be met, we can meaningfully measure the performance and drive quality improvement across the broadest range of MIPS eligible clinician types and specialties. Conversely, it excludes MIPS eligible clinicians who do not have a substantial quantity of interactions with Medicare beneficiaries or furnish high cost services.

    In developing this proposal, we considered using items and services furnished to Part B-enrolled individuals by the MIPS eligible clinician for a particular performance period rather than patients, but a review of the data reflected there were nominal differences between the two methods. We plan to monitor the proposed requirement and anticipate that the specific thresholds will evolve over time. We requested comments on this proposal, including alternative patient thresholds, case thresholds, and dollar values.

    The following is a summary of the comments we received regarding our proposal to define MIPS eligible clinicians or groups who do not exceed the low-volume threshold as an individual MIPS eligible clinician or group who, during the performance period, has Medicare billing charges less than or equal to $10,000 and provides care for 100 or fewer Part B-enrolled Medicare beneficiaries.

    Comment: A few commenters supported the proposed policy to exempt MIPS eligible clinicians or groups from MIPS requirements who do not exceed the low-volume threshold of having Medicare billing charges less than or equal to $10,000 and providing care for 100 or fewer Part B-enrolled Medicare beneficiaries. In particular, one commenter expressed support for the dual criteria of the low-volume threshold (Medicare billing charges less than or equal to $10,000 and providing care for 100 or fewer Part B-enrolled Medicare beneficiaries).

    Response: We appreciate the support from the commenters.

    Comment: A significant portion of commenters expressed concern regarding our proposed low-volume threshold provision, particularly the requirement for MIPS eligible clinicians and groups to meet both the low-volume threshold pertaining to the dollar value of Medicare billing charges and the number of Medicare Part B beneficiaries cared for during a performance period. The commenters requested that CMS modify the criteria under the definition of MIPS eligible clinicians or groups who do not exceed the low-volume threshold to require that an individual MIPS eligible clinician or group would need to meet either the low-volume threshold pertaining to the dollar value of Medicare billing charges or the number of Medicare Part-B beneficiaries cared for during a performance period in order to determine whether or not an individual MIPS eligible clinician or group exceeds the low-volume threshold. Several commenters noted that such a change would provide greater flexibility for specialty clinicians.

    Response: We appreciate the concerns expressed by commenters. We agree with the commenters and have modified our proposal so that MIPS eligible clinicians and groups are not required to meet both the dollar value of Medicare billing charges and the number of Medicare Part B beneficiaries cared for during a performance period. Instead, we are finalizing that individual MIPS eligible clinicians and groups need only meet one of the two criteria to be excluded: the threshold of $30,000 or less in billed Medicare Part B allowed charges, or the threshold of 100 or fewer Part B-enrolled Medicare beneficiaries. Also, we believe that the modified proposal reduces the risk of clinicians withdrawing as Medicare suppliers or minimizing the number of Medicare beneficiaries that they treat in a year. We will monitor any effect on Medicare participation. Similar to the goal of the proposed low-volume threshold, we believe that this modified approach holds more merit as it retains as MIPS eligible clinicians those who are treating relatively few beneficiaries but engage in resource-intensive specialties, or those treating many beneficiaries with relatively low-priced services. We believe that the modified proposal would also ensure that we can meaningfully measure performance and drive quality improvement across a broad range of MIPS eligible clinician types and specialties. We note that eligible clinicians who are excluded from the definition of a MIPS eligible clinician under the low-volume threshold or another applicable exclusion can still participate voluntarily in MIPS, but are not subject to positive or negative MIPS adjustments. For future consideration, we are seeking additional comment on possible ways that excluded eligible clinicians might be able to opt in to the MIPS program (and the MIPS payment adjustment) in future years in a manner consistent with the statute.

    Comment: The majority of commenters recommended that CMS increase the low-volume threshold. A significant portion of commenters requested that MIPS eligible clinicians or groups who do not exceed the low-volume threshold should have Medicare billing charges less than or equal to $30,000 or provide care for 100 or fewer Part B-enrolled Medicare beneficiaries. Many commenters noted that raising the low-volume threshold would allow more physicians with a small number of Medicare patients to be recognized as MIPS eligible clinicians or groups who do not exceed the low-volume threshold, particularly MIPS eligible clinicians providing specialty services or high risk services. Several commenters indicated that women on Medicare receive expensive surgical care from OB/GYNs, which could cause MIPS eligible clinicians and groups to exceed the proposed low-volume threshold despite a very small number of Medicare patients. The commenters suggested that CMS exempt MIPS eligible clinicians and groups from the MIPS program who have less than $30,000 in Medicare allowed charges per year or provide care for fewer than 100 unique Medicare Part B beneficiaries.

    A few commenters indicated that an increase in the low-volume threshold would mitigate an undue burden on small practices. One commenter stated that RHCs and such clinicians will have less than $10,000 in Medicare billing charges, but many of them will have more than 100 Part B beneficiaries under their care. The commenter expressed concern that RHCs may be burdened with MIPS requirements for a low level of Part B claims and thus may either face penalties or the cost of implementing the MIPS requirements. A few commenters indicated that the low-volume threshold should be high enough to exempt physicians who have no possibility of a positive return on their investment in the cost of reporting.

    Other recommendations from commenters included the following: align the patient cap with the CPC+ patient panel requirements, which would increase the number of Medicare Part B beneficiaries cared for to 150 (and would prevent clinicians from having two different low-volume thresholds within the same program); exclude groups from participation in MIPS based on an aggregated threshold for the group with the rate of $30,000 and 100 patients per clinician, in which a group of two eligible clinicians would be excluded if charging under $60,000 and caring for under 200 Medicare Part B-enrolled Medicare beneficiaries; exempt MIPS eligible clinicians for the transition year of MIPS who bill under Place of Service 20, which is the designation for a place with the purpose of diagnosing and treating illness or injury for unscheduled, ambulatory patients seeking immediate medical attention; and exempt facilities operating in Frontier areas from MIPS participation, at least until 2019 when the list of MIPS eligible clinicians expands and additional MIPS eligible clinicians are able to participate in MIPS.

    There were other commenters who requested that the threshold criteria regarding the dollar value of Medicare billed charges and the number of Medicare Part B beneficiaries cared for be increased to the following: $25,000 Medicare billed charges or 50 or 100 Part B beneficiaries; $50,000 Medicare billed charges or 100 or 150 Part B beneficiaries; $75,000 Medicare billed charges or 100 or 750 Part B beneficiaries; $100,000 Medicare billed charges or 1000 Part B beneficiaries; $250,000 Medicare billed charges or 150 Part B beneficiaries; and $500,000 Medicare billed charges or 400 or 500 Part B beneficiaries.

    Several commenters requested that CMS temporarily increase the low-volume threshold in order for small practices to not be immediately impacted by the implementation of MIPS. One commenter suggested that the threshold be increased to 250 unique Medicare patients and a total Medicare billing not to exceed $200,000 for 5 years. Another commenter recommended that CMS set the low-volume threshold in 2019 at $250,000 of Medicare billing charges. The commenter explained that at such amount, the avoided penalties at 4 percent would approximately equal the $10,000 cost of reporting and below such amount, there would not likely be a return that exceeds the costs of reporting. Below such amount, the commenter suggested CMS make MIPS participation optional, but MIPS eligible clinicians that participate would be exempt from any penalties.

    Response: We appreciate the concerns and recommendations provided by the commenters. We received a range of suggestions and considered the various options. We agree with commenters that the dollar value of the low-volume threshold should be increased and that the low-volume threshold should not require MIPS eligible clinicians and groups to be required to meet both the dollar value of billed Medicare Part B allowed charges and the Part B Medicare-enrolled beneficiary count thresholds at this time. We believe it is important to establish a low-volume threshold that is responsive to stakeholder feedback. Some of the recommended options would have established a threshold that would exclude many eligible clinicians who would otherwise want to participate in MIPS. The majority of commenters suggested that the low-volume threshold be changed to reflect $30,000 or less billed Medicare Part B allowed charges. As a result, we are modifying our proposal. We are defining MIPS eligible clinicians or groups who do not exceed the low-volume threshold as an individual MIPS eligible clinician or group who, during the low-volume threshold determination period, has billed Medicare Part B allowed charges less than or equal to $30,000 or provides care for 100 or fewer Part B-enrolled Medicare beneficiaries. This policy would be more robust and effective at excluding clinicians for whom submitting data to MIPS may represent a disproportionate burden with a secondary effect of allowing greater concentration of technical assistance on a smaller cohort of practices. We believe that the higher low-volume threshold addresses the concerns from commenters while remaining consistent with the proposal and having a policy that is easy to understand.
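    For illustration only, the following Python sketch (not part of the regulatory text) applies the finalized either/or test: an individual MIPS eligible clinician or group is excluded under the low-volume threshold if billed Medicare Part B allowed charges are $30,000 or less, or if the clinician or group cares for 100 or fewer Part B-enrolled beneficiaries; the function name and inputs are hypothetical.

        def excluded_under_low_volume_threshold(part_b_allowed_charges: float,
                                                part_b_beneficiary_count: int) -> bool:
            """Return True if the clinician or group does not exceed the low-volume
            threshold and is therefore excluded from MIPS on that basis."""
            return part_b_allowed_charges <= 30_000 or part_b_beneficiary_count <= 100

        # $45,000 in allowed charges but only 80 beneficiaries -> excluded,
        # because meeting either criterion is now sufficient.
        print(excluded_under_low_volume_threshold(45_000, 80))    # True
        # $45,000 in allowed charges and 150 beneficiaries -> not excluded.
        print(excluded_under_low_volume_threshold(45_000, 150))   # False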

    Comment: A few commenters indicated that it would be difficult for psychologists to determine ahead of time if they met the low-volume threshold relating to the dollar value of $10,000 Medicare billing charges in order to be exempt from MIPS, yet it would be relatively easy for psychologists to determine whether they are likely to have fewer than 100 Medicare patients in a given year based on their historical volume of Medicare patients. Several commenters requested CMS to change the low-volume threshold requirement to state “$10,000 in Medicare charges or fewer than 100 beneficiaries,” making it possible for psychologists to be exempt from MIPS, which is essential in keeping them enrolled in Medicare provider panels. A few commenters expressed concerns that if the proposed low-volume threshold was finalized as is, psychologists and psychotherapists who see Medicare beneficiaries weekly or bi-weekly would be unable to meet Medicare patients’ demand for psychotherapy, would discontinue seeing Medicare beneficiaries altogether, and would be reluctant to participate in MIPS if they were not exempted from MIPS participation. Commenters stated that CMS violates the Mental Health Parity and Addiction Equity Act of 2008 by having separate rules for medical versus psychological illnesses.

    Response: As previously noted, we are finalizing a modification to our proposal, in which we are defining MIPS eligible clinicians or groups who do not exceed the low-volume threshold as an individual MIPS eligible clinician or group who, during the low-volume threshold determination period, has billed Medicare Part B allowed charges less than or equal to $30,000 or provides care for 100 or fewer Part B-enrolled Medicare beneficiaries. Thus, a MIPS eligible clinician or a group would only need to meet either the dollar value or the beneficiary count criterion for the low-volume threshold exclusion. As a result, psychologists will be able to easily discern whether or not they exceed the low-volume threshold. In addition, we intend to provide a NPI level lookup feature prior to or shortly after the start of the performance period that will allow clinicians to determine if they do not exceed the low-volume threshold and are therefore excluded from MIPS. More information on this NPI level lookup feature will be made available at QualityPaymentProgram.cms.gov.

    In regard to the comment pertaining to the Mental Health Parity and Addiction Equity Act of 2008 (MHPAEA), we note that the MHPAEA generally prevents group health plans and health insurance issuers that provide mental health or substance use disorder benefits from imposing less favorable benefit limitations on those benefits than on medical/surgical benefits. The mental health parity requirements of MHPAEA do not apply to Medicare.

    Comment: One commenter indicated that the low-volume threshold is too low for a group and requested that CMS either establish a certain exclusion threshold based on group size, or exclude a group if more than 50 percent of its MIPS eligible clinicians meet the low-volume threshold. Another commenter recommended that CMS establish a low-volume threshold based upon practice size, so that solo practices and those with fewer than 10 clinicians are ineligible for MIPS. The commenter noted that the financial and reporting burden of participating in MIPS would be too great for such clinicians.

    Response: We appreciate the concern and suggestions from the commenters and note that we are modifying our proposed low-volume threshold by increasing the dollar value of the billed Medicare Part B allowed charges and eliminating the requirement that the clinician meet both the dollar value and beneficiary count thresholds. MIPS eligible clinicians or groups that do not exceed the low-volume threshold of $30,000 billed Medicare Part B allowed charges or provide care for 100 or fewer Part B-enrolled Medicare beneficiaries would be excluded from MIPS. We apply the same low-volume threshold to both individual MIPS eligible clinicians and groups because groups have the option to elect to report at an individual or group level. A group that would be excluded from MIPS when reporting at a group level may find it advantageous to report at the individual level.

    Comment: One commenter suggested that CMS exclude Part B and Part D drug costs from the low-volume threshold determination to mitigate the impacts of MIPS on community practices in rural and underserved areas.

    Response: We appreciate the suggestion from the commenter and note that the low-volume threshold applies to Medicare Part B allowed charges billed by the eligible clinician, such as those under the PFS.

    Comment: One commenter stated that CMS should provide education and training to MIPS eligible clinicians and groups meeting the low-volume threshold.

    Response: We are committed to actively engaging with all stakeholders, including tribes and tribal officials, throughout the process of establishing and implementing MIPS and using various means to communicate and inform MIPS eligible clinicians and groups of the MIPS requirements. In addition, we intend to provide a NPI level lookup feature prior to or shortly after the start of the performance period that will allow clinicians to determine if they do not exceed the low-volume threshold and are therefore excluded from MIPS. More information on this NPI level lookup feature will be made available at QualityPaymentProgram.cms.gov.

    Comment: One commenter requested that a definition of “Medicare billing charges” be established under the low-volume threshold policy. The commenter also requested a modification to this term so that it reads “allowed amount,” making clear that the $10,000 threshold is calculated based on $10,000 of Medicare-allowed services.

    Response: We appreciate the suggestions from the commenter and note that the low-volume threshold pertains to Medicare Part B allowed charges billed by a MIPS eligible clinician, such as those under the PFS. In order to be consistent with the statute, we assess the allowed charges billed to determine whether or not an eligible clinician exceeds the low-volume threshold. Also, we specify that the allowed charges billed relate to Medicare Part B.

    Comment: One commenter noted that since MIPS eligibility is based on the current reporting period, a clinician would not definitively know if he or she is excluded until the end of the year. It would be helpful if eligibility were based on a prior period, as is currently done for hospital-based determinations for EPs under the EHR Incentive Program. This is especially problematic for low-volume clinicians, such as OB/GYNs, because eligibility might change from year to year. Another commenter questioned why the low-volume threshold for a MIPS eligible clinician is calculated based on the performance year rather than basing the calculation on the previous year.

    Response: We agree that it would be beneficial for individual eligible clinicians and groups to know whether they are excluded under the low-volume threshold prior to the start of the performance period and thus, we are finalizing a modification to our proposal to allow us to make eligibility determinations regarding low-volume status using historical claims data. This modification will allow us to inform individual MIPS eligible clinicians and groups of their low-volume status prior to or shortly after the start of the performance period. For purposes of this section, we are coining the term “low-volume threshold determination period” to refer to the timeframe used to assess claims data for making eligibility determinations for the low-volume threshold exclusion. We define the low-volume threshold determination period to mean a 24-month assessment period, which includes a two-segment analysis of claims data during an initial 12-month period prior to the performance period followed by another 12-month period during the performance period. The initial 12-month segment of the low-volume threshold determination period would span from the last 4 months of a calendar year 2 years prior to the performance period followed by the first 8 months of the next calendar year and include a 60-day claims run out, which will allow us to inform eligible clinicians and groups of their low-volume status during the month (December) prior to the start of the performance period. To conduct an analysis of the claims data regarding Medicare Part B allowed charges billed prior to the performance period, we are establishing an initial segment of the low-volume threshold determination period consisting of 12 months. We believe that the initial low-volume threshold determination period enables us to make eligibility determinations based on 12 months of data that is as close to the performance period as possible while informing eligible clinicians of their low-volume threshold status prior to the performance period. The second 12-month segment of the low-volume threshold determination period would span from the last 4 months of a calendar year 1 year prior to the performance period followed by the first 8 months of the performance period in the next calendar year and include a 60-day claims run out, which will allow us to inform additional eligible clinicians and groups of their low-volume status during the performance period.

    Thus, for purposes of the 2019 MIPS payment adjustment, we will initially identify the low-volume status of individual eligible clinicians and groups based on 12 months of data starting from September 1, 2015 to August 31, 2016, with a 60-day claims run out. To account for the identification of additional individual eligible clinicians and groups who do not exceed the low-volume threshold during the 2017 performance period, we will conduct another eligibility determination analysis based on 12 months of data starting from September 1, 2016 to August 31, 2017, with a 60-day claims run out. This second analysis will identify, for example, MIPS eligible clinicians who may have exceeded the low-volume threshold during the first determination assessment but fall below the threshold during the performance period because their practice changed significantly, they changed practices from a prior year, and so on.
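
    The two 12-month segments follow a fixed pattern relative to the performance year. The Python sketch below is illustrative only and uses a hypothetical helper function rather than any CMS tool; it simply reproduces the segment boundaries and the end of the 60-day claims run out described above.

        # Illustrative sketch: compute the two 12-month segments of the low-volume
        # threshold determination period for a given performance year (hypothetical helper).
        from datetime import date, timedelta

        def determination_segments(performance_year):
            """Return (start, end, end of 60-day claims run out) for each 12-month segment."""
            segments = []
            for years_back in (2, 1):  # first segment starts 2 years prior, second 1 year prior
                start = date(performance_year - years_back, 9, 1)      # last 4 months of that year...
                end = date(performance_year - years_back + 1, 8, 31)   # ...plus first 8 months of the next
                run_out_end = end + timedelta(days=60)                 # 60-day claims run out
                segments.append((start, end, run_out_end))
            return segments

        # For the 2017 performance period (2019 MIPS payment adjustment):
        # segment 1 is 2015-09-01 to 2016-08-31; segment 2 is 2016-09-01 to 2017-08-31.
        print(determination_segments(2017))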

    In addition, we note that the low-volume threshold exclusion is determined at the individual (TIN/NPI) level for individual reporting and at the group (TIN) level for group reporting. An eligible clinician may be identified as having a status that does not exceed the low-volume threshold at the individual (TIN/NPI) level, but if such eligible clinician is part of a group that is identified as having a status exceeding the low-volume threshold, such eligible clinician would be required to participate in MIPS as part of the group because the low-volume threshold is determined at the group (TIN) level for groups. For eligibility determinations pertaining to the low-volume threshold exclusion, we will conduct our analysis for each TIN/NPI and TIN identified in the claims data and make a determination based on the Medicare Part B allowed charges billed. Since we are making eligibility determinations for each TIN/NPI and TIN identified in the claims data, we do not need to know whether or not a group is reporting at the individual or group level prior to our analyses. Thus, groups can use the eligibility determinations we make for each TIN/NPI and TIN to determine whether their group would report at the individual or group level. Subsequently, groups reporting at the group level would need to meet the group requirements as discussed in section II.E.3.d. of this final rule with comment period.

    Comment: One commenter requested that CMS ensure that low-volume threshold exclusion and other exclusions would not penalize practices with more pediatric, women's health, Medicaid, or private insurance patients.

    Response: We recognize that groups will have different patient populations. As previously noted, we are finalizing a modified low-volume threshold policy that will increase the number of individual eligible clinicians and groups excluded from the requirement to participate in MIPS, which would include individual eligible clinicians and groups with more pediatric, women's health, Medicaid, or private insurance patients if they have not billed more than $30,000 in Medicare Part B allowed charges or have not provided care for more than 100 Part B-enrolled Medicare beneficiaries. We note that MIPS eligible clinicians who are excluded from MIPS have the option to voluntarily participate in MIPS, but would not receive a MIPS payment adjustment.

    Comment: One commenter requested more information about whether the low-volume threshold will be eliminated in future years and if there is a potential for an incentive payment when an eligible clinician meets the low-volume threshold but elects to report anyway.

    Response: We intend to monitor the low-volume threshold requirement and anticipate that the specific threshold will evolve over time. Eligible clinicians who do not exceed the low-volume threshold and are thus excluded from MIPS could voluntarily participate in MIPS, but would not be subject to the MIPS payment adjustment (positive or negative).

    Comment: A few commenters requested clarification on the definition of the low-volume threshold including whether the $10,000 limit pertains to all Medicare billing charges or solely Medicare Part B charges, how this low-volume threshold applies to low-volume clinicians practicing in and reporting as a group, how beneficiaries are attributed to clinicians, and if there is a timeframe in which a patient was last seen.

    Response: We note that the dollar value of the low-volume threshold applies to Medicare Part B allowed charges billed by the eligible clinician. We note that eligibility determinations regarding the low-volume threshold exclusion are based on claims data. As a result, we are able to identify Medicare Part B allowed charges billed by the eligible clinician and the number of Part B-enrolled Medicare beneficiaries cared for by an eligible clinician during the first and second low-volume threshold determination periods. For eligibility determinations regarding the low-volume threshold exclusion, we do not consider the timeframes of when a patient was last seen. In regard to how the low-volume threshold applies to MIPS eligible clinicians in groups, we apply the same low-volume threshold to both individual MIPS eligible clinicians and groups since groups have the option to report at an individual or group level. As a result of the low-volume threshold exclusion being determined at the individual (TIN/NPI) level for individual reporting and at the group (TIN) level for group reporting, there will be some eligible clinicians with a low-volume status that does not exceed the low-volume threshold who would be excluded from MIPS at the individual (TIN/NPI) level, but if such eligible clinicians are part of a group with a low-volume status that exceeds the low-volume threshold, such eligible clinicians would be required to participate in MIPS as part of the group. Section II.E.3.d. of this final rule with comment period describes how a group's (TIN) performance is assessed and scored at the group level and how the MIPS payment adjustment is applied at the group level when a group includes clinicians who are excluded from MIPS at the individual level.

    Comment: Several commenters opposed holding individuals and groups to the same low-volume threshold standards. One commenter stated that basing the exclusion on two thresholds simultaneously would be antithetical to measurements of quality based on outcomes. The commenter noted that patient care can be very expensive and some eligible clinicians could be denied the low-volume threshold exclusion after seeing only a few very complex patients over the course of the performance period. Another commenter indicated that the proposed exclusionary criteria may lead to eligible clinicians in solo or small practices withdrawing as Medicare suppliers, or limiting the number of Medicare patients they treat over a performance period.

    One commenter requested that CMS issue a clarification stating that when clinicians choose to have their performance assessed at the group level, the low-volume threshold would also be assessed at the group level. This would ensure consistent treatment. Another commenter requested clarity regarding the low-volume threshold exclusion definition for groups, and recommended that CMS apply a multiplying factor for each enrolled Medicare clinician in the group definition. One commenter recommended that CMS scale the minimum number of Part B-enrolled Medicare beneficiaries and Medicare billed charges to the number of physician group members while another commenter requested that if a practice reports as a group, the low-volume threshold should be multiplied by the number of clinicians in the group. Commenters recommended a higher threshold for groups.

    A few commenters indicated that the current proposal does not provide a meaningful exclusion for small and rural practices that cannot afford the upfront investments (including investments in EHR systems), and that, given the high costs of reporting for small practices, the threat of negative MIPS payment adjustments, or of positive MIPS payment adjustments too low to cover the costs of reporting, would deter small practices from participating in MIPS.

    Response: We thank the commenters for their concerns and recommendations regarding the low-volume threshold. We recognize that the low-volume threshold proposed in section II.E.3.c. of the proposed rule (81 FR 28178) is a concern and as previously noted, we are modifying our proposal by increasing the dollar value of the billed Medicare Part B allowed charges and eliminating the requirement for MIPS eligible clinicians and groups to meet both the dollar value threshold and the 100 beneficiary count. In this final rule with comment period, we continue to apply the same low-volume threshold for both individual MIPS eligible clinicians and groups. We disagree with the comment regarding a percentage-based approach for groups because groups have the option of electing to report at an individual or group level. If a group elects not to report as a group, then each MIPS eligible clinician would report individually.

    In addition, we believe that the modified proposal reduces the risk of clinicians withdrawing as Medicare suppliers or limiting the number of Medicare beneficiaries that they treat in a year. We will monitor any effect on Medicare participation in CY 2017 and future calendar years.

    Comment: Several commenters expressed concern that clinicians working in solo practices or small groups, especially in rural areas and HPSAs, would have difficulty meeting the requirements for MIPS. One commenter noted that non-board-certified doctors often work in these areas and are reimbursed at a lower rate than board-certified doctors. The commenters recommended that CMS make similar concessions for this category of clinicians as it proposed to do for non-patient facing MIPS eligible clinicians in the proposed rule. One commenter requested that small practice physicians and solo physicians in HPSAs be exempt from MIPS. The commenters requested that CMS ensure that small and solo practices have an equal opportunity to participate successfully in MIPS and Advanced APMs.

    Response: We appreciate the concerns expressed by commenters and recognize that certain individual MIPS eligible clinicians and groups may only be able to report on a few, or possibly no, applicable measures and activities for the MIPS requirements. In section II.E.6.b.(2) of this final rule with comment period, we describe the re-weighting of each performance category when there are not sufficient measures and activities that are applicable and available. Also, our modified low-volume threshold exclusion policy increases the dollar value of Medicare Part B allowed charges billed by an eligible clinician, which will increase the number of eligible clinicians and groups excluded from MIPS and not subject to a negative MIPS payment adjustment; this may include additional solo practices and small rural or HPSA practices. We believe that rural areas, small practices, and HPSAs will benefit from other policies that we are finalizing throughout this final rule with comment period, such as lower reporting requirements and a lower performance threshold.

    Comment: One commenter expressed concern that the MIPS program as outlined in the proposed rule would limit referrals to necessarily higher-cost small and rural providers. The commenter indicated that comparisons between small, rural practices and larger practices do not take into account differences in infrastructure, technological capabilities, and patient populations, which the commenter believed are more likely to be sick and poor in rural settings. Another commenter expressed concern that rural clinicians who serve impoverished communities and do not have additional resources (for example, dieticians who can provide more hands-on care for diabetic patients) would be unfairly penalized if their patients do not comply with medical advice.

    Response: We appreciate the concerns expressed by the commenters and recognize that groups vary in size, clinician composition, patient population, resources, technological capabilities, geographic location, and other characteristics. While we believe the MIPS measures are valid and reliable, we will continue to investigate methods to ensure all clinicians are treated as fairly as possible within MIPS. As noted in this final rule with comment period, the Secretary is required to take into account the relevant studies conducted and recommendations made in reports under section 2(d) of the Improving Medicare Post-Acute Care Transformation (IMPACT) Act of 2014. Under the IMPACT Act, the Office of the Assistant Secretary for Planning and Evaluation (ASPE) has been conducting studies on the issue of risk adjustment for sociodemographic factors on quality measures and cost, as well as other strategies for including social determinants of health status evaluation in CMS programs. We will closely examine the ASPE studies when they are available and incorporate findings as feasible and appropriate through future rulemaking. Also, we will monitor outcomes of beneficiaries with social risk factors, as well as the performance of the MIPS eligible clinicians who care for them, to assess for potential unintended consequences such as penalties for factors outside the control of clinicians. We believe that rural clinicians and practices will benefit from other policies that we are finalizing throughout this final rule with comment period, such as lower reporting requirements and a lower performance threshold.

    Comment: One commenter requested clarification as to whether or not non-patient facing MIPS eligible clinicians who are not based in a rural practice or not a member of a FQHC, but see fewer than 25 patients, would be exempt from MIPS. Another commenter requested clarification regarding whether or not the low-volume threshold applies if a physical therapist, occupational therapist, or speech-language pathologist is institution-based or nursing home-based.

    Response: In both situations that the commenter raises, the clinician would be excluded from MIPS; however, the exclusions would apply for different reasons. In the first example, the non-patient facing MIPS eligible clinician would be excluded because seeing fewer than 25 patients does not exceed our finalized low-volume threshold. In the second example, physical therapists, occupational therapists, and speech-language pathologists cannot be considered MIPS eligible clinicians until as early as the third year of the MIPS program.

    Comment: One commenter proposed a phase-in period for small practices in addition to an increased low-volume threshold because the proposed rule did not immediately allow the opportunity for virtual groups that could provide the infrastructure to assist small practices. Additionally, the commenter believed that most small practices and solo physicians would not be ready to report on January 1, 2017. The commenter's recommended phase-in period would exempt the 40th percentile of all small and rural practices in each specialty in year 1; the 30th percentile of all small and rural practices in each specialty in year 2; the 20th percentile of all small and rural practices in each specialty in year 3; and the 10th percentile of all small and rural practices in each specialty in year 4. The commenter's recommended phase-in would be voluntary, and they believe it would provide more time for resource-limited small practices to prepare, finance new systems and upgrades, change workflows, and transition to MIPS.

    Response: We appreciate the concerns and recommendations provided by the commenter. We recognize that small and rural practices may not have experience using CEHRT and/or may not be prepared to meet the MIPS requirements for each performance category. As described in this section of the final rule with comment period, we are modifying our proposal by increasing the dollar value of billed Medicare Part B allowed charges and eliminating the requirement for MIPS eligible clinicians and groups to meet both the dollar value threshold and the 100 beneficiary count, in which groups not exceeding the low-volume threshold would be excluded from the MIPS requirements. We believe our modified low-volume threshold is less complex, with potentially a single parameter determining low-volume status, and addresses the commenter's concerns by providing exclusions for more individual MIPS eligible clinicians and groups, including small and rural practices. Also, in section II.E.5.g.(8)(a) of this final rule with comment period, we describe our final policies regarding the re-weighting of the advancing care information performance category within the final score, in which we would assign a weight of zero when there are not sufficient measures applicable and available.

    Comment: A few commenters expressed concern that the proposed rule favored large practices, and requested that group practices with fewer than 10 or 15 physicians be excluded from MIPS. One commenter recommended that it may be more beneficial to expand the exclusion to practices under 15 physicians, thus reducing the number of practitioners that are going to opt out of Medicare altogether following MACRA and retaining a fairer adjustment distribution among the moderate and large practices.

    Response: We thank the commenters for expressing their concerns and note that we are modifying our proposed low-volume threshold to apply to an individual MIPS eligible clinician or group who, during the low-volume threshold determination period, has billed Medicare Part B allowed charges less than or equal to $30,000 or provides care for 100 or fewer Part B-enrolled Medicare beneficiaries. We believe our modified proposal would increase the number of groups excluded from participating in MIPS based on the low-volume threshold, including group practices with fewer than 10 or 15 clinicians.

    Comment: One commenter requested that CMS provide the underlying data that shows the distribution of spending and volume of cases on which the low-volume threshold is based. The commenter expressed concern that if the low-volume threshold is set too low, it may place too many clinicians close to the minimum of 20 attributable cases for resource use, which lacks statistical robustness. Another commenter suggested that CMS increase the low-volume threshold, as the commenter believed that counties with skewed demographics will give clinicians no chance to avoid negative MIPS payment adjustments. The commenter requested a moratorium on the implementation of MIPS until a study can be done that examines the potential effects of the law in such counties or for CMS to exempt practices that have a patient-population with more than 30 percent of its furnished services provided to Medicare Part B beneficiaries until the effects of the law are studied on the impact to these groups.

    Response: We appreciate the concerns expressed by commenters regarding the proposed low-volume threshold; we intend to monitor the effects of the low-volume threshold and anticipate that the specific thresholds will evolve over time. In this section of the final rule with comment period, we are modifying our proposed low-volume threshold, in which we are defining MIPS eligible clinicians or groups that do not exceed the low-volume threshold as an individual MIPS eligible clinician or group who, during the low-volume threshold determination period, has billed Medicare Part B allowed charges less than or equal to $30,000 or provides care for 100 or fewer Part B-enrolled Medicare beneficiaries. In regard to the commenter's concern about having too many MIPS eligible clinicians near the minimum number of attributable cases for the cost performance category, we believe the increased low-volume threshold policy would reduce such risk and ensure statistical robustness. We also note that we have made a number of modifications within the cost performance category and refer readers to section II.E.5.e. of this final rule with comment period for the discussion of our modified policies.

    Comment: One commenter requested that CMS calculate the projected data collection and reporting costs, the number of cases necessary to achieve statistical significance or reliability for comparison purposes, and the administrative costs to the agency of managing and calculating MIPS scores. With such costs in mind, the commenter requested that CMS adjust the low-volume threshold to a level such that MIPS would only apply to eligible clinicians for whom the costs of participating in the MIPS program outweighed the costs of refusing to accept Medicare patients. Otherwise, the commenter was concerned that solo practitioners and small practices would opt out of treating Medicare patients.

    Response: We thank the commenter for their suggestions and note that we are modifying our proposed low-volume threshold by increasing the dollar value of billed Medicare Part B allowed charges and eliminating the requirement for MIPS eligible clinicians and groups to meet both the dollar value threshold and the 100 beneficiary count. We believe our modified proposal would increase the number of groups excluded from participating in MIPS based on the low-volume threshold and prevent the low-volume threshold from being a potential factor that could influence a MIPS eligible clinician's decision to deny access to care for Medicare Part B beneficiaries or opt out of treating Medicare Part B beneficiaries. We refer readers to section III.B. of this final rule with comment period for our discussion regarding burden reduction.

    Comment: For those eligible clinicians not participating in an ACO, one commenter requested clarification on the proposed $10,000 threshold, specifically, whether this includes payments made under the RHC all-inclusive rate (AIR) or FQHC prospective payment system. The commenter suggested that the $10,000 threshold should only include Part B PFS allowed charges because the other payment methodologies already are alternatives to fee schedules.

    Response: In this section of the final rule with comment period, we are modifying our proposed low-volume threshold to be based on $30,000 or less in billed Medicare Part B allowed charges during the low-volume threshold determination period or care provided for 100 or fewer Part B-enrolled Medicare beneficiaries, which would apply to clinicians in RHCs and FQHCs with billed Medicare Part B allowed charges.

    Comment: A few commenters requested clarification on the low-volume threshold for clinicians who change positions frequently or work as locum tenens. The commenters requested that CMS clarify whether the threshold would be cumulative for these clinicians throughout the year as they bill under different TINs, or whether it would be specific to a TIN/NPI combination. Commenters recommended that the low-volume threshold apply to the specific TIN under which a clinician works.

    Response: In sections II.E.2.a. and II.E.2.b. of this final rule with comment period, we describe the identifiers for MIPS eligible clinicians participating in MIPS at the individual or group level. For MIPS eligible clinicians reporting as individuals, we use a combination of billing TIN/NPI as the identifier to assess performance. In order to determine the low-volume status of eligible clinicians reporting individually, we will calculate the low-volume threshold for each TIN/NPI combination. For individual MIPS eligible clinicians billing under multiple TINs, the low-volume threshold is calculated for each TIN/NPI combination. In the case of an individual eligible clinician exceeding the low-volume threshold under any TIN/NPI combination, the eligible clinician would be considered a MIPS eligible clinician and required to meet the MIPS requirements for those TIN/NPI combinations.
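
    Because the determination is made separately for each TIN/NPI combination, a clinician billing under multiple TINs can be excluded under one TIN and required to participate under another. The Python sketch below is illustrative only; the input mapping, function name, and figures are hypothetical stand-ins rather than CMS data elements.

        # Illustrative sketch: apply the low-volume test to each TIN/NPI combination.
        def mips_required_tins(claims_by_tin):
            """Return the TINs under which this NPI must meet the MIPS requirements."""
            required = []
            for tin, (allowed_charges, beneficiaries) in claims_by_tin.items():
                excluded = allowed_charges <= 30_000 or beneficiaries <= 100
                if not excluded:            # exceeds the threshold under this TIN/NPI
                    required.append(tin)
            return required

        # A locum tenens clinician billing under two TINs (hypothetical figures):
        claims_by_tin = {"TIN-A": (12_000, 40),    # does not exceed -> excluded for TIN-A
                         "TIN-B": (95_000, 310)}   # exceeds -> MIPS applies for TIN-B
        print(mips_required_tins(claims_by_tin))   # ['TIN-B']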

    Comment: One commenter suggested that CMS develop a MIPS hardship exception in addition to a low-volume threshold.

    Response: We thank the commenter for the suggestion. We note that section II.E.5.g.(8)(a)(ii) of this final rule with comment period describes our final policies regarding the re-weighting of the advancing care information performance category within the final score, in which we would assign a weight of zero when there are not sufficient measures applicable and available for MIPS eligible clinicians facing a significant hardship.

    Comment: One commenter stated that the low-volume threshold should also take into account total Medicare patients and billing, including Medicare Advantage enrollees, not just Part B.

    Response: We appreciate the suggestion from the commenter, but note that section 1848(q)(1)(C)(iv) of the Act establishes provisions relating to the low-volume threshold, in which the low-volume threshold only pertains to the number of Part B-enrolled Medicare beneficiaries, the number of items and services furnished to such individuals, or the amount of allowed charges billed under Part B. To the extent that Medicare Part B allowed charges are incurred for beneficiaries enrolled in section 1833(a)(1)(A) or 1876 Cost Plans, those Medicare beneficiaries would be included in the beneficiary count; however, services for beneficiaries enrolled in Medicare Advantage plans who receive their Part B services through their Medicare Advantage plan will not be included in the allowed charges billed under Medicare Part B for determining the low-volume threshold.

    Comment: Regarding partial year performance data, one commenter indicated that the low-volume reporting threshold and “insufficient sample size” standard already proposed for MIPS are adequate, and no additional “partial year” criteria would be needed. For example, a clinician who only began billing Medicare in November and did not meet the low-volume threshold would not be eligible for MIPS. Another clinician who began billing Medicare in November who exceeds the low-volume threshold, even in such a short time period, would be eligible for MIPS. The commenter supported this approach because it is simple and straightforward and does not require any additional calculations.

    Response: We appreciate the support from the commenter.

    Comment: One commenter requested that CMS provide an exemption for physicians over 60 or 65 years old as they cannot afford to implement the necessary changes, particularly if they are working part-time.

    Response: We appreciate the concerns expressed by the commenter and note that all MIPS eligible clinicians (as defined in section 1861(r) of the Act) practicing either full-time or part-time are required to participate in MIPS unless determined eligible for an exclusion. A MIPS eligible clinician, whether practicing full-time or part-time, who does not exceed the low-volume threshold would be excluded from participating in MIPS.

    After consideration of the public comments we received, we are finalizing a modification to our proposal to define MIPS eligible clinicians or groups who do not exceed the low-volume threshold. At § 414.1305, we are defining MIPS eligible clinicians or groups who do not exceed the low-volume threshold as an individual MIPS eligible clinician or group who, during the low-volume threshold determination period, has billed Medicare Part B allowed charges less than or equal to $30,000 or provides care for 100 or fewer Part B-enrolled Medicare beneficiaries. We are finalizing our proposed policy at § 414.1310(b) that for a year, MIPS eligible clinicians who do not exceed the low-volume threshold (as defined at § 414.1305) are excluded from MIPS for the performance period with respect to a year. The low-volume threshold also applies to MIPS eligible clinicians who practice in APMs under the APM scoring standard at the APM Entity level, in which APM Entities that do not exceed the low-volume threshold would be excluded from the MIPS requirements and not subject to a MIPS payment adjustment. Such an exclusion will not affect an APM Entity's QP determination if the APM Entity is an Advanced APM. Additionally, because we agree that it would be beneficial for individual eligible clinicians and groups to know whether they are excluded under the low-volume threshold prior to the start of the performance period, we are finalizing a modification to our proposal to allow us to make eligibility determinations regarding low-volume status using historical claims data. This modification will allow us to inform individual MIPS eligible clinicians and groups of their low-volume status prior to the performance period. As discussed above, the low-volume threshold determination period refers to the timeframe used to assess claims data for making eligibility determinations for the low-volume threshold exclusion. We define the low-volume threshold determination period to mean a 24-month assessment period, which includes a two-segment analysis of claims data during an initial 12-month period prior to the performance period followed by another 12-month period during the performance period. In order to conduct an analysis of the data prior to the performance period, we are establishing an initial low-volume threshold determination period consisting of 12 months. The initial 12-month segment of the low-volume threshold determination period would span from the last 4 months of a calendar year 2 years prior to the performance period followed by the first 8 months of the next calendar year and include a 60-day claims run out, which will allow us to inform eligible clinicians and groups of their low-volume status during the month (December) prior to the start of the performance period. The second 12-month segment of the low-volume threshold determination period would span from the last 4 months of a calendar year 1 year prior to the performance period followed by the first 8 months of the performance period in the next calendar year and include a 60-day claims run out, which will allow us to inform additional eligible clinicians and groups of their low-volume status during the performance period.

    Thus, for purposes of the 2019 MIPS payment adjustment, we will initially identify the low-volume status of individual eligible clinicians and groups based on 12 months of data starting from September 1, 2015 to August 31, 2016. In order to account for the identification of additional individual eligible clinicians and groups that do not exceed the low-volume threshold during the 2017 performance period, we will conduct another eligibility determination analysis based on 12 months of data starting from September 1, 2016 to August 31, 2017. This second analysis will identify, for example, eligible clinicians who may have exceeded the low-volume threshold during the first determination assessment but fall below the threshold during the performance period because their practice changed significantly, they changed practices from a prior year, and so on. Similarly, for future years, we will conduct an initial eligibility determination analysis based on 12 months of data (consisting of the last 4 months of the calendar year 2 years prior to the performance period and the first 8 months of the calendar year prior to the performance period) to determine the low-volume status of individual eligible clinicians and groups, and conduct another eligibility determination analysis based on 12 months of data (consisting of the last 4 months of the calendar year prior to the performance period and the first 8 months of the performance period) to determine the low-volume status of additional individual MIPS eligible clinicians and groups. We will not change the low-volume status of any individual eligible clinician or group identified as not exceeding the low-volume threshold during the first eligibility determination analysis based on the second eligibility determination analysis. Thus, an individual eligible clinician or group that is identified as not exceeding the low-volume threshold during the first eligibility determination analysis will continue to be excluded from MIPS for the duration of the performance period regardless of the results of the second eligibility determination analysis. We will conduct the second eligibility determination analysis to account for the identification of additional, previously unidentified individual eligible clinicians and groups who do not exceed the low-volume threshold.

    We recognize that the low-volume threshold determination period effectively combines two 12-month segments from 2 consecutive calendar years, in which the two 12-month periods of data that would be used for our analysis will not align with the calendar years. Also, we note that the low-volume threshold determination period may impact new Medicare-enrolled eligible clinicians who are excluded from MIPS participation for the performance period in which they are identified as new Medicare-enrolled eligible clinicians. Such clinicians would ordinarily begin participating in MIPS in the subsequent year, but under our modified low-volume threshold, are more likely to be excluded for a second year. The low-volume threshold exclusion may apply if, for example, an eligible clinician became a new Medicare-enrolled eligible clinician during the last 4 months of the calendar year and did not exceed the low-volume threshold of billed Medicare Part B allowed charges. Since the initial eligibility determination period consists of the last 4 months of the calendar year 2 years prior to the performance period and the first 8 months of the calendar year prior to the performance period, these new Medicare-enrolled eligible clinicians could be identified as having a low-volume status if the analysis reflects billed Medicare Part B allowed charges less than or equal to $30,000 or care provided for 100 or fewer Part B-enrolled Medicare beneficiaries. As noted above, we will not change the low-volume status of any individual MIPS eligible clinician or group identified as not exceeding the low-volume threshold during the first eligibility determination analysis based on the second eligibility determination analysis.

    d. Group Reporting

    (1) Background

    As noted in section II.E.1.e. of the proposed rule (81 FR 28176), section 1848(q)(1)(D) of the Act requires the Secretary to establish and apply a process that includes features of the PQRS group practice reporting option (GPRO) established under section 1848(m)(3)(C) of the Act for MIPS eligible clinicians in a group for the purpose of assessing performance in the quality performance category and gives the Secretary the discretion to do so for the other performance categories. The process established for purposes of MIPS must, to the extent practicable, reflect the range of items and services furnished by the MIPS eligible clinicians in the group. We believe this means that the process established for purposes of MIPS should, to the extent practicable, encompass elements that enable MIPS eligible clinicians in a group to meet reporting requirements that reflect the range of items and services furnished by the MIPS eligible clinicians in the group. At § 414.1310(e), we proposed requirements for groups. For purposes of section 1848(q)(1)(D) of the Act, at § 414.1310(e)(1) we proposed the following way for individual MIPS eligible clinicians to have their performance assessed as a group: As part of a single TIN associated with two or more MIPS eligible clinicians, as identified by a NPI, that have their Medicare billing rights reassigned to the TIN (as discussed further in section II.E.2.b. of the proposed rule).

    To have its performance assessed as a group, at § 414.1310(e)(2), we proposed that a group must meet the proposed definition of a group at all times during the performance period for the MIPS payment year. Additionally, at § 414.1310(e)(3), we proposed that, in order to have their performance assessed as a group, individual MIPS eligible clinicians within a group must aggregate their performance data across the TIN. At § 414.1310(e)(4), we proposed that a group electing to have its performance assessed as a group would be assessed as a group across all four MIPS performance categories. For example, if a group submits data for the quality performance category as a group, CMS would assess the group as a group for the remaining three performance categories. We solicited public comments on the proposal regarding how groups will be assessed under MIPS.

    The following is a summary of the comments we received regarding our proposed requirements for groups, including: Individual MIPS eligible clinicians would have their performance assessed as a group as part of a single TIN associated with two or more MIPS eligible clinicians, as identified by a NPI, that have their Medicare billing rights reassigned to the TIN; a group must meet the definition of a group at all times during the performance period for the MIPS payment year; individual MIPS eligible clinicians within a group must aggregate their performance data across the TIN in order for their performance to be assessed as a group; and a group that elects to have its performance assessed as a group would be assessed as a group across all four MIPS performance categories.

    Comment: The majority of commenters were supportive of the proposed group requirements. In particular, several commenters supported our proposal to allow MIPS eligible clinicians to report across the four performance categories at an individual or group level. The commenters also expressed support for the way in which we would assess group performance.

    Response: We appreciate the support from commenters.

    Comment: One commenter supported CMS' recognition that MIPS eligible clinicians may practice in multiple settings and proposal to allow such MIPS eligible clinicians to be measured as individuals or through a group's performance.

    Response: We appreciate the support from the commenter.

    Comment: A few commenters recommended that CMS consider allowing for greater flexibility in the reporting requirements and allow MIPS eligible clinicians to participate either individually or as a group for each of the four performance categories, as it may be reasonable to report individually for some categories and as a group for other categories. One commenter indicated that reporting for the advancing care information measures via a group would be a helpful option, but there are hurdles clinicians and health IT vendors and developers may need to overcome during the first 2 years to do so.

    Response: We appreciate the feedback from the commenters. While we want to ensure that there is as much flexibility as possible within the MIPS program, we believe it is important that MIPS eligible clinicians choose how they will participate in MIPS as a whole, either as an individual or as a group. Whether MIPS eligible clinicians participate in MIPS as an individual or group, it is critical for us to assess the performance of individual MIPS eligible clinicians or groups across the four performance categories collectively as either an individual or group in order for the final score to reflect performance at a true individual or group level and to ensure the comparability of data. Section II.E.5.g.(5)(c) of this final rule with comment period describes group reporting requirements pertaining to the advancing care information performance category.

    Comment: A few commenters indicated that group reporting can be challenging if the group includes part-time clinicians.

    Response: We recognize that group-level reporting offers different advantages and disadvantages to different practices and therefore, it may not be the best option for all MIPS eligible clinicians who are part of a particular group. Depending on the composition of a group, which may include part-time clinicians, some groups may find meeting the MIPS requirements to be less burdensome if they report at the individual level rather than at the group level. Also, we note that some part-time clinicians may be excluded from MIPS participation at the individual level if they do not exceed the low-volume threshold (section II.E.3.c. of this final rule with comment period describes the low-volume threshold exclusion).

    Comment: One commenter requested clarification regarding whether or not clinicians excluded from MIPS would also be excluded from group-level reporting.

    Response: With clinician practices having the option to report at the individual (TIN/NPI) or group (TIN) level, we elaborate on how a MIPS group's (TIN) performance is assessed and scored at the group level and how the MIPS payment adjustment is applied at the group level when a group includes clinicians who are excluded from MIPS at the individual level. We note that there are three types of MIPS exclusions: new Medicare-enrolled eligible clinicians; QPs and Partial QPs who do not report on applicable MIPS measures and activities; and eligible clinicians who do not exceed the low-volume threshold (see section II.E.3. of this final rule with comment period). These exclusions determine when an eligible clinician is not considered a MIPS eligible clinician and thus is not required to participate in MIPS. The two types of exclusions pertaining to new Medicare-enrolled eligible clinicians and to QPs and Partial QPs who do not report on applicable MIPS measures and activities are determined at the individual (NPI) level, while the low-volume threshold exclusion is determined at the individual (TIN/NPI) level for individual reporting and at the group (TIN) level for group reporting.

    A group electing to submit data at the group level would have its performance assessed and scored across the TIN, which could include items and services furnished by individual NPIs within the TIN who are not required to participate in MIPS. For example, excluded eligible clinicians (new Medicare-enrolled eligible clinicians; QPs and Partial QPs who do not report on applicable MIPS measures and activities; and those who do not exceed the low-volume threshold) may be part of the group and, therefore, would be considered in the group's score. However, the MIPS payment adjustment would apply differently at the group level in relation to each exclusion circumstance. For example, groups reporting at the group level that include new Medicare-enrolled eligible clinicians, QPs, or Partial QPs would have the MIPS payment adjustment apply only to the Medicare Part B allowed charges pertaining to the group's MIPS eligible clinicians, and the MIPS payment adjustment would not apply to clinicians excluded from MIPS based on these two types of exclusions. We reiterate that any individual (NPI) excluded from MIPS because they are identified as new Medicare-enrolled, a QP, or a Partial QP would not receive a MIPS payment adjustment, regardless of their MIPS participation.

    We note that the low-volume threshold is different from the other two exclusions in that it is not determined solely based on the individual NPI status; rather, it is based on both the TIN/NPI (to determine an exclusion at the individual level) and TIN (to determine an exclusion at the group level) status. In regard to group-level reporting, the group, as a whole, is assessed to determine if the group (TIN) exceeds the low-volume threshold. Thus, eligible clinicians (TIN/NPI) who do not exceed the low-volume threshold at the individual reporting level, and would otherwise be excluded from MIPS participation at the individual level, would be required to participate in MIPS at the group level if such eligible clinicians are part of a group reporting at the group level that exceeds the low-volume threshold.
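
    In short, the exclusion is applied at the level at which reporting occurs. The Python sketch below is illustrative only; the boolean inputs are hypothetical flags rather than CMS data elements, and it simply restates the rule described above.

        # Illustrative sketch: the low-volume exclusion follows the reporting level.
        def must_participate(individual_exceeds, group_exceeds, reports_as_group):
            """Return True if the clinician must participate in MIPS."""
            if reports_as_group:
                # Group-level reporting: the whole TIN shares one low-volume status.
                return group_exceeds
            # Individual-level reporting: the TIN/NPI determination controls.
            return individual_exceeds

        # A clinician below the threshold individually, in a group (TIN) above it,
        # must participate when the group reports at the group level.
        print(must_participate(individual_exceeds=False, group_exceeds=True, reports_as_group=True))  # True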

    We considered aligning how the MIPS exclusions would be applied at the group level for each of the three exclusion circumstances. We recognize that alignment would provide a uniform application across the three exclusions and offer simplicity, but we also believe it is critical to ensure that there are opportunities encouraging coordination, teamwork, and shared responsibility within groups. In order to encourage coordination, teamwork, and shared responsibility at the group level, we will assess the low-volume threshold so that all clinicians within the group have the same status: All clinicians collectively exceed the low-volume threshold or they do not exceed the low-volume threshold.

    In addition, we recognize that individual clinicians who do not meet the definition of a MIPS eligible clinician during the first 2 years of MIPS, such as physical and occupational therapists, clinical social workers, and others, are not MIPS eligible. Such clinicians are not required to participate in MIPS, but may voluntarily report measures and activities for MIPS; those who do so would not receive a MIPS payment adjustment. Accordingly, groups reporting at the group level may voluntarily include such clinicians in the aggregated data reported for measures and activities under MIPS. Groups reporting at the group level that voluntarily include clinicians who do not meet the definition of a MIPS eligible clinician would have their performance assessed and scored across the TIN, but those clinicians would not receive a MIPS payment adjustment, regardless of their voluntary participation in MIPS.

    We are finalizing our proposals regarding group requirements; however, we welcome additional comment on: how we are applying group-related policies pertaining to group-level performance assessment and scoring and the MIPS payment adjustment to groups with eligible clinicians who are excluded from MIPS based on the three exclusions or who are not MIPS eligible for the first 2 years of MIPS; the advantages and disadvantages of applying those group-related policies when groups include eligible clinicians excluded from the requirement to participate in MIPS at the individual level; and alternative approaches that could be considered.

    Comment: One commenter expressed concerns that group reporting benchmarks and comparison groups have not yet been identified.

    Response: All MIPS eligible clinicians, regardless of specialty, geographic location, or whether they report as an individual or group, who submit data using the same submission mechanism would be included in the same benchmark. We refer readers to sections II.E.6.a.(2)(a) and II.E.6.a.(3)(a) of this final rule with comment period for further discussion of policies regarding quality measure and cost measure benchmarks under MIPS.

    Comment: One commenter requested clarification regarding group reporting for organizations with multiple practices/specialties.

    Response: As proposed, group reporting would occur and be aggregated at the TIN level. No distinct reporting occurs at the specialty or practice site level.

    Comment: One commenter requested clarification on what can be expected under MIPS by small practices for which measures are not applicable.

    Response: In section II.E.6.b.(2)(b) of this final rule with comment period, we describe our scoring methodology that is applied when there are a few or no applicable measures under the quality performance category for MIPS eligible clinicians or groups to report.

    Comment: One commenter recommended that CMS focus regulations on large systems and practices and have fewer regulations for small practices.

    Response: We believe it is essential that our requirements pertaining to group-level reporting be applicable to all groups regardless of size, geographic location, composition, or other differentiating factors. However, we believe that there are circumstances in which our policies should consider how different types of groups would be affected. In this final rule with comment period, we establish an exclusion for individual MIPS eligible clinicians and groups who do not exceed a low-volume threshold pertaining to a dollar value of Medicare Part B allowed charges or a Part B-enrolled beneficiary count. Also, we finalize our proposal relating to MIPS eligible clinicians practicing in RHCs and FQHCs, under which services rendered by an eligible clinician that are payable under the RHC or FQHC methodology would not be subject to the MIPS payment adjustments.

    After consideration of the public comments we received, we are finalizing a modification to the following proposed policy:

    • Individual MIPS eligible clinicians who choose to report as a group will have their performance assessed as part of a single TIN associated with two or more eligible clinicians (including at least one MIPS eligible clinician), as identified by their NPIs, who have reassigned their Medicare billing rights to the TIN (§ 414.1310(e)(1)).

    In addition, we are finalizing the following policies:

    • A group must meet the definition of a group at all times during the performance period for the MIPS payment year in order to have its performance assessed as a group (§ 414.1310(e)(2)).

    • Eligible clinicians and MIPS eligible clinicians within a group must aggregate their performance data across the TIN in order for their performance to be assessed as a group (§ 414.1310(e)(3)).

    • A group that elects to have its performance assessed as a group will be assessed as a group across all four MIPS performance categories (§ 414.1310(e)(4)).

    (2) Registration

    Under the PQRS, groups are required to complete a registration process to participate in PQRS as a group. During the implementation and administration of PQRS, we received feedback from stakeholders regarding the registration process for the various methods available for data submission. Stakeholders indicated that the registration process was burdensome and confusing. Additionally, we discovered that during the registration process, when groups are required to select their group submission mechanism, groups sometimes selected an option not applicable to their group, which created issues surrounding mismatched data. Unreconciled data mismatches can affect the quality of data. To address this issue, we proposed to eliminate the registration process for groups submitting data using third party entities. When groups submit data utilizing third party entities, such as a qualified registry, QCDR, or EHR, we are able to obtain group information from the third party entity and discern whether the data submitted represent a group submission or an individual submission once the data are submitted.

    At § 414.1310(e)(5), we proposed that a group must adhere to an election process established and required by CMS, as described in this section. We did not propose to require groups to register to have their performance assessed as a group except for groups submitting data on performance measures via participation in the CMS Web Interface or groups electing to report the Consumer Assessment of Healthcare Providers and Systems (CAHPS) for MIPS survey for the quality performance category as described further in section II.E.5.b. of the proposed rule. For all other data submission mechanisms, groups must work with appropriate third party entities to ensure the data submitted clearly indicates that the data represent a group submission rather than an individual submission. In order for groups to elect participation via the CMS Web Interface or administration of the CAHPS for MIPS survey, we proposed that such groups must register by June 30 of the applicable 12-month performance period (that is, June 30, 2017, for performance periods occurring in 2017). For the criteria regarding group reporting applicable to the four MIPS performance categories, see section II.E.5.a. of the proposed rule.

    The following is a summary of the comments we received regarding our proposal that requires a group participating via the CMS Web Interface or electing to administer the CAHPS for MIPS survey to adhere to an election process established and required by CMS.

    Comment: Several commenters expressed support for CMS's effort to ease the registration burden by not requiring registration or an election process for groups other than those electing to use the CMS Web Interface or CAHPS for MIPS survey for reporting of the quality performance category.

    Response: We appreciate the support from commenters regarding our proposal.

    Comment: One commenter expressed concern that clinicians who attempt to use the CMS Web Interface will not know if they have patients who satisfy reporting requirements until they attempt to submit their data. The commenter did not support the registration process required in order to select the use of the CMS Web Interface as a submission mechanism. The commenter asked whether clinicians will be able to elect other options once registration for the CMS Web Interface closes.

    Response: Similar to the process that has occurred in past years under the PQRS program, we intend to provide the beneficiary sample to the groups that have registered to participate via the CMS Web Interface approximately 1 month prior to the start of the submission period. The submission period for the CMS Web Interface will occur during an 8-week period following the close of the performance period that will begin no earlier than January 1 and end no later than March 31 (the specific start and end dates for the CMS Web Interface submission period will be published on the CMS Web site). This is the earliest the sample is available due to the timing required to establish and maintain an effective sample size.

    We encourage groups to review the measure specifications for each data submission mechanism and select the data submission mechanism that applies best to the group prior to registering to participate via the CMS Web Interface. We note that groups can determine in advance whether they would have Medicare beneficiaries on whose behalf to report data for the CMS Web Interface measures. Groups that register to use the CMS Web Interface prior to the registration deadline (June 30) can cancel their registration or change their selection to report at the individual or group level, but only before the close of registration.

    After consideration of the public comments we received, we are finalizing the following policy:

    • A group must adhere to an election process established and required by CMS (§ 414.1310(e)(5)), which includes:

    ++ Groups will not be required to register to have their performance assessed as a group except for groups submitting data on performance measures via participation in the CMS Web Interface or groups electing to report the CAHPS for MIPS survey for the quality performance category. For all other data submission methods, groups must work with appropriate third party entities as necessary to ensure the data submitted clearly indicates that the data represent a group submission rather than an individual submission.

    ++ In order for groups to elect participation via the CMS Web Interface or administration of the CAHPS for MIPS survey, such groups must register by June 30 of the applicable performance period (that is, June 30, 2017, for performance periods occurring in 2017).

    Additionally, for operational purposes, we are considering the establishment of a voluntary registration process, if technically feasible, for groups that intend to submit data on performance measures via a qualified registry, QCDR, or EHR. Such a process would enable groups to specify whether or not they intend to participate as a group, which submission mechanism (qualified registry, QCDR, or EHR) they plan to use for reporting data, and other applicable information pertaining to the TIN/NPIs. In order for groups to know which requirements apply to them for data submission purposes in advance of the performance period or submission period, we want to establish a mechanism, if technically feasible, that would allow us to identify the data submission mechanism a group intends to use and notify the group of the applicable requirements it would need to meet for the performance year. We believe it is essential for groups to be aware of their applicable requirements in advance, and the only means of informing groups depends on receiving such information from groups through a voluntary registration process; otherwise, we cannot contact groups without knowing who they are, or inform them of applicable requirements without knowing whether a group intends to report at the group level and which data submission mechanism it plans to utilize. For groups that do not voluntarily register, we would only be able to identify such groups after the close of the submission period, once data have been submitted. To address this operational facet, we are considering a voluntary registration process similar to PQRS in that groups would make an election of a data submission mechanism. However, based on feedback we have received over the years from PQRS participants, the voluntary registration process under MIPS would not restrict group participation to the selections made during voluntary registration, including individual- or group-level reporting or a selected data submission mechanism; groups would have the flexibility to modify how they participate in MIPS.

    The assessment of a group's performance would not be impacted by whether or not the group elects to participate in voluntary registration. We note that if a group voluntarily registers, the information it provides would be used to proactively inform the group, during the performance period, of the timeframe in which it would need to submit data. We intend to use the voluntary registration process as a means to provide additional educational materials that are targeted and tailored to such groups and, if technically feasible, to provide such groups with access to additional toolkits. We believe it is important for groups to have such information in advance in order to prepare for the submission of data. Also, we note that the voluntary registration process differs from the registration process required for groups electing to submit data via the CMS Web Interface, in that groups registering on a voluntary basis would be able to opt out of group-level reporting and/or modify their associated selections, such as the chosen submission mechanism, at any time. The participation of a group in MIPS via a data submission mechanism other than the CMS Web Interface or the CAHPS for MIPS survey would not be contingent upon engagement in the voluntary registration process. Whether or not a group elects to participate in voluntary registration, a group must meet all of the requirements pertaining to groups. We intend to issue further information regarding the voluntary registration process for groups in subregulatory guidance.

    e. Virtual Groups

    (1) Implementation

    Section 1848(q)(5)(I) of the Act establishes the use of voluntary virtual groups for certain assessment purposes. The statute requires the establishment and implementation of a process that allows an individual MIPS eligible clinician or a group consisting of not more than 10 MIPS eligible clinicians to elect to form a virtual group with at least one other such individual MIPS eligible clinician or group of not more than 10 MIPS eligible clinicians for a performance period of a year. As determined in statute, individual MIPS eligible clinicians and groups forming virtual groups are required to make such election prior to the start of the applicable performance period under MIPS and cannot change their election during the performance period. As discussed in section II.E.4. of the proposed rule, we proposed that the performance period would be based on a calendar year.

    As we assessed the timeline for the establishment and implementation of virtual groups and the applicable election process and requirements for the first performance period under MIPS, we identified significant barriers regarding the development of the technological infrastructure required for successful implementation and the operationalization of such provisions, which would negatively impact the execution of virtual groups as a workable option for MIPS eligible clinicians or groups. The development of an electronic system before policies are finalized poses several risks, particularly relating to the impediments to completing and adequately testing the system before execution and to assuring that any changes in policy made during the rulemaking process are reflected in the system and operationalized accordingly. We believe that it would be exceedingly difficult to build a successful system to support the implementation of virtual groups, and given these factors, such implementation would compromise not only the integrity of the system, but the intent of the policies.

    Additionally, we recognize that it would be impossible for us to develop an entire infrastructure for electronic transactions pertaining to an election process, reporting of data, and performance measurement before the start of the performance period beginning on January 1, 2017. Moreover, the actual implementation timeframe would be more condensed given that the development, testing, and execution of such a system would need to be completed months in advance of the beginning of the performance period in order to provide MIPS eligible clinicians and groups with an election period.

    During the implementation and ongoing functionality of other programs such as PQRS, the Medicare EHR Incentive Program, and the VM, we received feedback from stakeholders regarding issues they encountered when submitting reportable data for these programs. With virtual groups as a new option, we want to minimize potential issues for end-users and implement a system that encourages and enables MIPS eligible clinicians and groups to participate in a virtual group. A web-based registration process, which would simplify and streamline the process for participation, is our preferred approach. Given the dynamics discussed in this section, implementation for the CY 2017 performance period is infeasible because there is insufficient time to develop a web-based registration process. We have assessed alternative approaches for the first year only, such as an email registration process, but believe that there are limitations and potential risks of numerous errors, such as submitted information being incomplete or not in the required format. A manual verification process would cause a significant delay in verifying registration due to the lack of an automated system to ensure the accuracy of the type of information submitted that is required for registration. We believe that an email registration process could become cumbersome and a burden for groups pursuing participation in a virtual group. Implementation of a web-based registration system for CY 2018 would provide the necessary time to establish and implement an election process and requirements applicable to virtual groups, and enable proper system development and operations. We intend to implement virtual groups for the CY 2018 performance period, and we intend to address all of the requirements pertaining to virtual groups in future rulemaking. We requested comments on factors we should consider regarding the establishment and implementation of virtual groups.

    The following is a summary of the comments we received regarding our intention to implement virtual groups for the CY 2018 performance period and factors we should consider regarding the establishment and implementation of virtual groups.

    Comment: Many commenters supported the development of virtual groups. Some commenters noted that virtual groups are needed because some patients require multidisciplinary care in and out of a hospital and practice.

    Response: We appreciate the support from commenters.

    Comment: Several commenters supported CMS' decision not to implement virtual groups in year 1 in order to allow for the successful technological infrastructure development and implementation of virtual groups, but requested that CMS outline the criteria and requirements regarding the execution of virtual groups as soon as possible. Several commenters recommended that CMS use year 1 to develop the much-needed guidance and assistance that outlines the steps groups would need to take in forming virtual groups, such as drafting written agreements and developing additional skills and tools.

    Response: We appreciate the support from commenters regarding the delay in the implementation of virtual groups. We intend to utilize this time to work with the stakeholder community to further advance the framework for virtual groups.

    Comment: Multiple commenters expressed concern that virtual groups would not be implemented in year 1 and requested that CMS operationalize the virtual group option immediately. A few commenters indicated that the delay would impact small and solo practices and rural clinicians. Some commenters requested that in the absence of the virtual group option, small and solo practices and rural clinicians should be eligible for positive payment adjustments, but exempt from any negative payment adjustment. The commenters stated that exempting these physicians from negative payment adjustments would better incentivize the pursuit of quality and performance improvement among solo and small practices. A few commenters recommended that all practices of 9 or fewer physicians be exempt from MIPS or APM requirements until the virtual group option has been tested and is fully operational. One commenter suggested that as an alternative to delaying the implementation of virtual groups, CMS should allow virtual groups to report performance data on behalf of small practices and HPSAs for the CY 2017 performance period.

    Response: As noted in the proposed rule, we identified significant barriers regarding the development of a technological infrastructure required for successful implementation and operationalization of the provisions pertaining to virtual groups. As a result, we believe that it would be technically infeasible to build a successful system to support the implementation of virtual groups for year 1. Also, we note that clinicians who are considered MIPS eligible clinicians are required to participate in MIPS unless they are eligible for one of the exclusions established in this final rule with comment period (see section II.E.3. of this final rule with comment period); thus, a MIPS eligible clinician participating in MIPS either as an individual or group will be subject to a payment adjustment, whether it is positive, neutral, or negative. The Act does not provide discretion to only apply a payment adjustment when a MIPS eligible clinician receives a positive payment adjustment. In regard to the request to allow virtual groups to have an alternative function for year 1, we intend to implement virtual groups in a manner consistent with the statute.

    Comment: A few commenters recommended that CMS redirect funds from the $500 million set aside for bonus payments to top performers toward financing a “safe harbor” for solo and small practices and rural providers.

    Response: This is not permissible by statute, as the $500 million is available only for MIPS eligible clinicians with a final score at or above the additional performance threshold.

    Comment: Several commenters identified factors CMS should consider as it develops further policies relating to virtual groups, including the following: Ensuring that virtual groups have shared accountability for performance improvement; limiting the submission mechanisms to those that require clinicians in the virtual group to collaborate on ongoing quality analysis and improvement; maintaining flexibility for factors being considered for virtual groups; implementing a virtual group pilot to be run prior to 2018 implementation; and hosting listening sessions to receive input and feedback on this option with specialty societies and other stakeholders. Several commenters requested that CMS avoid placing arbitrary limits on the minimum or maximum size, geographic proximity, or specialty of virtual groups, but instead allow virtual groups to determine group size, geographic affiliations, and group composition. One commenter encouraged CMS to explore broad options for virtual groups outside the norm of TIN/NPI grouping. However, a few commenters recommended that virtual groups be limited to practices of same or similar specialties or clinical standards. Another commenter requested more detail on the implementation of virtual groups.

    A few commenters recommended the following minimum standards for members of a virtual group: Have mutual interest in quality improvement; care for similar populations; and be responsible for the impact of their decisions on the whole group. A few commenters suggested that virtual groups should not have their performance ratings compared to other virtual groups, but instead, virtual groups should have their performance ratings compared to their annual performance rating during the initial implementation of virtual groups given that each virtual group's clinicians and beneficiaries may have varying risk preventing a direct comparison.

    Response: We appreciate the suggestions from the commenters, and as a result of the recommendations, we are interested in obtaining further input from stakeholders regarding the types of provisions and elements that should be considered as we develop requirements applicable to virtual groups. Therefore, we are seeking additional comment on the following issues for future consideration: The advantages and disadvantages of establishing minimum standards, similar to those suggested by commenters as noted above; the types of standards that could be established for members of a virtual group; the factors that would need to be considered in establishing a set of standards; the advantages and disadvantages of requiring members of a virtual group to adhere to minimum standards; the types of factors or parameters that could be considered in developing a virtual group framework to ensure that virtual groups would be able to effectively use their data for meaningful analytics; the advantages and disadvantages of forming a virtual group pilot in preparation for the development and implementation of virtual groups; and the framework elements that could be included to form a virtual group pilot.

    As we develop requirements applicable to virtual groups, we will also consider the ways in which virtual groups will each have unique characteristic compositions and varying patient populations and how the performance of virtual groups will be assessed, scored, and compared. We are committed to pursuing the active engagement of the stakeholders throughout the process of establishing and implementing virtual groups.

    Comment: Several commenters recognized the potential value of virtual groups to ease the burden of reporting under MIPS. Commenters recommended that CMS expand virtual groups to promote the adoption of activities that enhance care coordination and improve quality outcomes that are often out of reach for small practices due to limited resources; encourage virtual groups to establish shared clinical guidelines, promote clinician responsibility, and have the ability to track, analyze, and report performance results; and promote information-sharing and collaboration among its clinicians.

    Response: We appreciate the suggestions from the commenters, and as a result of the recommendations, we are interested in obtaining further input from stakeholders regarding the technical and operational elements and data analytics/metrics that should be considered as we develop requirements applicable to virtual groups. Therefore, we are seeking additional comment on the following issues for future consideration: The types of requirements that could be established for virtual groups to promote and enhance the coordination of care and improve the quality of care and health outcomes; and the parameters (for example, a shared patient population), if any, that could be established to ensure virtual groups have the flexibility to form any composition permissible under the Act while accounting for virtual groups reporting on measures across the four performance categories that are collectively applicable to a virtual group, given that the composition of virtual groups could take many differing forms. We believe that each MIPS eligible clinician who is part of a virtual group has a shared responsibility in the performance of the virtual group, and the formation of a virtual group provides an opportunity for MIPS eligible clinicians to share and potentially streamline best practices.

    Comment: One commenter requested clarification on what constitutes a virtual group and how virtual groups will be formed. The commenter recommended that performance for individual MIPS eligible clinicians in virtual groups should be based on specialty-specific measures. The commenter also recommended that, when assessing performance, CMS should develop sufficient risk adjustment mechanisms that ensure MIPS eligible clinicians are only scored on the components of care they have control over, and CMS should develop robust and appropriate attribution methods. Another commenter recommended that CMS require virtual groups to demonstrate a reliable mechanism for establishing patient attribution as well as the ability to report throughout the performance period.

    Response: We will consider these suggestions as we develop requirements applicable to virtual groups in future rulemaking. In regard to the commenter's request for clarification regarding what constitutes a virtual group and how they are formed, we note that section 1848(q)(5)(I) of the Act requires the establishment and implementation of a process that allows an individual MIPS eligible clinician or a group consisting of not more than 10 MIPS eligible clinicians to elect to form a virtual group with at least one other such individual MIPS eligible clinician or group of not more than 10 MIPS eligible clinicians for a performance period of a year.

    Comment: One commenter suggested that virtual groups could be organized similarly to the current PQRS GPRO, in which virtual groups would have the flexibility to select both quality and resource use measures once they are further developed.

    Response: We want to clarify that there is no virtual group reporting or similar option under PQRS. We note that virtual groups are not a data submission mechanism. MIPS eligible clinicians would have the option to participate in MIPS as individual MIPS eligible clinicians, groups, or, following implementation, virtual groups.

    Comment: One commenter recommended the use of third-party certifications to assist with emerging virtual groups. The commenter also suggested that CMS provide bonus points for clinicians that register as virtual groups, similar to electronic reporting of quality measures.

    Response: We will consider these suggestions as we develop requirements for virtual groups in future rulemaking.

    Comment: A few commenters encouraged CMS to assess many of the virtual group challenges associated with EHR technology. One commenter stated that most small independent clinician offices do not use the same EHR technology as their neighbors, and virtual groups would create reporting and measurement challenges, especially with respect to the advancing care information performance category; the commenter suggested that CMS provide attestation as an option.

    Another commenter indicated that the implementation of virtual groups could be unsuccessful based on the following factors: There is no necessary consistency in the nomenclature and methods used by different health IT vendors and developers, which would prevent prospective virtual group members from correctly understanding the degree and nature of the differences in approaches regarding data collection and submission; any vendor-related issues would be combined in unpredictable ways within virtual groups, causing the datasets to not correspond categorically and to have inconsistent properties; there is the prospect of a mismatch of properties for virtual group members on assessed measures, where neither excellence nor laggardly work would be clearly visible; and there is a risk of a practice joining a virtual group with “free riders,” which would result in a churning of membership and a serious loss of year-to-year comparison capabilities. In order to address such issues, the commenter recommended that CMS develop a system that includes the capability for clinicians and groups to participate in a service similar to online dating service applications that would allow clinicians and groups to use self-identifying descriptors to select their true peers within similar CEHRT.

    A few commenters requested clarification regarding the approved methods for submitting and aggregating disparate clinician data for virtual groups, and whether or not new clinicians should be included in virtual groups if they have not been part of the original TIN throughout the reporting year.

    Response: We thank the commenters for providing suggestions and identifying potential health IT challenges virtual groups may encounter regarding the reporting and submission of data. As a result of the recommendations and identification of potential barriers, we are interested in obtaining further input from stakeholders on these issues as we establish provisions pertaining to virtual groups and build a technological infrastructure for the operationalization of virtual groups. Therefore, we are seeking comment on the following issues for future consideration: The factors virtual groups would need to consider and address in order for the reporting and submission of data to be streamlined in a manner that allows for categorization of datasets and comparison capabilities; the factors an individual clinician or small practice that is part of a virtual group would need to consider in order for their CEHRT to be interoperable with the other CEHRT used within the virtual group; the advantages and disadvantages of having members of a virtual group use one form of CEHRT; the potential barriers that may make it difficult for virtual groups to be prepared to have a collective, streamlined system to capture measure data; and the timeframe virtual groups would need in order to build a system or coordinate a systematic infrastructure that allows for a collective, streamlined capturing of measure data.

    Comment: One commenter suggested having Virtual Integrated Clinical Networks (VICN) as an alternative type of delivery system within the Quality Payment Program. The commenter further indicated that the development of VICNs can lead to better patient care and lower costs by including only physicians and other clinicians who commit to value-based care at the outset. The commenter noted that in order to participate, clinicians would have to agree to work and practice in a value-based way, with transparency of patient satisfaction, clinical outcomes, and cost results.

    Response: We will consider the suggestion as we develop the framework and requirements for virtual groups.

    Comment: One commenter suggested that CMS change the name of virtual groups to virtual networks, since a group implies the coordination of a wide range of physician and related ancillary services under one roof in a manner that is seamless to patients, while the term “network” implies more of an alignment of multiple group practices and clinicians operating across the medical community for purposes of reporting in MIPS.

    Response: We will consider the suggestion as we establish the branding for virtual groups.

    Comment: Multiple commenters did not support virtual groups being limited to groups consisting of not more than 10 MIPS eligible clinicians to form a virtual group with at least one other MIPS eligible clinician or group of not more than 10 MIPS eligible clinicians.

    Response: With regard to commenters not supporting the composition limit of virtual groups, we note that section 1848(q)(5)(I) of the Act requires the establishment and implementation of a process that allows an individual MIPS eligible clinician or a group consisting of not more than 10 MIPS eligible clinicians to elect to form a virtual group with at least one other such individual MIPS eligible clinician or group of not more than 10 MIPS eligible clinicians for a performance period of a year. Thus, we do not have the authority to modify this statutory provision.

    Comment: A few commenters requested that CMS work with clinician communities as it establishes the framework for the virtual group option. Commenters recommended that CMS protect against antitrust issues that may arise regarding physician collaboration to recognize economies of scale. One commenter indicated that accreditation entities have experience with the Federal Trade Commission (FTC) rules related to clinically integrated networks formed to improve the quality and efficiency of care delivered to patients and that publicly vetted accreditation standards could guide the development of virtual groups in a manner that incentivizes sustainable growth as integrated networks capable of long-term success under value-based reimbursement.

    Response: We will consider the recommendations provided as we develop requirements pertaining to virtual groups.

    Comment: One commenter recommended that in future rulemaking, CMS create a unique identifier for virtual groups, allow multiple TINs and split TINs, avoid thresholds based on the number of patients treated, avoid restricting the number of participants in virtual groups, and avoid limitations on the number of virtual groups. Another commenter suggested that virtual groups should be reporting data at either the TIN level, NPI/TIN level, or APM level.

    Response: We appreciate the recommendations from the commenters and as a result of the suggestions, we are interested in obtaining further input from stakeholders regarding a group identifier for virtual groups. Therefore, we are seeking additional comment for future consideration on the following: The advantages and disadvantages of creating a new identifier for virtual groups; and the potential options for establishing an identifier for virtual groups. We intend to explore this issue.

    We thank the commenters for their input regarding our intention to implement virtual groups for the CY 2018 performance period and factors we should consider regarding the establishment and implementation of virtual groups. We intend to explore the types of requirements pertaining to virtual groups, including, but not limited to, defining a group identifier for virtual groups, establishing the reporting requirements for virtual groups, identifying the submission mechanisms available for virtual group participation, and establishing methodologies for how virtual group performance will be assessed and scored. In addition, during the CY 2017 performance period, we will be convening a user group of stakeholders to receive further input on the factors CMS should consider in establishing the requirements for virtual groups and identify mechanisms for the implementation of virtual groups in future years.

    (2) Election Process

    Section 1848(q)(5)(I)(iii)(I) of the Act provides that the election process must occur prior to the performance period and may not be changed during the performance period. We proposed to establish an election process that would end on June 30 of a calendar year preceding the applicable performance period. During the election process, we proposed that individual MIPS eligible clinicians and groups electing to be a virtual group would be required to register in order to submit reportable data. Virtual groups would be assessed across all four MIPS performance categories. In future rulemaking, we will address all elements relating to the election process and outline the criteria and requirements regarding the formation of virtual groups. We solicited public comments on this proposal.

    The following is a summary of the comments we received regarding our proposals that apply to virtual groups, including: The establishment of an election process that would end on June 30 of a calendar year preceding the applicable performance period; the requirement of individual MIPS eligible clinicians and groups electing to be a virtual group to register in order to submit reportable data; and the assessment of virtual groups across all four MIPS performance categories.

    Comment: A few commenters requested that CMS reconsider the deadline by which virtual groups would be required to make an election to participate in MIPS. One commenter recommended that the deadline should be 90 days before the performance period as opposed to 6 months.

    Response: We will consider the recommendations as we establish the election process for virtual groups.

    Comment: One commenter indicated that a registration process for the virtual group option would be an unnecessary burden and recommended that registration by virtual groups should only be required if the group participates in MIPS via the CMS Web Interface. Another commenter expressed concern that without a manageable registration system for virtual groups, there would be too many loopholes, which would add confusion to the program.

    Response: We appreciate the commenters providing recommendations and we will consider the recommendations as we establish the virtual group registration process.

    After consideration of the public comments we received, and with the delay of virtual group implementation, we are not finalizing our proposal to establish a virtual group election process that would end on June 30 for the CY 2017 performance period; the proposed requirement of individual MIPS eligible clinicians and groups electing to be a virtual group to register in order to submit reportable data; or the proposed assessment of virtual groups across all four MIPS performance categories.

    4. MIPS Performance Period

    MIPS incorporates many of the requirements of several programs into a single, comprehensive program. This consolidation includes key policy goals as common themes across multiple categories such as quality improvement, patient and family engagement, and care coordination through interoperable health information exchange. However, each of these legacy programs included different eligibility requirements, reporting periods, and systems for clinicians seeking to participate. This means that we must balance potential impacts of changes to systems and technical requirements to successfully synchronize reporting, as noted in the discussion regarding the definition of a MIPS eligible clinician in the proposed rule (81 FR 28173). We must take operational feasibility, systems impacts, and education and outreach on participation into account in developing technical requirements for participation. One area where this is particularly important is in the definition of a performance period.

    MIPS applies to payments for items and services furnished on or after January 1, 2019. Section 1848(q)(4) of the Act requires the Secretary to establish a performance period (or periods) for a year (beginning with 2019). Such performance period (or periods) must begin and end prior to such year and be as close as possible to such year. In addition, section 1848(q)(7) of the Act provides that, not later than 30 days prior to January 1 of the applicable year, the Secretary must make available to each MIPS eligible clinician the MIPS adjustment (and, as applicable, the additional MIPS adjustment) applicable to the MIPS eligible clinician for items and services furnished by the MIPS eligible clinician during the year.

    We considered various factors when developing the policy for the MIPS performance period. Stakeholders have stated that having a performance period as close as possible to when payments are adjusted is beneficial, even if such period would be less than a year. We have also received feedback from stakeholders that they prefer having a 1 year performance period and have further suggested that the performance period start partway through the calendar year (for example, having the performance period run from July 1 through June 30). We additionally considered operational factors, such as that a 1 year performance period may be beneficial for all four performance categories because many measures and activities cannot be reported in a shorter time frame. We also considered that data submission activities and claims for items and services furnished during the 1 year performance period (which could be used for claims- or administrative claims-based quality or cost measures) may not be fully processed until the following year.

    These circumstances will require adequate lead time to collect performance data, assess performance, and compute the MIPS adjustment so the applicable MIPS adjustment can be made available to each MIPS eligible clinician at least 30 days prior to when the MIPS payment adjustment is applied each year. For 2019, these actions will occur during 2018. In other payment systems, we have used claims that are processed within a specified time period after the end of the performance period, such as 60 or 90 days, for assessment of performance and application of the MIPS payment adjustment. For MIPS, we proposed at § 414.1325(g)(2) to use claims that are processed within 90 days, if operationally feasible, after the end of the performance period for purposes of assessing performance and computing the MIPS payment adjustment. We proposed that if we determined that it is not operationally feasible to have a claims data run-out for the 90-day timeframe, then we would utilize a 60-day duration in the calendar year immediately following the performance period.

    This proposal does not affect the performance period per se, but rather the deadline by which claims for items and services furnished during the performance period need to be processed for those items and services to be included in our calculation. To the extent that claims are used for submitting data on MIPS measures and activities to us, such claims would have to be processed by no later than 90 days after the end of the applicable performance period, in order for information on the claims to be included in our calculations. As noted in this section, if we determined that it is not operationally feasible to have a claims data run-out for the 90-day timeframe, then we would utilize a 60-day duration. As an alternative to our proposal, we also considered using claims that are paid within 60 days after 2017, for assessment of performance and application of the MIPS payment adjustment for 2019. We solicited comments on both approaches.
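
    A minimal sketch of the claims run-out described above, assuming hypothetical claim records and field names; only the 90-day window (with a 60-day fallback) after the end of the performance period is taken from the proposal:

        from datetime import date, timedelta

        def claims_within_runout(claims, performance_period_end, runout_days=90):
            """Keep only claims that count toward MIPS calculations.

            `claims` is a list of dicts with hypothetical keys 'service_date'
            and 'processed_date'. A claim for an item or service furnished
            during the performance period counts only if it is processed no
            later than `runout_days` (90, or 60 if 90 is not operationally
            feasible) after the performance period ends.
            """
            cutoff = performance_period_end + timedelta(days=runout_days)
            return [c for c in claims
                    if c["service_date"] <= performance_period_end
                    and c["processed_date"] <= cutoff]

        claims = [
            {"service_date": date(2017, 12, 15), "processed_date": date(2018, 2, 10)},
            {"service_date": date(2017, 12, 15), "processed_date": date(2018, 6, 1)},
        ]
        # Only the first claim is processed by the 90-day cutoff (March 31, 2018).
        print(len(claims_within_runout(claims, date(2017, 12, 31))))  # 1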

    Given the need to collect and process information, we proposed at § 414.1320 that for 2019 and subsequent years, the performance period under MIPS would be the calendar year (January 1 through December 31) 2 years prior to the year in which the MIPS adjustment is applied. For example, the performance period for the 2019 MIPS adjustment would be the full CY 2017, that is, January 1, 2017 through December 31, 2017. We proposed to use the 2017 performance year for the 2019 MIPS payment adjustment consistent with other CMS programs. This approach allows for a full year of measurement and sufficient time to base adjustments on complete and accurate information.

    For individual MIPS eligible clinicians and groups with less than 12 months of performance data to report, such as when a MIPS eligible clinician switches practices during the performance period or when a MIPS eligible clinician may have stopped practicing for some portion of the performance period (for example, a MIPS eligible clinician who is on family leave, or has an illness), we proposed that the individual MIPS eligible clinician or group would be required to report all performance data available from the performance period. Specifically, if a MIPS eligible clinician is reporting as an individual, they would report all partial year performance data. Alternatively, if the MIPS eligible clinician is reporting with a group, then the group would report all performance data available from the performance period, including partial year performance data available for the individual MIPS eligible clinician.

    Under this approach, MIPS eligible clinicians with partial year performance data could achieve a positive, neutral, or negative MIPS adjustment based on their performance data. We proposed this approach to incentivize accountability for all performance during the performance period. We also believe these policies would help minimize the impact of partial year data. First, MIPS eligible clinicians with volume below the low-volume threshold would be excluded from any MIPS payment adjustments. Second, MIPS eligible clinicians who report measures, yet have insufficient sample size, would not be scored on those measures and activities. Refer to section II.E.6. of this final rule with comment period for more information on scoring.

    To potentially refine this proposal in future years, we solicited comments on methods to accurately identify MIPS eligible clinicians with less than a 12-month reporting period, notwithstanding common and expected absences due to illness, vacation, or holiday leave. Reliable identification of these MIPS eligible clinicians would allow us to analyze the characteristics of MIPS eligible clinicians' patient population and better understand how a reduced reporting period impacts performance.

    We also solicited public comment on an alternative approach for future years for assessment of individual MIPS eligible clinicians with less than 12 months of performance data in the performance year. For example, if we can identify such MIPS eligible clinicians and confirm there are data issues that led to invalid performance calculations, then we could score the MIPS eligible clinician with a final score equal to the performance threshold, which would result in a zero MIPS payment adjustment. We note this approach would not assess a MIPS eligible clinician's performance based on partial-year performance data. We do not believe that consideration of partial year performance is necessary for assessment of groups, which should have adequate coverage across MIPS eligible clinicians to provide valid performance calculations.

    We also solicited comment on reasonable thresholds for considering performance that is less than 12 months. For example, we expect that some MIPS eligible clinicians will take leave related to illness, vacation, and holidays. We would not anticipate applying special policies for lack of performance related to these common and expected absences assuming MIPS eligible clinicians' quality reporting includes measures with sufficient sample size to generate valid and reliable scores. We solicited comment on how to account for MIPS eligible clinicians with extended leave that may affect measure sample size.

    We solicited comments on these proposals and approaches. The following is a summary of the comments we received regarding our proposals for the MIPS performance period.

    Comment: Numerous commenters believed that the first MIPS performance period should be delayed or treated as a transition year. The commenters stated that the proposed timeline for implementation was too compressed, unrealistic, and aggressive. They cited numerous educational and readiness factors for the recommended delay, including: Time needed for stakeholders to digest the final rule with comment period, engage in further education, and make the necessary modifications to their practices without overly burdening their systems in such a short implementation window; and time needed to establish the administrative and technological tools necessary to meet the reporting requirements. The commenters suggested numerous alternative start dates to allow what they believed would be sufficient time for MIPS eligible clinicians to prepare for reporting, including a 2-year delay in implementation, using CY 2018 as the initial assessment period for MIPS, an interval of no less than 15 months between the adoption of the final rule with comment period and its implementation, a start date no earlier than July 1, 2017, and lastly a start date of April 1, 2017.

    Response: We appreciate the suggestions and have examined the issues raised closely. We agree with the commenters that, to ensure a successful implementation of MIPS, MIPS eligible clinicians need additional time to prepare their practices for reporting under MIPS. Therefore, we have decided to finalize a modification of our proposal for the performance period for the transition year of MIPS to provide flexibility to MIPS eligible clinicians as they familiarize themselves with MIPS requirements in 2017, while maintaining reliability. Specifically, we are finalizing at § 414.1320(a)(1) that for purposes of the 2019 MIPS payment year, the performance period for all performance categories and submission mechanisms, except for the cost performance category and data for the quality performance category reported through the CMS Web Interface, for the CAHPS for MIPS survey, and for the all-cause hospital readmission measure, is a minimum of a continuous 90-day period within CY 2017, up to and including the full CY 2017 (January 1, 2017 through December 31, 2017). Thus, MIPS eligible clinicians will only need to report for a minimum of a continuous 90-day period within CY 2017 for the majority of the submission mechanisms. This 90-day period can occur anytime within CY 2017, so long as it begins on or after January 1, 2017, and ends on or before December 31, 2017. We note that the continuous 90-day period is a minimum; MIPS eligible clinicians may elect to report data for more than a continuous 90-day period, including a period of up to the full 12 months of 2017. For groups that elect to utilize the CMS Web Interface or report the CAHPS for MIPS survey, we note that these submission mechanisms utilize certain assignment and sampling methodologies that are based on a 12-month performance period. In addition, administrative claims-based measures (which include all of the cost measures and the all-cause hospital readmission measure) are based on a population attributed over the 12-month period. Additionally, we are finalizing at § 414.1320(a)(2) that for purposes of the 2019 MIPS payment year, for data reported through the CMS Web Interface or the CAHPS for MIPS survey and for administrative claims-based cost and quality measures, the performance period under MIPS is CY 2017 (January 1, 2017 through December 31, 2017). Please note that, unless otherwise stated, any reference in this final rule with comment period to the “CY 2017 performance period” is intended to be an inclusive reference to all performance periods occurring during CY 2017. More details on these submission mechanisms are covered in section II.E.5.a.2. of this final rule with comment period.

    We believe the flexibilities we are providing in our modified proposal discussed above will provide time for stakeholders to engage in further education about the new requirements and make the necessary modifications to their practices to accommodate reporting under MIPS. We note that the continuous 90-day period required for reporting can begin at any point within the CY 2017 performance period, up to and including October 2, 2017, which is the last date on which the continuous 90-day period can begin and still end within the CY 2017 performance period.
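
    As a simple check of the transition-year window described above (a sketch only; it assumes nothing beyond the 90-day minimum and the CY 2017 bounds stated in this section):

        from datetime import date

        def valid_transition_year_window(start, end):
            """Check a proposed 2017 reporting window against the finalized
            transition-year policy: a continuous period of at least 90 days
            that begins on or after January 1, 2017 and ends on or before
            December 31, 2017.
            """
            days = (end - start).days + 1  # inclusive of both endpoints
            return (start >= date(2017, 1, 1)
                    and end <= date(2017, 12, 31)
                    and days >= 90)

        # A window beginning October 2, 2017 and running through the end of
        # the year spans 91 days, so it satisfies the 90-day minimum.
        print(valid_transition_year_window(date(2017, 10, 2), date(2017, 12, 31)))  # True
        print(valid_transition_year_window(date(2017, 11, 1), date(2017, 12, 31)))  # False (61 days)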

    For the second year under the MIPS, we are finalizing our proposal to require reporting and performance assessment for the full CY performance period for purposes of the quality and cost performance categories. Specifically, we are finalizing at § 414.1320(b)(1) that for the 2020 MIPS adjustment, for purposes of the quality and cost performance categories, the performance period is CY 2018 (January 1, 2018 through December 31, 2018). We do believe, however, that for the improvement activities and advancing care information performance categories, utilizing a continuous 90-day period that occurs during the 12-month MIPS performance period will assist MIPS eligible clinicians as they continue to familiarize themselves with the requirements under the MIPS. Additionally, to allow MIPS eligible clinicians and groups adequate time to transition to technology certified to the 2015 Edition for use in CY 2018, we believe it is appropriate to allow reporting on any continuous 90-day period that occurs during the 12-month MIPS performance period for the advancing care information performance category in CY 2018. Specifically, for the improvement activities and advancing care information performance categories, we are finalizing at § 414.1320(b)(2) that the performance period under MIPS is a minimum of a continuous 90-day period within CY 2018, up to and including the full CY 2018 (January 1, 2018 through December 31, 2018).
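
    The finalized periods for the first two years can be summarized roughly as follows (an illustrative sketch; the labels and structure are abbreviations of the text above, not regulatory language):

        # Minimum reporting period by performance category, as finalized above.
        # "90-day minimum" means any continuous period of at least 90 days
        # within the calendar year, up to and including the full year.
        MIPS_PERFORMANCE_PERIODS = {
            2017: {  # transition year (2019 MIPS payment year)
                "quality (most submission mechanisms)": "90-day minimum",
                "quality (CMS Web Interface, CAHPS for MIPS, all-cause readmission)": "full CY 2017",
                "cost": "full CY 2017",
                "improvement activities": "90-day minimum",
                "advancing care information": "90-day minimum",
            },
            2018: {  # second year (2020 MIPS payment year)
                "quality": "full CY 2018",
                "cost": "full CY 2018",
                "improvement activities": "90-day minimum",
                "advancing care information": "90-day minimum",
            },
        }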

    Comment: Other commenters suggested making 2018 the first performance period for the first payment year of 2019. They stated that MIPS eligible clinicians could receive more timely feedback on their performance and still have the opportunity to make improvements in the second half of 2017 before the first performance period would begin.

    Response: It is not technically feasible to establish the first performance period in 2018 and begin applying MIPS payment adjustments in 2019. Some of the factors involved include: Allowing for a data submission period that occurs after the close of the performance period, running our calculation and scoring engines to calculate performance category scores and the final score, allowing for a targeted review period, establishing and maintaining budget neutrality, and issuing each MIPS eligible clinician's specific MIPS payment adjustment. Based on our experience under the PQRS, VM, and Medicare EHR Incentive Program for Eligible Professionals, all of these activities on average take upwards of 9-12 months. We will continue to examine these operational processes to add efficiencies and reduce this timeframe in future years.

    Comment: Other commenters noted that MIPS eligible clinicians ideally require 18 to 24 months' time to adequately identify, adopt, and apply measures to established workflows for consistent data capture. The commenters also noted that most MIPS eligible clinicians are not yet comfortable with ICD-10 and added that there are 1,491 new ICD-10-CM codes becoming effective in October 2016, and that MIPS eligible clinicians would not have sufficient time to refine processes within the proposed timeline (that is, by January 1, 2017).

    Response: We are finalizing a modified CY 2017 performance period, as discussed above. We believe this will allow MIPS eligible clinicians to adequately identify, adopt, and apply measures to established workflows for consistent data capture as they familiarize themselves with MIPS requirements in 2017. We appreciate the concern raised by the commenters on the introduction of the new ICD-10 codes. However, we note that there are numerous resources available to assist with incorporating these codes into workflows at https://www.cms.gov/medicare/Coding/ICD10/index.html.

    Comment: Another commenter requested more time for clinicians and payers other than Medicare to make adjustments to programs and amend large numbers of significant risk‐based contracts between states and health plans, and between health plans and their network delivery system individual practice associations (IPAs), groups, and clinicians. The commenter stated that this would allow time for significant contract and subcontract amendments for other payers, and system changes for metrics, claims, and benefit systems.

    Response: We believe the flexibilities we are providing in the first performance period, as discussed in this final rule with comment period, will allow MIPS eligible clinicians and third party intermediaries the time needed to update their systems to meet program requirements and amend any agreements as necessary.

    Comment: Some commenters were concerned that setting the performance period too soon would not give third party intermediaries, such as EHR vendors, qualified registries, health IT vendors, and others the time needed to update their systems to meet program requirements. The commenters recommended setting the performance period later to allow these third party intermediaries time to validate new data entry and testing tools and overhaul their systems to comply with 2015 edition certification requirements. Another commenter believed the proposed policies would often require the use of multiple database systems that could not be accomplished in the time required.

    Response: We agree with the commenters that ensuring that third party intermediaries have sufficient time to update their technologies and systems will be a key component of ensuring that MIPS eligible clinicians are ready to meet program requirements. We believe the flexibilities we are providing in the first performance period, as discussed in this final rule with comment period, will allow third party intermediaries the time needed to update their systems to support MIPS eligible clinician participation. We note that there are no new certification requirements for the Quality Payment Program, and many health IT vendors have already begun work toward the 2015 Edition certification criteria, which were finalized in October 2015. We believe that the flexibility offered and the lead time before required use of technology certified to the 2015 Edition will mitigate these concerns; however, we intend to monitor health IT development progress, adoption and implementation, and the readiness of QCDRs, health IT vendors, and other third parties supporting MIPS eligible clinician participation.

    Comment: Another commenter believed a later start date would provide CMS with more time to address several issues that were absent from the proposed rule, including the development of virtual groups, improved risk-adjustment and attribution methods, further refinement of episode-based resource measures and measurement tools, and enhanced data feedback to participants. One commenter stated that they believed that the government programs that regulate and support MIPS have yet to be designed, tested, and implemented. The commenter stated they do not have MIPS performance thresholds or measure benchmark data and therefore cannot prepare their office to streamline the new processes and report appropriately in 2017.

    Response: We respectfully disagree with the commenter and intend to address further refinements to the MIPS program in future years. We appreciate the commenter's desire to delay the start of the MIPS until we are able to have full implementation of these factors. However, as we have noted in other sections within this final rule with comment period, we intend to implement these provisions when technically feasible, as in the case of virtual groups, and when available, as in the case of improved risk-adjustment and attribution methods as well as additional episode-based resource measures. Additionally, as noted in section II.E.10. of this final rule with comment period, we intend to provide feedback to participants as required by statute, and we will enhance these feedback efforts over time. Lastly, as indicated in section II.E.6.a. of this final rule with comment period, due to the additional factors we are incorporating to simplify our scoring methodology, we have published the MIPS performance threshold in this final rule with comment period, and we will publish the measure benchmarks where available prior to the beginning of the performance period.

    Comment: Several commenters recommended that the first performance period occur later than January 1, 2017 based on commenters' analysis of the MACRA statute. Some commenters believe a delayed start date of July 1, 2017 would better match Congressional intent that the performance period be as close to the MIPS payment adjustment period as possible, while still allowing for the related MIPS payment adjustments to take place in 2019. The commenters further recommended that CMS use the time between the publication of the final rule with comment period and a delayed performance period start date to test and refine the performance feedback mechanisms for the Quality Payment Program. The commenters stated that by including the “as close as possible” language in section 1848(q)(4) of the Act, the Congress sought to urge CMS to select a performance period that will close the gap on CMS's practice of setting a 2-year look-back period for Medicare quality programs.

    Response: We appreciate the commenters' concerns about Congressional intent for having a performance period as close as possible to the related MIPS payment adjustments. However, we believe our proposal is consistent with section 1848(q)(4) of the Act, as a performance period that occurs 2 years prior to the payment year is as close to the payment year as is currently possible. As noted above, from our experiences under the PQRS, VM, and Medicare EHR Incentive Program for Eligible Professionals, it takes approximately 9-12 months to perform the operational processes to produce a comprehensive and accurate list of MIPS eligible clinicians to receive a MIPS payment adjustment. We will continue to assess this timeframe for efficiencies in the future.

    Comment: Some commenters noted that section 1848(s) of the Act, as added by section 102 of MACRA, requires a quality measure development plan with annual progress reports, the first of which must be issued by May 1, 2017. The commenters stated that by starting the Quality Payment Program on January 1, 2017, before the first annual progress report is finalized, CMS will not have finalized key program requirements before it begins MIPS.

    Response: We note that the commenters are referring to 2 separate requirements under section 1848(s) of the Act. The quality measure development plan, known as the CMS Quality Measure Development Plan (MDP), was finalized and posted on May 2, 2016; it is available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf and is required to be updated as appropriate. In addition, the MDP Annual Report, which is to report on progress in developing measures, is required to be posted annually beginning not later than May 1, 2017. We intend to post the initial MDP Annual Report on May 1, 2017. While these statutory requirements are mandatory and support the development of the MIPS program, they are not prerequisites for the implementation of the MIPS program.

    Comment: Several commenters stated that the performance period was too early and suggested that CMS create an initial transitional performance period or phase-in period for the MIPS program. These commenters recommended numerous modifications as part of the transitional or phase-in period, including: Phasing in some of the performance requirements such as requiring fewer quality measures and/or improvement activities in the transition year, creation of gradual performance targets which would allow sufficient time for participants to adapt to data collection and reporting prior to increasing performance standards, and phasing in the MIPS adjustment amounts such as applying a maximum MIPS payment adjustment of 2 percent in the transition year of the program, or applying negative MIPS adjustments only to groups of MIPS eligible clinicians above a certain size. These commenters noted that the advantages of a transitional or phase-in period include allowing CMS to address its concerns regarding calculation of outcome and claims-based measures, the feasibility of using different reporting mechanisms, meeting statutory deadlines, postponing changes to the advancing care information performance category, and the capability of CMS' internal processes.

    The commenters suggested various dates for the transitional or phase-in period, such as: January 1, 2017 through June 30, 2017; July 1, 2017 through December 31, 2017; allowing MIPS eligible clinicians to select a 6-month performance period; or allowing MIPS eligible clinicians to use the full calendar year with an optional look-back to January 1 in 2017. The commenters requested that CMS provide technical assistance and a submission verification process during the transition period.

    Response: We agree with the commenters that there are numerous advantages to having a transitional or phase-in period for the transition year. As indicated previously in this section of this final rule with comment period, we have modified the performance period for the transition year to occur for a minimum of one continuous 90-day period up to a full calendar year within CY 2017 for all data in a given performance category and submission mechanism. We believe that this modified performance period as well as the modifications we are making to our scoring methodology as reflected in section II.E.6. of this final rule with comment period address a number of the concerns the commenters have raised. Lastly, we note that section 1848(q)(6) of the Act requires us to apply the MIPS adjustment based on a linear sliding scale and an adjustment factor of an applicable percent, which the statute defines as 4 percent for 2019. We do not have the discretion to apply a smaller adjustment factor to MIPS eligible clinicians such as only 2 percent.
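
    To illustrate the statutory constraint described above, the following sketch (not part of the regulatory text) shows how a linear sliding scale with a 4 percent applicable percent could map a final score to a payment adjustment; the hypothetical threshold value, the function name, and the omission of the budget neutrality scaling factor and the additional adjustment for exceptional performance are simplifying assumptions for illustration only.

    # Illustrative sketch only; not the scoring methodology finalized in this rule.
    APPLICABLE_PERCENT = 4.0  # percent; the applicable percent the statute defines for 2019

    def sliding_scale_adjustment(final_score, threshold, max_score=100.0):
        # Scores at the threshold receive a 0 percent adjustment; higher scores
        # scale linearly toward +4 percent, lower scores toward -4 percent, and
        # scores at or below one-quarter of the threshold receive the full
        # negative applicable percent.
        if final_score >= threshold:
            span = max_score - threshold
            return APPLICABLE_PERCENT * (final_score - threshold) / span if span else 0.0
        floor = threshold / 4.0
        if final_score <= floor:
            return -APPLICABLE_PERCENT
        return -APPLICABLE_PERCENT * (threshold - final_score) / (threshold - floor)

    # Example: with a hypothetical threshold of 3 points, a final score of 2
    # yields an adjustment between 0 and -4 percent.
    print(round(sliding_scale_adjustment(2.0, 3.0), 2))  # -1.78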

    Comment: Multiple commenters recommended that 2017 be utilized for reporting purposes only and not payment purposes. Their recommendations included having 2017 function as a straightforward reporting year, such as an “implementation and benchmarking” year, which would still allow CMS to collect data that would not be used for financial impacts in 2019. Other suggestions included utilizing 2017 as a beta test year for MIPS eligible clinicians, plan capabilities, and system preparedness. The commenters believed that a staged approach to MACRA implementation would provide for more coordinated change within the delivery system for patients, which must remain a focus for all as we continue embracing the Triple Aim of improving the patient experience of care (including quality and satisfaction); improving the health of populations; and reducing the per capita cost of health care. More information regarding the Triple Aim may be found at http://www.hhs.gov/about/strategic-plan/strategic-goal-1/.

    Response: We would like to explain that MIPS is a program where payment adjustments must be applied based on each MIPS eligible clinician's total performance on measures and activities. As such, we are not able to apply MIPS payment adjustments based on reporting alone. Additionally, as we have discussed above, we have made modifications to the performance period for the transition year of MIPS, as well as to the scoring methodology, as discussed in section II.E.6. of this final rule with comment period to allow MIPS eligible clinicians the opportunity to gain experience under the program without negative payment consequences.

    Comment: Other commenters urged changes to MIPS to provide flexibility for small practices. The commenters suggested a voluntary phase-in for small practices over a several-year period. Alternatively, the commenters suggested that CMS should not penalize very small practices (for example, five or fewer MIPS eligible clinicians) for a specified period of time, allowing them to implement and learn about MIPS reporting. Another commenter suggested that for the transition year of MIPS, CMS could permit small practices to be credited with full participation in MIPS based on a single quarter of successfully submitted 2017 data and permit larger practices to submit two quarters of data.

    Response: We have provided considerable flexibility for small practices throughout our MIPS proposals and this final rule with comment period. Specifically, we believe our modified low-volume threshold policy, as discussed in section II.E.3.c. of this final rule with comment period, will provide small groups considerable flexibility that will address the commenters' concerns.

    Comment: Some commenters were concerned that statements from the proposed rule indicating that MIPS eligible clinicians do not have to begin reporting at the start of the performance period, and suggesting that MIPS eligible clinicians will have more time to collect data, change workflows, and implement required MIPS and APM changes, create confusion because many of the MIPS program's quality measures require actions to be taken at the point of care and cannot be completed at a later date.

    Response: Our statements in the proposed rule accurately reflected our proposed policies. We regret any confusion created by those statements. The commenters are correct that many quality measures are required to be reported for every encounter. It is also correct, however, that other quality measures do not require reporting of every encounter (for example, NQF 0043: Pneumonia Vaccination Status for Older Adults). In general, the performance period is a window of time to report measures and, depending on the measure, MIPS eligible clinicians may need to report for just one quarter and the specified number of encounters for a given measure, or may need multiple encounters in multiple quarters for other measures.

    Comment: Some commenters stated that the proposal interrupts their current short-term course of action of meeting Meaningful Use in 2016 and requested that we utilize 2017 as a preparation year to implement, adopt, measure, monitor, and manage new measures and boost performance on measures that previously had low thresholds for which MIPS eligible clinicians have to maximize performance.

    Response: We note that for those MIPS eligible clinicians who have previously participated in the EHR Incentive Program, the measures and objectives required under the advancing care information performance category represent a reduction in the number and types of measures previously required. More information on the advancing care information performance category can be found in section II.E.5.g. of this final rule with comment period.

    Comment: There were various comments regarding the duration of the MIPS performance period. Many commenters supported the 12-month performance period and requested that CMS adhere to that timeline. The commenters stated that if timelines must be changed, CMS should do so before the performance period begins. Several commenters supported the performance period of one full year versus 90 days. They believed this would lead to consistent and high-quality data submission. Another commenter generally supported the proposed performance period but cautioned CMS that any shortened performance periods could burden certain MIPS eligible clinicians whose practices vary in volume based on factors outside of their control, such as their geographies, specialties, and the nature of the patients they treat. Other commenters believed CMS should not delay the Quality Payment Program implementation or finalize an abbreviated performance period in the transition year. These commenters suggested that CMS act immediately on the premise that implementation for 2017 should begin now with clear education and guidance in order to ensure successful transitions to the new Quality Payment Program.

    Response: We appreciate the commenters' support. We believe that measuring performance on a 12-month period is the most accurate and reliable method for measuring a MIPS eligible clinician's performance. We note that we are modifying our proposal to require reporting for a minimum continuous 90-day period of time within the CY 2017 performance period for the majority of available submission mechanisms for all data in a given performance category and submission mechanism. However, we strongly encourage all MIPS eligible clinicians to submit data for up to the full calendar year if feasible for their practice. We anticipate that MIPS eligible clinicians who are able to submit a more robust data set, such as data on a 12-month period, will have the benefit of having their full population of patients measured, which will assist these MIPS eligible clinicians on their quality improvement goals.

    Comment: Some commenters believed MACRA's four MIPS performance categories are adding complexity to the delivery of patient-centered care and do not increase the time medical clinicians spend with patients. Specifically, the commenters believed that there is not much of a difference between PQRS/MU and the new “quality” and “advancing care information” performance categories. The commenters added that the improvement activities performance category appears complicated and the cost performance category is intensive. The commenters proposed, as a solution, that measurable elements be captured for a 90-day period during the calendar year so that measuring tools would not need to be in place at all times, resulting in less disruption and a greater focus on patients.

    Response: Our intention in creating MIPS is to provide a more comprehensive and simplified system that provides value. The commenters are correct that we maintained many elements of the PQRS and EHR Incentive Program that we found through experience to be meaningful to clinicians. The requirements for the cost and improvement activities performance categories are described in sections II.E.5.e. and II.E.5.f., respectively, of this final rule with comment period. We believe these performance categories to be very low in burden. In addition, as described in section II.E.5.e. of this final rule with comment period, the cost performance category will account for 0 percent of the final score in 2019, and we are redistributing the final score weight from the cost performance category to the quality performance category. Lastly, as noted above, we are allowing MIPS eligible clinicians to report on quality, improvement activities, and advancing care information performance category information for a minimum of a continuous 90-day period during the CY 2017 performance period for the majority of available submission mechanisms for all data in a given performance category and submission mechanism. In addition, the cost performance category will be calculated based on the performance period using administrative claims data. As a result, individual MIPS eligible clinicians and groups will not be required to submit any additional information for the cost performance category.
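
    As a purely illustrative sketch of the weight redistribution described above (the function and the example category scores are hypothetical, and it assumes the transition-year weights of 60 percent for quality, 0 percent for cost, 15 percent for improvement activities, and 25 percent for advancing care information discussed elsewhere in this final rule with comment period), the final score can be thought of as a weighted sum in which the weight the cost performance category would otherwise carry for 2019 is carried instead by the quality performance category:

    # Illustrative sketch only; assumes the transition-year performance category
    # weights (quality 60 percent, cost 0 percent, improvement activities
    # 15 percent, advancing care information 25 percent).
    TRANSITION_YEAR_WEIGHTS = {
        "quality": 0.60,  # includes the weight redistributed from cost
        "cost": 0.00,
        "improvement_activities": 0.15,
        "advancing_care_information": 0.25,
    }

    def final_score(category_scores, weights=TRANSITION_YEAR_WEIGHTS):
        # Each category score is expressed on a 0-100 scale; missing
        # categories contribute nothing.
        return sum(weights[c] * category_scores.get(c, 0.0) for c in weights)

    # Example: the cost category does not affect the result in the transition year.
    print(final_score({"quality": 90, "improvement_activities": 100,
                       "advancing_care_information": 80}))  # 89.0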

    Comment: Another commenter believed a full year of quality reporting is necessary to ensure data reliability for small practices but encouraged CMS to finalize a 90-day performance period for the improvement activities and advancing care information performance categories. The commenter believed CMS could finalize a shorter performance period for quality reporting in the future if 2015 data is modeled to show sufficient reliability under a shorter performance period.

    Response: We agree with the commenter and believe that measuring performance on a 12-month period is the most accurate method for measuring a clinician's performance. However, for the transition year of MIPS, we are providing flexibility while maintaining reliability and finalizing a modified performance period, as discussed above, so that MIPS eligible clinicians may familiarize themselves with MIPS requirements.

    Comment: Several commenters requested that CMS define the performance period as less than a full year. The suggestions for the start date varied, including: A suggested start date of July 1, 2017, which would allow MIPS eligible clinicians enough time to review and select appropriate measures; a 9-month performance period of April 1 through December 31, 2017; a 90-day period from January 1 through March 31 of each year because the commenter believed that this shorter time frame would not differ significantly from a full-year assessment period; and a period occurring from January 15 through April 15 so that reports could be compiled and tested prior to submission. These commenters cited various concerns, including that full calendar year reporting would be a significant departure from current reporting requirements under the EHR Incentive Program and that it would not allow for full validation and testing of EHR-generated data following software upgrades or measurement specification changes. Other commenters were concerned that the proposal to use a full calendar year for the performance period could create administrative burden for practices and limit innovation without improving the validity of the data. The commenters recommended that in future years, CMS take advantage of the flexibility granted under the MACRA statute to allow MIPS eligible clinicians to select a shorter performance period for either the MIPS program or APM incentive payments. Another commenter believed that CMS should permit MIPS eligible clinicians to select a shorter performance period if they believe it is more appropriate for their practice.

    Response: We understand and appreciate the commenters' requests that the performance period for the transition year of the program be shorter than 12 months. For the transition year of MIPS, we are providing flexibility while maintaining reliability and finalizing a modified performance period, as discussed above, so that MIPS eligible clinicians may familiarize themselves with MIPS requirements.

    Comment: A few commenters noted that measures for the cost performance category may need to be calculated over a longer period of time in order to ensure their reliability and applicability to practices, and recommended that if CMS shortens the initial MIPS performance period, CMS should make a distinction between performance periods for performance categories where data submission is required versus those where CMS calculates measures using administrative claims data. The commenters suggested that CMS should conduct detailed analysis of VM data to determine the extent to which including data for a year rather than 6 or 9 months improves reliability and expands applicability of the measures.

    Response: We appreciate the commenters' suggestions. We have not done an analysis to look at reliability of the measures using a 6-month or 9-month performance period. We will consider this approach for future rulemaking.

    Comment: Another commenter recommended that CMS should also reduce the case minimums for measures as MIPS eligible clinicians will not have sufficient time to see the same number of patients during a shortened performance period.

    Response: We refer the commenter to section II.E.6.a.(2) of this final rule with comment period where we discuss the quality scoring proposals and the case minimum requirements.

    Comment: Other commenters recommended a 90-day performance period for 2017 for private specialty practices, as well as a 90-day performance period for any reporting year that the practice is required to upgrade their version of CEHRT. For example, the commenters noted that in mid-2017, many MIPS eligible clinicians will be upgrading from EHR technology certified to the 2014 Edition to EHR technology certified to the 2015 Edition. The commenters stated that this can often cause data integrity issues and would continuously place the practice on a split CEHRT any year that this type of upgrade occurs. They suggested a 90-day performance period during the upgrade year would allow a practice to upgrade and attest to the most recent version and standards.

    Response: We are modifying our proposal to allow reporting for a minimum of a continuous 90-day period of time within the CY 2017 performance period for the majority of available submission mechanisms for all data in a given performance category and submission mechanism. Additionally, we understand the commenters' concerns and rationale for requesting a 90-day performance period. We note that for the first performance period in 2017, we will accept a minimum of 90 days of data within CY 2017, though we greatly encourage MIPS eligible clinicians to meet the full year performance period. In order to allow MIPS eligible clinicians and groups adequate time to transition to technology certified to the 2015 Edition for use in CY 2018, we believe it is appropriate to also allow a performance period of a minimum of a continuous 90-day period within CY 2018 for the advancing care information performance category.

    Comment: Another commenter requested that CMS offer advance notice appropriate to the size of the change (for example, transitioning to new editions of CEHRTs might require years of notice, whereas annually updated benchmarks might require only a few months). The commenter requested that the proposed policies not be implemented until at least 6 months after the final rule with comment period is published.

    Response: We will provide as much advance notice as is necessary when making changes to the MIPS program. We recognize that all parties involved in the MIPS program require advance notice to make adjustments to accommodate changes.

    Comment: Some commenters suggested that CMS shorten the performance period to 9 months of the calendar year, followed by 3 months of data analysis to calculate the scores and MIPS payment adjustments. The rationale for this recommendation included a number of program improvements: reducing administrative burden in MIPS, aligning the performance period across categories, shrinking the 2-year lag period between performance and payment, and increasing the relevance and timeliness of feedback. The commenters also stated that this would provide an opportunity to set benchmarks based on more current data. Based on one commenter's polling of its members, 92 percent preferred a performance period of any 90 consecutive days compared to the proposed performance period.

    Response: We considered utilizing a 9-month performance period as the commenters recommended; however, we did not adopt this option because it would still require a “2-year lag” to account for the post-submission processes of calculating each MIPS eligible clinician's final score, establishing budget neutrality, issuing the payment adjustment factors, and allowing for a targeted review period to occur prior to the application of the MIPS payment adjustment to MIPS eligible clinicians' claims. As stated above, we are modifying our proposal and finalizing that MIPS eligible clinicians will only need to report for a minimum of a continuous 90-day period in 2017, for the majority of the data submission mechanisms. We believe this flexibility will allow for a number of program improvements, including reducing administrative burden in MIPS for the transition year, and will align across the quality, advancing care information, and improvement activities performance categories. In addition, we will continue working with stakeholders to improve feedback provisions under MIPS and to shorten the “2-year lag” that the commenters describe.

    Comment: One commenter stated that they recognized a shorter performance period may present challenges for CMS systems and processes; therefore, they urged CMS to work with MIPS eligible clinicians to develop options and a specific plan to provide accommodations where possible.

    Response: We appreciate the comment and will continue to work closely with stakeholders throughout the Quality Payment Program.

    Comment: Other commenters believed a shorter performance period would eliminate the participation burden and confusion for MIPS eligible clinicians who may switch practices mid-year and have to track and report data for multiple TIN/NPI combinations under the proposed full calendar year performance period.

    Response: We agree with the commenters that the shortened minimum continuous 90-day period of time will assist in decreasing participation burden. We note that the modified performance period will not eliminate the need for tracking multiple TIN/NPIs, depending upon the specific circumstances of the MIPS eligible clinician, but we agree with the commenters that it will mitigate this issue.

    Comment: A few commenters recommended a 6-month performance period for MIPS with an optional look-back period for registries to increase sample size, validity, and reliability, and an extension of data submissions for QCDRs to April 30 following the performance period, or 4 months after the performance period, to allow for the capture and analytics required for the use of risk-adjusted outcomes data.

    Response: Our modified proposal of a continuous 90-day period within the CY 2017 performance period for all data in a given performance category and submission mechanism is a minimum period and we strongly encourage all MIPS eligible clinicians to report on data for a full year where possible for their practice. We believe this policy will address the commenters' concerns while maintaining reliability. Our policies regarding the performance period are described in more detail in section II.E.4. of this final rule with comment period. We note that it is not clear how a longer data submission timeframe will help with the capture of risk-adjusted data elements used in outcomes measures. In most, if not all, instances, any co-morbidities affecting the outcome for a patient would be known before or at the time the care is rendered.

    Comment: One commenter suggested that if CMS rejects changing the initial performance period for 2017 to 90 days, it should implement preliminary and final performance periods, with analysis periods (from January to March) and implementation periods (from April to May), to allow MIPS eligible clinicians to evaluate their performance against the various MIPS requirements from August to September, followed by a final performance period from October to December.

    Response: We thank the commenter for their feedback. As discussed above, we are modifying our proposal to allow reporting for a minimum of a continuous 90-day period within the CY 2017 performance period for the majority of available submission mechanisms for all data in a given performance category and submission mechanism.

    Comment: Many commenters stated that CMS must work to reduce the 2-year gap between the performance period and the payment year because it is burdensome, is neither meaningful nor actionable as MIPS eligible clinicians will not know what they must adjust to meet benchmarks, and hinders timely data reporting and feedback. One commenter acknowledged the operational difficulty associated with having performance periods close to MIPS payment adjustment periods, but requested that CMS work to shorten the look-back period between performance assessment and adjustment.

    Response: We agree with commenters that improved feedback mechanisms are always important, and we will continue working with stakeholders to provide timely and better feedback under MIPS and to shorten the “2-year gap” that the commenter describes.

    Comment: There were various suggestions on the most appropriate time gap between the performance period and the payment year. Several commenters suggested that a 1-year gap would be more appropriate and others proposed a 6-month time gap. Another commenter believed that the time lag of essentially 2 years between the performance period and the payment year severely disadvantages MIPS eligible clinicians falling below the top tier performance threshold and inflates the rating of competing MIPS eligible clinicians, who can rest on the laurels of their prior performance years. Further, the commenter noted that if a MIPS eligible clinician had an unsatisfactory performance rating (for example, from data collected in January of 2016) and took corrective action to earn a higher rating, the efforts of that corrective action would not be available to the public for a minimum of 2 years. A few commenters believed CMS should increase the relevance and timeliness of data, which could be provided on a quarterly basis.

    Response: We appreciate the commenters' feedback. We agree with the commenters that a delay between the performance period and the MIPS payment adjustment year impacts the clinicians' ability to make timely improvements within their practice. For the initial years of MIPS, we do anticipate that this gap between the performance period and the payment adjustment year will continue to occur to allow time for submission and calculation of data, issuance of feedback, a targeted review period, calculation of final scores, and application of clinician-specific MIPS adjustments in time for the payment year.

    Comment: Other commenters believed CMS should use language clarifying that the MIPS performance period begins on January 1, 2017. The commenters suggested linking the language for the performance year with the adjustment year in some way (for example, “MIPS 2017/19”, “2017 performance period (2019)”).

    Response: We will ensure that all communications clearly indicate the link between the performance period and the MIPS payment adjustment year.

    Comment: A few commenters expressed support for CMS' proposal of a 90-day claims data run-out. Another commenter stated that if the proposed window is not feasible, the commenter supported a 60-day window.

    Response: We appreciate the commenters' feedback. Based on further analyses of Medicare Part B claims for 2014, we have determined that there is only a 0.5 percent difference in claims processing completeness when using 90 days rather than 60 days. Therefore, we are finalizing our alternative proposal at § 414.1325(f)(2) to use Medicare Part B claims with dates of service during the performance period that are processed no later than 60 days following the close of the performance period.

    Comment: Another commenter requested more information regarding how MIPS eligible clinicians participating for part of the performance period will be assessed against MIPS eligible clinicians participating for the full performance period. The commenter cautioned against penalizing MIPS eligible clinicians not practicing for reasons beyond their control, such as for health reasons. Other commenters expressed concern that MIPS eligible clinicians could attempt to game the system with extended leave. Other commenters supported the expectations for reporting when MIPS eligible clinicians have a break in their practice, and one commenter expressed concern about MIPS eligible clinicians who change groups because doing so may negatively impact group performance. The commenters believed a policy for exceptions may mitigate the problem and provide consistency. Another commenter stated that MIPS eligible clinicians with less than 12 months of performance data should be assessed on the period of time for which they do report.

    Response: As discussed in this final rule with comment period, we are modifying our proposal to allow reporting for a minimum of a continuous 90-day period within the CY 2017 performance period for the majority of available submission mechanisms for all data in a given performance category and submission mechanism. We would like to note that we are finalizing that individual MIPS eligible clinicians or groups who report less than 12 months of data (due to family leave, etc.) would be required to report all performance data available from the performance period. For example, for the performance period in 2017, MIPS eligible clinicians who have less than 90 days' worth of data would be required to submit all performance data that they have available. We are finalizing this proposal with modification to apply to any applicable performance period (for example, to any 90-day period). Based on the Medicare Part B data available to us, we do not intend to make any scoring adjustments based on the duration of the performance period. We recognize that a longer (that is, 12-month) performance period provides greater assurance of reliability with respect to the submitted data and therefore strongly encourage all MIPS eligible clinicians who have the ability to submit data for a period greater than 90 days to do so.

    Comment: A few commenters supported the proposed performance period, but requested that CMS increase its outreach to MIPS eligible clinicians who have not successfully reported under PQRS in the past to help them to achieve the reporting standard during this time. A few commenters stated that going forward CMS should ensure that the timeframes for annual MACRA regulations, subregulatory guidance and other agency communications are sufficient to allow MIPS eligible clinicians and health plans to act on the information in advance of the applicable performance years. For purposes of publishing the list of APMs, Medical Home Models, MIPS APMs, Advanced APMs, and eventually other-payer APMs, the commenter believed that CMS should start the process at least 15 months in advance of the applicable performance year, and finalize the list at least 9 months in advance of the applicable performance year.

    Response: We appreciate the support. We have employed multiple mechanisms to reach out to all MIPS eligible clinicians and provide support. We will make every effort to ensure the timeframes for agency communications are sufficient to allow MIPS eligible clinicians and health plans to act on the information in advance of the applicable performance period. Please refer to section II.F.4. of this final rule with comment period for further information on how we will make clear the status of any APM upon its first public announcement.

    Comment: Other commenters urged CMS to communicate submission problems to both vendors and practices as soon as possible to allow for alternative submission mechanisms and to encourage vendors to be open about their ability to meet data submission standards.

    Response: We make every effort to communicate submission problems to stakeholders through multiple communication channels including health IT vendors, specialty societies, registries, and MIPS eligible clinicians as soon as possible and will continue to do so in the future.

    Comment: One commenter supported using claims paid within 60 days after the performance period.

    Response: We agree and appreciate the commenter's support. We are finalizing our alternative proposal to use claims that are processed within 60 days after the end of the performance period for purposes of assessing performance and computing the MIPS payment adjustment.

    After consideration of the comments we received regarding the MIPS performance period, we are finalizing a modification of our proposal of a 12-month performance period that occurs 2 years prior to the applicable payment year. For the transition year of MIPS, we believe it is important that we provide flexibility to MIPS eligible clinicians as they familiarize themselves with MIPS requirements while maintaining reliability. Therefore, we are finalizing at § 414.1320(a)(1) that for purposes of the 2019 MIPS payment year, for all performance categories and submission mechanisms except for the cost performance category and data for the quality performance category reported through the CMS Web Interface, for the CAHPS for MIPS survey, and for the all-cause hospital readmission measure, the performance period under MIPS is a minimum of a continuous 90-day period within CY 2017, up to and including the full CY (January 1, 2017 through December 31, 2017). Thus, MIPS eligible clinicians will only need to report for a minimum of a continuous 90-day period within CY 2017, for the majority of the submission mechanisms. This 90-day period can occur anytime within CY 2017, so long as the 90-day period begins on or after January 1, 2017, and ends on or before December 31, 2017. Additionally, for further flexibility and ease of reporting this 90-day period can differ across performance categories. For example, a MIPS eligible clinician may utilize a 90-day period that spans from June 1, 2017-August 30, 2017 for the improvement activities performance category and could use a different 90-day period for the quality performance category, such as August 15, 2017-November 13, 2017. The continuous 90-day period is a minimum; MIPS eligible clinicians may elect to report data on more than a continuous 90-day period, including a period of up to the full 12 months of 2017. We note there are special circumstances in which MIPS eligible clinicians may submit data for a period of less than 90 days and avoid a negative MIPS payment adjustment. For example, in some circumstances, MIPS eligible clinicians may meet data completeness criteria for certain quality measures in less than the 90-day period. Also, in instances where MIPS eligible clinicians do not meet the data completeness criteria for quality measures, we will provide partial credit for these measures as discussed in section II.E.6. of this final rule with comment period.
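
    The following sketch (illustrative only; the helper function and constant names are hypothetical) shows how a MIPS eligible clinician might confirm that a chosen reporting window, such as the example periods above, satisfies the minimum continuous 90-day requirement within CY 2017:

    from datetime import date

    CY_START, CY_END = date(2017, 1, 1), date(2017, 12, 31)

    def is_valid_window(start, end):
        # A window qualifies if it is continuous, spans at least 90 days
        # (inclusive of both endpoints), and falls entirely within CY 2017.
        return CY_START <= start <= end <= CY_END and (end - start).days + 1 >= 90

    # The example periods above: categories may use different windows, as long
    # as each window meets the minimum on its own.
    print(is_valid_window(date(2017, 6, 1), date(2017, 8, 30)))    # True (improvement activities example)
    print(is_valid_window(date(2017, 8, 15), date(2017, 11, 13)))  # True (quality example)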

    For groups that elect to utilize the CMS Web Interface or report the CAHPS for MIPS survey, we note that these submission mechanisms utilize certain assignment and sampling methodologies that are based on a 12-month period. In addition, administrative claims-based measures (this includes all of the cost measures and the all-cause readmission measure) are based on an attributed population using the 12-month performance period. Accordingly, we are finalizing at § 414.1320(a)(2) that for purposes of the 2019 MIPS payment year, for data reported through the CMS Web Interface or the CAHPS for MIPS survey and administrative claims-based cost and quality measures, the performance period under MIPS is CY 2017 (January 1, 2017 through December 31, 2017). Please note that, unless otherwise stated, any reference in this final rule with comment period to the “CY 2017 performance period” is intended to be an inclusive reference to all performance periods occurring during CY 2017.

    Additionally, we are finalizing at § 414.1320(b)(1) that for purposes of the 2020 MIPS payment year, the performance period for the quality and cost performance categories is CY 2018 (January 1, 2018 through December 31, 2018). For the improvement activities and advancing care information performance categories, we are finalizing the same approach for the 2020 MIPS payment year that we will have in place for the transition year of MIPS. Specifically, we are finalizing at § 414.1320(b)(2) that for purposes of the 2020 MIPS payment year, the performance period for the improvement activities and advancing care information performance categories is a minimum of a continuous 90-day period within CY 2018, up to and including the full CY 2018 (January 1, 2018 through December 31, 2018).

    We are also finalizing a modification to our proposal, which was to use claims run-out data that are processed within 90 days, if operationally feasible, after the end of the performance period for purposes of assessing performance and computing the MIPS payment adjustment. Specifically, we are finalizing at § 414.1325(f)(2) to use claims with dates of service during the performance period that must be processed no later than 60 days following the close of the performance period for purposes of assessing performance and computing the MIPS payment adjustment.
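
    As an illustrative sketch of the finalized 60-day claims run-out (the helper function is hypothetical and not part of the regulation), the latest processing date for a claim with a date of service during the performance period can be computed as follows:

    from datetime import date, timedelta

    def claims_processing_cutoff(performance_period_end):
        # Claims with dates of service during the performance period must be
        # processed no later than 60 days after the period closes.
        return performance_period_end + timedelta(days=60)

    # Example: a CY 2017 performance period closing December 31, 2017.
    print(claims_processing_cutoff(date(2017, 12, 31)))  # 2018-03-01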

    Lastly, we are finalizing our proposal that individual MIPS eligible clinicians or groups who report less than 12 months of data (due to family leave, etc.) would be required to report all performance data available from the applicable performance period (for example, to any 90-day period).

    5. MIPS Performance Category Measures and Activities

    a. Performance Category Measures and Reporting

    (1) Statutory Requirements

    Section 1848(q)(2)(A) of the Act requires the Secretary to use four performance categories in determining each MIPS eligible clinician's final score under the MIPS: Quality; cost; improvement activities; and advancing care information. Section 1848(q)(2)(B) of the Act, subject to section 1848(q)(2)(C) of the Act, describes the measures and activities that, for purposes of the MIPS performance standards, must be specified under each performance category for a performance period.

    Section 1848(q)(2)(B)(i) of the Act describes the measures and activities that must be specified under the MIPS quality performance category as the quality measures included in the annual final list of quality measures published under section 1848(q)(2)(D)(i) of the Act and the list of quality measures described in section 1848(q)(2)(D)(vi) of the Act used by QCDRs under section 1848(m)(3)(E) of the Act. Under section 1848(q)(2)(C)(i) of the Act, the Secretary must, as feasible, emphasize the application of outcome-based measures in applying section 1848(q)(2)(B)(i) of the Act. Under section 1848(q)(2)(C)(iii) of the Act, the Secretary may also use global measures, such as global outcome measures and population-based measures, for purposes of the quality performance category. Section 1848(q)(2)(B)(ii) of the Act describes the measures and activities that must be specified under the cost performance category as the measurement of cost for the performance period under section 1848(p)(3) of the Act, using the methodology under section 1848(r) of the Act as appropriate, and, as feasible and applicable, accounting for the cost of drugs under Part D.

    Section 1848(q)(2)(C)(ii) of the Act allows the Secretary to use measures from other CMS payment systems, such as measures for inpatient hospitals, for purposes of the quality and cost performance categories, except that the Secretary may not use measures for hospital outpatient departments, other than in the case of items and services furnished by emergency physicians, radiologists, and anesthesiologists. In the proposed rule, we solicited comment on how it might be feasible and when it might be appropriate to incorporate measures from other systems into MIPS for clinicians that work in facilities such as inpatient hospitals. For example, it may be appropriate to use such measures when other applicable measures are not available for individual MIPS eligible clinicians or when strong payment incentives are tied to measure performance, either at the facility level or with employed or affiliated MIPS eligible clinicians.

    Section 1848(q)(2)(B)(iii) of the Act describes the measures and activities that must be specified under the improvement activities performance category as improvement activities under subcategories specified by the Secretary for the performance period, which must include at least the subcategories specified in section 1848(q)(2)(B)(iii)(I) through (VI) of the Act. Section 1848(q)(2)(C)(v)(III) of the Act defines an improvement activity as an activity that relevant eligible clinician organizations and other relevant stakeholders identify as improving clinical practice or care delivery and that the Secretary determines, when effectively executed, is likely to result in improved outcomes. Section 1848(q)(2)(B)(iii) of the Act requires the Secretary to give consideration to the circumstances of small practices (consisting of 15 or fewer professionals) and practices located in rural areas and geographic HPSAs in establishing improvement activities.

    Section 1848(q)(2)(B)(iv) of the Act describes the measures and activities that must be specified under the advancing care information performance category as the requirements established for the performance period under section 1848(o)(2) for determining whether an eligible clinician is a meaningful EHR user.

    As discussed in the proposed rule (81 FR 28173), section 1848(q)(2)(C)(iv) of the Act requires the Secretary to give consideration to the circumstances of non-patient facing MIPS eligible clinicians in specifying measures and activities under the MIPS performance categories and allows the Secretary, to the extent feasible and appropriate, to take those circumstances into account and apply alternative measures or activities that fulfill the goals of the applicable performance category. In doing so, the Secretary is required to consult with non-patient facing professionals.

    Section 101(b) of MACRA amends certain provisions of section 1848(k), (m), (o), and (p) of the Act to generally provide that the Secretary will carry out such provisions in accordance with section 1848(q)(1)(F) of the Act for purposes of MIPS. Section 1848(q)(1)(F) of the Act provides that, in applying a provision of section 1848(k), (m), (o), and (p) of the Act for purposes of MIPS, the Secretary must adjust the application of the provision to ensure that it is consistent with the MIPS requirements and must not apply the provision to the extent that it is duplicative with a MIPS provision.

    We did not request comments on this section, but we did receive a few comments which are summarized below.

    Comment: Some commenters requested that MIPS begin in its most basic structure involving as few measures as possible because practices have little or no experience in these processes and very limited staff, particularly in smaller practices. Another commenter recommended that CMS reduce the number of MIPS measures across the four performance categories. The commenter expressed concern that implementation will be slow due to the time needed to develop relationships with data submission vendors, which will lead to practices being overwhelmed by the number of measures.

    Some commenters suggested that instead of focusing on four performance categories simultaneously, CMS should focus on interoperability and making that functionality fully workable before moving on to the next step.

    One commenter was very concerned that the cumulative effect of four sets of largely separate measures and activities, scoring methodologies, and reporting requirements could result in more administrative work for practices, not less, and encouraged CMS to consider additional ways to reduce the MIPS reporting burden for all practices such as reducing the number of required measures or activities in each MIPS performance category, lowering measure thresholds, establishing consistent definitions (such as for “small practices”) across categories, and providing more opportunities for “partial credit.” Other commenters urged CMS to take every possible step to dramatically simplify provisions and requirements, and to revise and develop practice-focused communications to reduce any remaining perceived complexity.

    Another commenter agreed with the level of flexibility CMS has proposed for MIPS eligible clinicians by allowing them to choose the specific quality performance measures most applicable to their practice and stated that CMS should design the requirements within the performance categories to work in concert with each other to ensure meaningful quality measurement. Some commenters asked if there will be interoperability between the four MIPS performance categories.

    Response: As discussed in section II.E.5.b.(3) of this final rule with comment period, we have decreased the data submission criteria for the quality performance category to a level that reduces burden while still maintaining meaningful measurements at this time. We will continue to assess this approach to improve on this aspect in the future. We appreciate the commenters' request for simplicity and the need for clear communications. We will continue to look for ways to simplify the MIPS program in the future and will work to ensure clear communications with the MIPS eligible clinician community on all of the MIPS provisions. We note that the definition of a small practice is the same across all four performance categories and is consistent with the statute. We have codified the definition of a small practice for MIPS at § 414.1305 as practices consisting of 15 or fewer clinicians and solo practitioners.

    Further, we are required by statute to utilize the four performance categories to determine the final score. We appreciate the support and agree that the goal of the MIPS program is that the four performance categories should work in concert with one another. In addition, as discussed in section II.E.5. of this final rule with comment period, we have modified our policies to have the four performance categories work more in concert with one another.

    Comment: One commenter requested that CMS simplify the MIPS to the extent practicable by further limiting the number of measures reportable under each performance category and refraining from introducing any new and previously untested measures (for example, population-based quality measures).

    Response: In any quality measurement program, we must balance the data collection burden that we must impose on MIPS eligible clinicians with the resulting quality performance data that we will receive. We believe that without sufficiently robust performance data, we cannot accurately measure quality performance. Therefore, we believe that we have appropriately struck a balance between requiring sufficient quality measure data from MIPS eligible clinicians and ensuring robust quality measurement at this time. Regarding the global and population-based measures, we refer the reader to section II.E.5.b.(6) of this final rule with comment period.

    Comment: One commenter stated that CMS appears to view the four MIPS categories as separate but should treat them holistically. The commenter suggested unifying definitions across all MIPS categories, such as the proposed definition of a “small practice” as consisting of 15 or fewer clinicians.

    Response: We are required by statute to utilize the four performance categories to determine the final score. As the program evolves we believe the performance categories will become more streamlined and integrated. The definition of a small practice is the same across all four performance categories and is consistent with the statute. We have codified the definition of a small practice for MIPS at § 414.1305 as practices consisting of 15 or fewer clinicians and solo practitioners.

    Comment: Some commenters suggested combining the improvement activities and advancing care information performance categories.

    Response: Each of these performance categories is statutorily mandated, and we believe each has a distinct role in the MIPS program.

    Comment: Another commenter stated that data and reporting requirements should generally be efficient, strong, and actionable for the purposes of quality improvement, payment, consumer decision-making, and any other areas where they can be useful. Another commenter generally recommended that quality measures in the MIPS program be meaningful, that innovative science should be accommodated when achieving quality aims in areas without measures or therapies, and incentives surrounding cost should reward high-value care, not simply low cost.

    Response: We appreciate the commenters' support.

    We have considered the comments received and will take them into account in the future development of performance feedback through separate notice-and-comment rulemaking.

    (2) Submission Mechanisms

    We proposed at § 414.1325(a) that individual MIPS eligible clinicians and groups would be required to submit data on measures and activities for the quality, improvement activities and advancing care information performance categories. We did not propose at § 414.1325(f) any data submission requirements for the cost performance category and for certain quality measures used to assess performance on the quality performance category and for certain activities in the improvement activities performance category. For the cost performance category, we proposed that each individual MIPS eligible clinician's and group's cost performance would be calculated using administrative claims data. As a result, individual MIPS eligible clinicians and groups would not be required to submit any additional information for the cost performance category. In addition, we would be using administrative claims data to calculate performance on a subset of the MIPS quality measures and the improvement activities performance category, if technically feasible. For this subset of quality measures and improvement activities, MIPS eligible clinicians and groups would not be required to submit additional information. For individual clinicians and groups that are not MIPS eligible clinicians, such as physical therapists, but elect to report to MIPS, we would calculate administrative claims cost measures and quality measures, if data are available. We proposed multiple data submission mechanisms for MIPS as outlined in Tables 1 and 2 in the proposed rule (81 FR 28182) and the final policies identified in Tables 3 and 4 in this final rule with comment period, to provide MIPS eligible clinicians with flexibility to submit their MIPS measures and activities in a manner that best accommodates the characteristics of their practice. We note that other terms have been used for these submission mechanisms in earlier programs and in industry.

    Table 1—Proposed Data Submission Mechanisms for MIPS Eligible Clinicians Reporting Individually as TIN/NPI
    (Performance category: individual reporting data submission mechanisms accepted)
  • Quality: Claims; QCDR; Qualified registry; EHR; Administrative claims (no submission required).
  • Cost: Administrative claims (no submission required).
  • Advancing Care Information: Attestation; QCDR; Qualified registry; EHR.
  • Improvement Activities: Attestation; QCDR; Qualified registry; EHR; Administrative claims (if technically feasible, no submission required).

    Table 2—Proposed Data Submission Mechanisms for Groups
    (Performance category: group reporting data submission mechanisms accepted)
  • Quality: QCDR; Qualified registry; EHR; CMS Web Interface (groups of 25 or more); CMS-approved survey vendor for CAHPS for MIPS (must be reported in conjunction with another data submission mechanism); and Administrative claims (no submission required).
  • Cost: Administrative claims (no submission required).
  • Advancing Care Information: Attestation; QCDR; Qualified registry; EHR; CMS Web Interface (groups of 25 or more).
  • Improvement Activities: Attestation; QCDR; Qualified registry; EHR; CMS Web Interface (groups of 25 or more); Administrative claims (if technically feasible, no submission required).

    We proposed at § 414.1325(d) that MIPS eligible clinicians and groups may elect to submit information via multiple mechanisms; however, they must use the same identifier for all performance categories and they may only use one submission mechanism per performance category. For example, a MIPS eligible clinician could use one submission mechanism for sending quality measures and another for sending improvement activities data, but a MIPS eligible clinician could not use two submission mechanisms for a single performance category, such as submitting three quality measures via claims and three quality measures via registry. We believe the proposal to allow multiple mechanisms, while restricting the number of mechanisms per performance category, offers flexibility without adding undue complexity.
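    For illustration only, the sketch below is a minimal, hypothetical check of the constraints just described: a single identifier for all performance categories, one submission mechanism per performance category, and the CMS-approved CAHPS for MIPS survey vendor as the lone add-on allowed alongside another quality mechanism. It is not CMS software; every name in it is a placeholder.

```python
# Illustrative sketch only (hypothetical names, not a CMS system): checking a
# planned submission against the same-identifier and one-mechanism-per-category
# constraints described at § 414.1325(d).

CAHPS_VENDOR = "CMS-approved survey vendor (CAHPS for MIPS)"

def check_submission_plan(plan):
    """plan maps performance category -> {"identifier": str, "mechanisms": [str, ...]}."""
    errors = []

    # The same identifier (e.g., TIN/NPI for individuals, TIN for groups) must be
    # used across all performance categories.
    identifiers = {entry["identifier"] for entry in plan.values()}
    if len(identifiers) > 1:
        errors.append("All performance categories must use the same identifier.")

    for category, entry in plan.items():
        # The CAHPS survey vendor may accompany one other quality mechanism;
        # otherwise only one mechanism is permitted per performance category.
        mechanisms = [m for m in entry["mechanisms"] if m != CAHPS_VENDOR]
        if len(mechanisms) > 1:
            errors.append(f"{category}: only one submission mechanism is permitted.")

    return errors


example = {
    "quality": {"identifier": "TIN1/NPI1", "mechanisms": ["Qualified registry"]},
    "improvement activities": {"identifier": "TIN1/NPI1", "mechanisms": ["Attestation"]},
    "advancing care information": {"identifier": "TIN1/NPI1", "mechanisms": ["EHR"]},
}
print(check_submission_plan(example))  # [] -> the plan satisfies both constraints
```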

    For individual MIPS eligible clinicians, we proposed at § 414.1325(b), that an individual MIPS eligible clinician may choose to submit their quality, improvement activities, and advancing care information performance category data using qualified registry, QCDR, or EHR submission mechanisms. Furthermore, we proposed at § 414.1400 that a qualified registry, health IT vendor, or QCDR could submit data on behalf of the MIPS eligible clinician for the three performance categories: Quality, improvement activities, and advancing care information. In the proposed rule (81 FR 28280), we expanded third party intermediaries' capabilities by allowing them to submit data and activities for quality, improvement activities, and advancing care information performance categories. Additionally, we proposed at § 414.1325(b)(4) and (5) that individual MIPS eligible clinicians may elect to report quality information via Medicare Part B claims and their improvement activities and advancing care information performance category data through attestation.

    For groups that are not reporting through the APM scoring standard, we proposed at § 414.1325(c) that these groups may choose to submit their MIPS quality, improvement activities, and advancing care information performance category data using qualified registry, QCDR, EHR, or CMS Web Interface (for groups of 25 or more MIPS eligible clinicians) submission mechanisms. Furthermore, we proposed at § 414.1400 that a qualified registry, a health IT vendor that obtains data from a MIPS eligible clinician's CEHRT, or a QCDR could submit data on behalf of the group for the three performance categories: quality, improvement activities, and advancing care information. Additionally, we proposed that groups may elect to submit their improvement activities or advancing care information performance category data through attestation.

    For those MIPS eligible clinicians participating in an APM that uses the APM scoring standard, we refer readers to the proposed rule (81 FR 28234), which describes how certain APM Entities submit data to MIPS, including separate approaches to the quality and cost performance categories for APMs.

    We proposed one exception to the requirement for one reporting mechanism per performance category. Groups that elect to include CAHPS for MIPS survey as a quality measure must use a CMS-approved survey vendor. Their other quality information may be reported by any single one of the other proposed submission mechanisms.

    While we proposed to allow MIPS eligible clinicians and groups to submit data for different performance categories via multiple submission mechanisms, we encouraged MIPS eligible clinicians to submit MIPS information for the improvement activities and advancing care information performance categories through the same reporting mechanism that is used for quality reporting. We believe a single reporting mechanism for all three performance categories for which MIPS eligible clinicians are required to submit data (quality, improvement activities, and advancing care information) would reduce administrative burden and simplify the data submission process for MIPS eligible clinicians. However, we were concerned that not all third party entities would be able to implement the changes necessary to support reporting on all performance categories in the transition year. We solicited comments for future rulemaking on whether we should propose requiring health IT vendors, QCDRs, and qualified registries to have the capability to submit data for all MIPS performance categories.

    As noted in the proposed rule (81 FR 28181), we proposed that MIPS eligible clinicians may, if they choose, report measures and activities using different submission methods for each performance category for the CY 2017 performance period. As we gain experience under MIPS, we anticipate that in future years it may be beneficial for, and reduce burden on, MIPS eligible clinicians and groups to require data for multiple performance categories to come through a single submission mechanism.

    Further, we will be flexible in implementing MIPS. For example, if a MIPS eligible clinician does submit data via multiple submission mechanisms (for example, registry and QCDR), we would score all the measures in each submission mechanism and use the highest performance score for the MIPS eligible clinician or group, as described in the proposed rule (81 FR 28247). However, we would not blend measure results across submission mechanisms. We encourage MIPS eligible clinicians to report data for a given performance category using a single data submission mechanism.
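    As an illustrative sketch only, the snippet below expresses the approach described above: when one performance category arrives via more than one mechanism, each mechanism's submission is scored on its own and the highest score is used, with no blending of measures across mechanisms. The function names are hypothetical, and the scoring function is a simple stand-in, not the MIPS quality scoring methodology.

```python
# Illustrative sketch only: score each mechanism's submission separately and keep
# the highest; never blend measures across mechanisms.

def score_submission(measures):
    """Stand-in for the scoring methodology; returns a single numeric score."""
    return sum(measures.values()) / max(len(measures), 1)

def score_category(submissions_by_mechanism):
    """submissions_by_mechanism maps mechanism name -> {measure_id: points}."""
    per_mechanism = {
        mechanism: score_submission(measures)
        for mechanism, measures in submissions_by_mechanism.items()
    }
    best_mechanism = max(per_mechanism, key=per_mechanism.get)
    return best_mechanism, per_mechanism[best_mechanism]

quality = {
    "registry": {"measure_a": 7.0, "measure_b": 9.0, "measure_c": 5.0},
    "qcdr": {"measure_a": 9.0, "measure_d": 8.0},
}
print(score_category(quality))  # ('qcdr', 8.5) -- the highest-scoring mechanism is used
```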

    Finally, section 1848(q)(1)(E) of the Act requires the Secretary to encourage the use of QCDRs under section 1848(m)(3)(E) of the Act in carrying out MIPS. Section 1848(q)(5)(B)(ii)(I) of the Act requires the Secretary, under the final score methodology, to encourage MIPS eligible clinicians to report on applicable measures with respect to the quality performance category through the use of CEHRT and QCDRs. We note that the proposed rule used the term CEHRT and certified health IT in different contexts. For an explanation of these terms and contextual use within the proposed rule, we refer readers to the proposed rule (81 FR 28256).

    We have multiple policies to encourage the use of QCDRs and CEHRT. In part, we are promoting the use of CEHRT by awarding bonus points in the quality scoring section for measures gathered and reported electronically via the QCDR, qualified registry, CMS Web Interface, or CEHRT submission mechanisms; see the proposed rule (81 FR 28247). By promoting the use of CEHRT through various submission mechanisms, we believe MIPS eligible clinicians have flexibility in implementing electronic measure reporting in a manner which best suits their practice.

    To encourage the use of QCDRs, we have created opportunities for QCDRs to report new and innovative quality measures. In addition, several improvement activities emphasize QCDR participation. Finally, we allow for QCDRs to report data on all MIPS performance categories that require data submission and hope this will become a viable option for MIPS eligible clinicians. We believe these flexible options will allow MIPS eligible clinicians to more easily meet the submission criteria for MIPS, which in turn will positively affect their final score.

    We requested comments on these proposals.

    The following is a summary of the comments we received on our proposals regarding MIPS data submission mechanisms.

    Comment: Several commenters expressed concern that, by providing too many data submission mechanisms and too much reporting flexibility to MIPS eligible clinicians, CMS would be allowing MIPS eligible clinicians to report on arbitrary quality metrics or metrics on which those MIPS eligible clinicians are performing well rather than metrics that reflect areas of needed improvement. The commenters recommended that CMS ensure high standards for final scoring, promote transparency, and enable meaningful comparisons of clinicians' performance for specific services.

    Response: We believe allowing multiple data submission mechanisms is beneficial to the MIPS eligible clinicians as they may choose whichever data submission mechanism works best for their practice. We have provided many data submission options to allow the utmost flexibility for the MIPS eligible clinician. Based on our experience with existing quality reporting programs such as PQRS, we do not believe multiple data submission mechanisms will encourage MIPS eligible clinicians to report on arbitrary quality metrics or metrics on which those MIPS eligible clinicians are performing well versus metrics that reflect areas of needed improvement. We will monitor measure selection and performance through varying data submission mechanisms as we implement the program. However, we agree with commenters that measuring meaningful quality measures and encouraging improvement in the quality of care are important goals of the MIPS program. As such, we will monitor whether data submission mechanisms are allowing MIPS eligible clinicians to focus only on metrics where they are already performing well and will address any modifications needed to our policies based on these monitoring efforts in future rulemaking.

    Comment: Another commenter supported the requirement to use only one submission mechanism per performance category. Other commenters appreciated that CMS is allowing MIPS eligible clinicians to choose data submission options that vary by performance category.

    Response: We agree with the commenters and appreciate the support. We are finalizing the policy as proposed of requiring MIPS eligible clinicians to submit all performance category data for a specific performance category via the same data submission mechanism. In addition, we are finalizing the policy to allow MIPS eligible clinicians to submit data using differing submission mechanisms across different performance categories. We refer readers to section II.E.5.a.(2) of this final rule with comment period where we discuss our approach for the rare situations where a MIPS eligible clinician submits data for a performance category via multiple submission mechanisms (for example, submits data for the quality performance category through a registry and QCDR), and how we score those MIPS eligible clinicians. We further note that in that section we are seeking comment for further consideration on different approaches for addressing this scenario.

    Comment: Another commenter sought clarification as to whether MIPS eligible clinicians may use more than one data submission method per performance category. The commenter recommended the use of multiple data submission methods across performance categories because there are currently significant issues with extracting clinical data from EHRs to provide to a third party for calculation. The commenter believed that requiring a single submission method may force MIPS eligible clinicians to submit inaccurate data that does not reflect actual performance.

    Response: As noted in this final rule with comment period, MIPS eligible clinicians will have the flexibility to choose different submission mechanisms across different performance categories; for example, a MIPS eligible clinician could utilize a registry to submit data for the quality performance category and CEHRT for the advancing care information performance category. MIPS eligible clinicians will need to choose, however, one submission mechanism per performance category, except for MIPS eligible clinicians who elect to report the CAHPS for MIPS survey, which must be reported via a CMS-approved survey vendor in conjunction with another submission mechanism for all other quality measures. As discussed in this section of this final rule with comment period, we are finalizing a policy that allows MIPS eligible clinicians to report for a minimum of 90 consecutive days within CY 2017 for the majority of submission mechanisms. We believe this allows adequate time for those MIPS eligible clinicians who are not already successfully reporting quality measures meaningful to their practice via CEHRT under the EHR Incentive Program and/or PQRS to evaluate their options and select the measures and a reporting mechanism that will work best for their practice. We will provide subregulatory guidance for MIPS eligible clinicians who encounter issues with extracting clinical data from EHRs.

    Comment: A few commenters recommended that CMS reduce complexity by reducing the number of available reporting methods as health IT reduces the need to retain claims- and registry-based reporting in the program. Other commenters supported the use of electronic data reporting mechanisms but, citing the complexity of MIPS, were concerned that using claims data submission for quality measures may place MIPS eligible clinicians at a disadvantage because of the significant lag between the performance period and performance feedback.

    Response: We appreciate the commenters' feedback. We agree that the use of health IT in the future will reduce our reliance on non-IT methods of reporting, such as claims. We do believe, however, that we cannot eliminate submission mechanisms such as claims until broader adoption of health IT and registries occurs. Therefore, we do intend to finalize both the claims and registry submission mechanisms. We also refer readers to section II.E.8.a. for final policies regarding performance feedback.

    Comment: Some commenters expressed appreciation for our proposal to continue claims-based reporting for the quality performance category because this is the most convenient method for hospital-based clinicians. The commenters explained that hospital-based MIPS eligible clinicians must use the EHRs of the hospitals in which they practice, which may limit the capabilities of these EHRs for reporting measures. Other commenters requested that CMS ensure that the option for claims reporting was available to all MIPS eligible clinicians, noting that there was only one anesthesia-related quality measure available for reporting via registry. Under such circumstances, the commenters asked CMS to ensure that MIPS did not impose excessive time and cost burdens on MIPS eligible clinicians by forcing them to use a different submission mechanism. Another commenter noted that preserving the claims-based reporting option will help emergency medicine practices that have relied on this reporting option in the past make the transition to the new MIPS requirements. The commenter noted the additional administrative burden associated with registry reporting, including registration fees.

    Response: We appreciate the commenters' support. We do note that we intend to reduce the number of claims-based measures in the future as more measures are available through health IT mechanisms such as registries, QCDRs, and health IT vendors, but we understand that many MIPS eligible clinicians still submit these types of measures. We believe claims-based measures are a necessary option to minimize reporting burden for MIPS eligible clinicians at this time. We intend to work with MIPS eligible clinicians and other stakeholders to continue improving available measures and reporting methods for MIPS. In addition, we are finalizing policies that offer MIPS eligible clinicians substantial flexibility and sustain proven pathways for successful participation. Those MIPS eligible clinicians who are not already successfully reporting quality measures meaningful to their practice via one of these pathways will need to evaluate the options available to them and choose which available reporting mechanism and measures they believe will work best for their practice.

    Comment: A few commenters recommended that more quality measures be made available for reporting via claims or EHRs, noting that there were more quality measures available for reporting by registry compared with EHRs or claims. The commenters stated that this will push clinicians to sign up with registries, undercuts full use of EHRs, and only serves the interests of organizations that manage registries.

    Response: We appreciate the commenters' concern and are working with measure developers to develop more measures that are electronically based. We refer the commenters to the Measure Development Plan for more information at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf.

    Additionally, in section II.E.9.(b). of this final rule with comment period, we have expanded health IT vendors' opportunities by allowing health IT vendors to submit data on measures, activities, or objectives for any of the following MIPS performance categories: (i) Quality; (ii) improvement activities; or (iii) advancing care information. A health IT vendor submitting data on behalf of a MIPS eligible clinician or group would be required to obtain data from the MIPS eligible clinician's certified EHR technology, but would be able to submit the same information that a qualified registry is able to submit. Therefore, we do not believe there is a disparity between health IT vendors' and qualified registries' quality data submission capabilities.

    Comment: Other commenters stated that the use of CEHRT in all areas of the MIPS program should be required rather than just encouraged. The commenters stated that the use of CEHRT is required for participation in the Meaningful Use EHR Incentive Programs, is vitally important for ensuring successful interoperability, and is already part of the definition of a Meaningful EHR User for MIPS.

    Response: We do not believe it is appropriate to require CEHRT in all areas of the MIPS program, as many MIPS eligible clinicians may not have had past experience relevant to the performance categories and the use of EHR technology because they were not previously eligible to participate in the Medicare EHR Incentive Program. The restructuring of program requirements described in this final rule with comment period is geared toward increasing participation and EHR adoption. We believe this is the most effective way to encourage the adoption of CEHRT and introduce new MIPS eligible clinicians to the use of certified EHR technology and health IT overall. As discussed in section II.E.6.a.(2)(f) of this final rule with comment period, we are promoting the use of CEHRT by awarding bonus points in the quality scoring section for measures gathered and reported electronically via the QCDR, qualified registry, CMS Web Interface, or CEHRT submission mechanisms. By promoting use of CEHRT through various submission mechanisms, we believe MIPS eligible clinicians have flexibility in implementing electronic reporting in a manner which best suits their practice.

    Comment: One commenter requested information on how non-Medicare payers would route claims data to CMS for purposes of considering cost performance category data.

    Response: All measures used under the cost performance category would be derived from Medicare administrative claims data submitted for billing on Part B claims by MIPS eligible clinicians; as a result, participation would not require use of a separate data submission mechanism. Please note that the cost performance category is being reweighted to zero for the transition year of MIPS. Refer to section II.E.5.e. of this final rule with comment period for more information on the cost performance category.

    Comment: Other commenters requested clarification on the difference between “claims” and “administrative claims” as reporting methods, citing slides 24 and 39 of the May 10th Quality Payment Program presentation, available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Quality-Payment-Program.html. The commenters were confused because “claims” was listed as a method of reporting but it was stated that “administrative claims” will not require submission.

    Response: The “claims” submission mechanism refers to those quality measures as described in section II.E.5.b.(6). of this final rule with comment period. The claims submission mechanism requires MIPS eligible clinicians to append certain billing codes to denominator eligible claims to indicate to us the required quality action or exclusion occurred. Conversely, the administrative claims submission mechanism refers to those measures described in section II.E.5.b. for the quality performance category and section II.E.5.e. for the cost performance category of this final rule with comment period. Administrative claims submissions require no separate data submission to CMS. Rather, we calculate these measures based on data available from MIPS eligible clinicians' billings on Medicare Part B claims.
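    As an illustrative sketch only, the snippet below contrasts the two mechanisms described in this response: under the claims mechanism the clinician appends a quality data code to a denominator-eligible Part B claim, while under administrative claims CMS derives the measure from claims already billed, with no separate submission. The procedure and quality data codes shown are hypothetical placeholders, not actual code sets.

```python
# Illustrative sketch only (hypothetical codes and data structures).

DENOMINATOR_CODES = {"99213", "99214"}   # placeholder encounter codes
QUALITY_DATA_CODE = "G0000"              # placeholder quality data code

def append_quality_data_code(claim):
    """Claims mechanism: the clinician appends a quality data code to a
    denominator-eligible claim to indicate the quality action or exclusion."""
    if DENOMINATOR_CODES & set(claim["procedure_codes"]):
        claim["procedure_codes"].append(QUALITY_DATA_CODE)
    return claim

def administrative_claims_measure(claims):
    """Administrative claims: the measure is derived from claims already billed;
    the clinician submits nothing extra."""
    eligible = [c for c in claims if DENOMINATOR_CODES & set(c["procedure_codes"])]
    met = [c for c in eligible if c.get("outcome_met")]
    return len(met) / len(eligible) if eligible else None

claims = [
    {"procedure_codes": ["99213"], "outcome_met": True},
    {"procedure_codes": ["99214"], "outcome_met": False},
]
print(administrative_claims_measure(claims))  # 0.5 -- computed from billing data alone
```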

    Comment: Other commenters stated that some of the measures and activities, such as the CAHPS for MIPS survey, were dependent on third party intermediaries, over which practices have little control. The commenters recommended that CMS reduce requirements that are outside of the practice's control.

    Response: We believe the MIPS program has a broad span of measures and activities from which to choose. There are many measures and activities that are not dependent on a third party intermediary. We encourage MIPS eligible clinicians to report the measures and activities that are most meaningful to their practice.

    Comment: Another commenter stated that if CMS were to require vendors to have the capability to submit data for all performance categories, a vendor would need adequate time to implement any required changes going forward, would need CMS to produce implementation guides for 2017 reporting as soon as possible with the capability to ask CMS clarifying questions, and would need a testing tool no later than the 3rd quarter. Several commenters did not support the proposed requirement that vendors have the capability to submit data for all MIPS performance categories. The commenters stated many product developers and product or service vendors have developed solutions tailored to specific areas of healthcare quality and performance improvement. The commenters stated that given the breadth of the proposed MIPS requirements, CMS should not require health IT companies to have the capability to submit information for all four MIPS performance categories because this task may be outside of their organizational and client priorities. Another commenter stated that while they appreciate CMS' attempts to reduce administrative burden they have a concern that third party entities will not be able to implement the necessary changes to support reporting on all performance categories in the transition year. In addition, the commenter was concerned that the additional cost of creating this functionality will be passed on to MIPS eligible clinicians in the form of higher fees for using those products and services. The commenter urged CMS to work with health IT developers, vendors, and other data intermediaries to ensure that data products and services evolve as CMS's policies evolve and to ensure adequate advanced notice of upcoming changes so that MIPS eligible clinicians will not be penalized for failing to report data the third party intermediary's technology was not updated to collect.

    Response: We would like to explain that we are not finalizing a requirement that a third party intermediary submitting data on behalf of a MIPS eligible clinician or group must become qualified to submit data for multiple MIPS performance categories, nor are we finalizing a certification requirement for submission of data. We are instead finalizing specific requirements for QCDRs related to quality data submission, and requiring that a health IT vendor or other authorized third party intermediary submitting data for any or all of the MIPS performance categories on behalf of a MIPS eligible clinician or group meet the form and manner requirements for each submission method. We direct readers to section II.E.9.b. of this final rule with comment period for further discussion of health IT vendors and other authorized third party intermediaries, and to section II.E.9.a. for further discussion of submission requirements for QCDRs.

    Comment: Another commenter stated that the CMS Web Interface should have fewer down times during the first quarter submission period, following the performance period, to compensate for MIPS eligible clinicians' need to submit their files.

    Response: We intend to make every effort to keep the CMS Web Interface from having down times during the first quarter submission period. In some instances, down times are required to account for necessary system maintenance within CMS. When these down times do occur, we make every effort to ensure that the down times do not occur near final submission deadlines and to notify all groups and impacted parties well in advance so they can account for these down times during the data submission period.

    Comment: One commenter encouraged utilizing EHRs and claims to collect quality measure data whenever possible.

    Response: We agree with utilizing EHRs whenever possible and encourage the use of EHRs to collect quality measure data whenever possible. However, we intend to reduce the number of claims-based measures in future years, though we note that many MIPS eligible clinicians still submit these types of measures. We believe claims-based measures are a necessary option to minimize reporting burden for MIPS eligible clinicians. We intend to work with MIPS eligible clinicians and other stakeholders to continue improving available measures and reporting methods for MIPS.

    Comment: One commenter expressed concern that multi-specialty groups reporting through a QCDR would face challenges if multiple specialties wanted to report non-MIPS measures. This commenter believed this would require reporting via two different submission mechanisms.

    Response: QCDRs are able to report both non-MIPS measures and MIPS measures. They are provided a great deal of flexibility and should be able to report for multiple specialties.

    Comment: Another commenter requested clarity regarding the submission mechanisms for a group. The commenter sought flexibility to use the most appropriate submission mechanism for each of the performance categories. Another commenter suggested continuing 2017 reporting via CMS Web Interface for groups. The commenter stated that at a minimum, the CMS Web Interface reporting and EHR direct reporting should be maintained.

    Response: Please refer to the final submission mechanisms in Tables 3 and 4 of this final rule with comment period for the available submission mechanisms for all MIPS eligible clinicians.

    Comment: Another commenter expressed concern that CMS proposed to allow measures which are available to report via EHR technology to be reported via a QCDR, because the commenter believed this would result in unnecessary burden as practices would be required to seek another data submission vendor beyond their EHR vendor. The commenter recommended that CMS allow MIPS eligible clinicians to report quality measures and improvement activities using their certified EHR technology.

    Response: MIPS eligible clinicians will have the flexibility to submit their quality measures and improvement activities using their certified EHR technology. The health IT vendor would need to meet the requirements as described in section II.E.9.b. of this final rule with comment period to offer this flexibility to their clients.

    Comment: A few commenters agreed with the proposal to allow third party submission entities, such as QCDRs and qualified registries, to submit data for the performance categories of quality, advancing care information, and improvement activities. The commenters believed that allowing MIPS eligible clinicians to use a single, third party data submission method reduces the administrative burden on MIPS eligible clinicians, facilitates consolidation and standardization of data from disparate EHRs and other systems, and enables the third parties to provide timely, actionable feedback to MIPS eligible clinicians on opportunities for improvement in quality and value. Other commenters agreed with the proposals that encourage the use of QCDRs because QCDRs are able to quickly implement new quality measures to assist MIPS eligible clinicians with accurately measuring, reporting, and taking action on data most meaningful to their practices. Another commenter stated that vendors and QCDRs should have the capability to submit data for all MIPS performance categories. The commenter believed that working through a single vendor is the only way to provide a full picture of overall performance.

    Response: We thank the commenters for their support.

    Comment: A few commenters expressed support for the Quality Payment Program's approach of streamlining the PQRS, VM, and EHR Incentive Program into MIPS and encouraged CMS to continue to allow existing data reporting tools to report MIPS quality data, noting that hospitals have already made significant investments in existing reporting tools. Other commenters supported the option to use a single reporting mechanism under MIPS. The commenters considered this a positive development, and one that would be attractive to many groups and hospitals. Some commenters noted that CMS offers significant flexibility across performance category reporting options and supported the proposal to accept data submissions from multiple mechanisms. The commenters urged CMS to retain this flexibility in future years and to hold QCDRs and other vendors accountable for offering MIPS reporting capabilities across all performance categories. One commenter was pleased that CMS is allowing flexibility in measure selection, reporting via any reporting mechanism, and reporting as an individual or a group. Another commenter supported the proposal allowing MIPS eligible clinicians who are in a group to report on MIPS either as part of the group or individually, stating that this flexibility would allow clinicians in low performing groups the opportunity to reap the benefits of their higher individual performance. Other commenters were very supportive of the use of bonus points in the quality performance category to encourage the use of CEHRT and electronic reporting of CQMs.

    Response: We thank the commenters for their support on the various approaches. We would like to explain that groups must report either entirely as a group or entirely as individuals; a group may not have some clinicians report individually while the rest report with the group. Groups must decide to report as a group across all four performance categories.

    Comment: Another commenter recommended that CMS adopt a clear, straightforward, and prospective process for practices to determine whether a MIPS performance category applies to their particular specialty and subspecialty.

    Response: We agree with the commenter and are working to establish educational tools and materials that will clearly indicate to MIPS eligible clinicians their requirements based on their specialty or practice type.

    Comment: One commenter urged CMS to offer a quality and cost performance category measure reporting option in which hospital-based MIPS eligible clinicians can use the hospital's measure performance under CMS hospital quality programs for purposes of MIPS.

    Response: We appreciate the feedback and will take it into consideration for future rulemaking. We also note that in the Appendix in Table C of this final rule with comment period we have created a specialty-specific measure set for hospitalists.

    Comment: Another commenter recommended that CMS and HRSA collaborate to develop a data submission mechanism that would allow MIPS eligible clinicians practicing in FQHCs to submit quality data one time for both MIPS and Uniform Data System (UDS).

    Response: We intend to address this option in the future through separate notice-and-comment rulemaking.

    Comment: Some commenters supported the proposed data submission mechanisms and the proposal that MIPS eligible clinicians and groups must use the same mechanism to report for a given performance category with the exception of those reporting the CAHPS for MIPS survey.

    Response: We thank the commenters for their support.

    Comment: Other commenters agreed with the proposal to maintain a manual attestation portal option for some of the performance categories. The commenters believed that this option provided MIPS eligible clinicians with an option of consolidating and submitting data on their own, which for some may reduce their overall cost to participate. The commenters recommended that this option remain in place for the future, but that if CMS decided to remove it, they provide EHR vendors at least 18 months' notice to develop and deploy data submission mechanisms.

    Response: We appreciate the support and will take the feedback into consideration in the future.

    Comment: Another commenter encouraged CMS to ensure that the reporting requirements for MIPS are aligned with each of the American Board of Medical Specialties (ABMS) Member Board's requirements for Maintenance of Certification, particularly activities required to fulfill Part IV: Improvement in Medical Practice.

    Response: We align our quality efforts where possible. We intend to continue to receive input from stakeholders, including ABMS, in the future.

    Comment: One commenter suggested that CMS ensure that the MIPS reporting process is simple to understand, conducive to automated reporting and clinically relevant.

    Response: We believe we have made the reporting process as flexible and simple as possible for the MIPS program at this time. We have provided several data submission mechanisms, activities, and measures for MIPS eligible clinicians to choose from. We intend to continue to work to improve the program in the future as we gain experience under the Quality Payment Program.

    Comment: Another commenter was appreciative that CMS outlined a data validation and auditing process in the proposed rule. The commenter requested more details about implementation, including CMS' timeline for providing performance reports to MIPS eligible clinicians.

    Response: We thank the commenter for their support. We refer readers to section II.E.8.e. of this final rule with comment period for information on data validation and to section II.E.8.a. for information on performance feedback.

    Comment: A few commenters urged CMS to integrate patient and family caregiver perspectives as part of Quality Payment Program development. The commenters noted that value and quality are often perceived through “effectiveness” and “cost” whereas the patient typically prioritizes outcomes beyond clinical measures.

    Response: We agree that the patient and family caregiver perspective is important, but note that we would expect patients and caregivers to prioritize successful health outcomes. We are finalizing the policy that the CAHPS for MIPS survey would count as a patient experience measure which is a type of high priority measure. In addition, a MIPS eligible clinician may be awarded points under the improvement activities performance category as the CAHPS for MIPS survey is included in the Patient Safety and Practice Assessment subcategory.

    Comment: One commenter expressed concern that no measures exist that are useful to MIPS eligible clinicians working in multiple settings with diverse patient populations.

    Response: We believe the MIPS program has a broad span of measures and activities from which to choose. There are many measures and activities that are applicable to multiple treatment facility types and diverse patient populations. We encourage MIPS eligible clinicians to report the measures and activities that are most meaningful to their practice.

    Comment: One commenter stated that CMS should clarify the reporting options for nephrologists who practice in multiple settings. The commenter urged CMS to provide illustrative examples of options for nephrologists based on actual sample clinical practices.

    Response: The final data submission options for all MIPS eligible clinicians are outlined in this final rule with comment period in Tables 3 and 4. We intend to provide further subregulatory guidance and training opportunities for all MIPS eligible clinicians in the future. In addition, the MIPS eligible clinician may reach out to the Quality Payment Program Service Center with any questions.

    Comment: Other commenters recommended that CMS not amend the technical specifications for eCQMs until MIPS eligible clinicians are required to transition to 2015 Edition CEHRT to report data for MIPS. In addition, the commenters requested that CMS maintain the eMeasure versions issued with the EHR Incentive Program Stage 2 final rule until that transition point. The commenters noted that by delaying any changes to eCQM measures until 2018, CMS will give the health IT industry and MIPS eligible clinicians the necessary time to adapt to new reporting demands and respond appropriately to new specifications.

    Response: We understand the concern that implementers need adequate time to adapt to new reporting requirements. Therefore, we did not make major amendments to the technical standards for eCQMs. We have updated measure specifications for various eCQMs to align with current clinical guidelines; however, this alignment should not impact technical standards and certification requirements. We plan to keep the EHR community updated to allow implementers the necessary time to adopt any new standards required to report eCQMs in the future.

    Comment: One commenter recommended that technologies such as the CMS Web Interface be available for submission of all data, not just the quality performance category.

    Response: We appreciate the feedback and note that we are expanding the ability of the CMS Web Interface to be used for submissions on improvement activities, advancing care information, and quality performance categories.

    Comment: Another commenter stated that the avenue for reporting different measures requires careful consideration because there are appropriate avenues of reporting depending upon different measure types. The commenter stated that this should be taken into consideration during measure development.

    Response: We appreciate the feedback and will take this suggestion into consideration in the future.

    Comment: One commenter supported allowing groups to utilize a CMS‐approved survey vendor for CAHPS for MIPS survey data collection in conjunction with another data submission mechanism. Another commenter proposed expanding the survey option in the future to include a CMS‐approved survey vendor for CAHPS for MIPS survey data collection for MIPS eligible clinicians reporting individually.

    Response: We would like to note that when a MIPS eligible clinician utilizes the CAHPS for MIPS survey, they must also utilize another data submission mechanism in conjunction with it. We will take into consideration the suggestion of expanding the survey option to individuals in the future.

    Comment: One commenter believed that CMS could simplify MIPS reporting by streamlining the number of submission methods and focusing on the options that are most appropriate for each performance category. The commenter recommended the following options: (1) Quality: EHR Direct, QCDR, Qualified Registry, CMS Web Interface, remove Claims; (2) Cost: Claims; (3) Improvement Activities: Attestation, Claims, EHR Direct, QCDR, qualified registry, and CMS Web Interface; (4) Advancing care information: Attestation, EHR Direct, remove QCDR, remove qualified registry, and remove CMS Web Interface.

    Response: We appreciate the feedback, as we are striving to balance simplicity with flexibility. We believe that having numerous data submission mechanisms available for selection reduces burden on MIPS eligible clinicians. The data submission options for all MIPS eligible clinicians are outlined in Tables 3 and 4 of this final rule with comment period.

    Comment: Some commenters opposed the lack of transparency of the claims-based quality and cost performance category measures. The commenters recommended that CMS make the claims-based attribution of patients and diagnoses fully transparent to MIPS eligible clinicians and beneficiaries. They suggested CMS modify these measures so they accurately reflect each MIPS eligible clinician's contribution to quality and resource utilization.

    Response: We appreciate the feedback and will take the suggestions into consideration in the future. We would like to note that information regarding claims-based quality and cost performance category measures can be found in the Appendix of this final rule with comment period under Table A through Table G under the “data submission method” tab. In addition, claims-based quality measures information may be found at QualityPaymentProgram.cms.gov.

    Comment: Another commenter recommended that CMS consider allowing MIPS eligible clinicians to report across multiple QCDRs because allowing MIPS eligible clinicians to report through multiple QCDRs would permit the specificity of reporting required for diverse specialties, but without increasing the IT integration burden on MIPS eligible clinicians who might already be reporting through these registries.

    Response: Many QCDRs charge their participants for collecting and reporting data. Not only might this increase the cost to MIPS eligible clinicians, but it would make the calculation of the quality score that much more cumbersome and prone to error. Errors that could occur include incorrect submission of TIN or NPI information, incomplete data for one or more measures, etc. We note, however, that MIPS eligible clinicians do have the flexibility to submit data using different submission mechanisms across the different performance categories. For example, one QCDR could report the advancing care information performance category for a particular MIPS eligible clinician, and that MIPS eligible clinician could use another QCDR to report the quality performance category.

    Comment: One commenter requested that CMS clearly state the reporting requirements for each quality reporting mechanism. The commenter noted that MIPS eligible clinicians who elect to submit four eCQMs would submit that data through a QCDR, qualified registry, or EHR using the certified QRDA standard, and would then be restricted in their ability to use the attestation mechanism for the remaining two quality measures if they elect to submit non-eCQMs that do not require certification. The commenter agreed that not all submitted measures need to be eCQMs, but believed CMS needed to provide greater clarity on handling such a scenario and wanted CMS to consider the submission mechanism's ability to submit data using a single standard.

    Response: The quality data submission criteria are described in section II.E.5.a.(2) of this final rule with comment period. We would like to explain that attestation is not a submission mechanism allowed for the quality performance category; it is allowed only for the improvement activities and advancing care information performance categories. Additionally, we are finalizing our policy that MIPS eligible clinicians would need to submit data for a given performance category via only one submission mechanism. We refer readers to section II.E.5.a.(2) of this final rule with comment period where we discuss our approach for the rare situations where a MIPS eligible clinician submits data for a performance category via multiple submission mechanisms (for example, submits data for the quality performance category through a registry and a QCDR), and how we score those MIPS eligible clinicians. We further note that in that section we are seeking comment for further consideration on different approaches for addressing this scenario.

    Comment: Some commenters agreed with the proposal of using submission methods already available in the current PQRS program because this allows QCDRs to focus on the creation of measures and adapting to final MIPS rule rather than on the submission process itself.

    Response: We appreciate the commenters' support.

    Comment: Several commenters noted they support the CMS goals of patient-centered health care, and the aim of the MIPS program for evidence-based and outcome-driven quality performance reporting. These commenters appreciated that the flexibility allowed in the MIPS program, including the variety of reporting options, is intended to meet the needs of the wide variety of MIPS eligible clinicians. The commenters believed, however, that the variety of reporting options can easily create confusion due to the increased number of choices and methods. Such confusion will be challenging in general, but could be especially problematic for 2017, given the short time to prepare. One commenter suggested that technical requirements for reporting options should be incorporated into CEHRT, and not added through subregulatory guidance. Another commenter stated that there are too many reporting options, and the number of options should be reduced.

    Response: We appreciate the commenters' support. We have provided several data submission mechanisms to allow flexibility for the MIPS eligible clinician. It is important to note that substantive aspects of technical requirements for reporting options incorporated into CEHRT have been addressed in section II.E.g. of this final rule with comment period. However, we intend to issue subregulatory guidance regarding further details on the form and manner of EHR submission.

    Comment: One commenter recommended CMS allow each specialty group within a multi-specialty practice to report its own group data file. The commenter suggested that if this cannot be done under a single TIN, then CMS should explicitly encourage multi-specialty practices that wish to report specialty-specific measure sets and improvement activities at the group level to register each specialty group under a different TIN for identification purposes. The commenter recognized that there may be operational challenges to implementing this recommendation and is willing to work with CMS and its vendors to develop the framework for the efficient collection and calculation of multiple data files for a single MIPS performance category from a group.

    Response: We appreciate the commenters' recommendation and will take it into consideration in future rulemaking. We refer readers to section II.E.1.e. of this final rule with comment period for more information on groups.

    After consideration of the comments on our proposals regarding the MIPS data submission mechanisms, we are modifying the data submission mechanisms at § 414.1325. We will not be finalizing the data submission mechanism of administrative claims for the improvement activities performance category, as it is not technically feasible at this time. All other data submission mechanisms will be finalized as proposed. Specifically, we are finalizing at § 414.1325(a) that MIPS eligible clinicians and groups must submit measures, objectives, and activities for the quality, improvement activities, and advancing care information performance categories.

    Refer to Tables 3 and 4 of this final rule with comment period for the finalized data submission mechanisms. Table 3 contains a summary of the data submission mechanisms for individual MIPS eligible clinicians that we are finalizing at § 414.1325(b) and (e). Table 4 contains a summary of the data submission mechanisms for groups that are not reporting through an APM that we are finalizing at § 414.1325(c) and § 414.1325(e). Furthermore, we are finalizing our proposal at § 414.1325(d) that except for groups that elect to report the CAHPS for MIPS survey, MIPS eligible clinicians and groups may elect to submit information via multiple mechanisms; however, they must use the same identifier for all performance categories and they may only use one submission mechanism per performance category. In addition, we are finalizing at § 414.1305 the following definitions as proposed: (1) Attestation means a secure mechanism, specified by CMS, with respect to a particular performance period, whereby a MIPS eligible clinician or group may submit the required data for the advancing care information or the improvement activities performance categories of MIPS in a manner specified by CMS; (2) CMS-approved survey vendor means a survey vendor that is approved by CMS for a particular performance period to administer the CAHPS for MIPS survey and to transmit survey measures data to CMS; and (3) CMS Web Interface means a web product developed by CMS that is used by groups that have elected to utilize the CMS Web Interface to submit data on the MIPS measures and activities.

    Table 3—Data Submission Mechanisms for MIPS Eligible Clinicians Reporting Individually as TIN/NPI
    (Performance category: individual reporting data submission mechanisms accepted)
  • Quality: Claims; QCDR; Qualified registry; EHR.
  • Cost: Administrative claims (no submission required).
  • Advancing Care Information: Attestation; QCDR; Qualified registry; EHR.
  • Improvement Activities: Attestation; QCDR; Qualified registry; EHR.

    Table 4—Data Submission Mechanisms for Groups
    (Performance category: group reporting data submission mechanisms accepted)
  • Quality: QCDR; Qualified registry; EHR; CMS Web Interface (groups of 25 or more); CMS-approved survey vendor for CAHPS for MIPS (must be reported in conjunction with another data submission mechanism); and Administrative claims (for the all-cause hospital readmission measure—no submission required).
  • Cost: Administrative claims (no submission required).
  • Advancing Care Information: Attestation; QCDR; Qualified registry; EHR; CMS Web Interface (groups of 25 or more).
  • Improvement Activities: Attestation; QCDR; Qualified registry; EHR; CMS Web Interface (groups of 25 or more).

    (3) Submission Deadlines

    For the submission mechanisms described in the proposed rule (81 FR 28181), we proposed a submission deadline whereby all associated data for all performance categories must be submitted. In establishing the submission deadlines, we took into account multiple considerations, including the type of submission mechanism, the MIPS performance period, and stakeholder input and our experiences under the submission deadlines for the PQRS, VM, and Medicare EHR Incentive Programs.

    Historically, under the PQRS, VM, or Medicare EHR Incentive Programs, the submission of data occurred after the close of the performance periods. Our experience has shown that allowing for the submission of data after the close of the performance period provides either the MIPS eligible clinician or the third party intermediary time to ensure the data they submit to us is valid, accurate and has undergone necessary data quality checks. Stakeholders have also stated that they would appreciate the ability to submit data to us on a more frequent basis so they can receive feedback more frequently throughout the performance period. We also note that, as described in the proposed rule (81 FR 28179), the MIPS performance period for payments adjusted in 2019 is CY 2017 (January 1 through December 31).

    Based on the factors noted, we proposed at § 414.1325(e) that the data submission deadline for the qualified registry, QCDR, EHR, and attestation submission mechanisms would be March 31 following the close of the performance period. We anticipate that the submission period would begin January 2 following the close of the performance period. For example, for the first MIPS performance period, the data submission period would occur from January 2, 2018, through March 31, 2018. We note that this submission period is the same time frame as what is currently available to EPs and group practices under PQRS. We were interested in receiving feedback on whether it would be advantageous to (1) have a shorter time frame following the close of the performance period; (2) have a submission period that would occur throughout the performance period, such as bi-annual or quarterly submissions; and (3) also include January 1 in the submission period. We requested comments on these items.

    We further proposed that for the Medicare Part B claims submission mechanism, data submission would occur during the performance period, with claims required to be processed no later than 90 days following the close of the performance period. Lastly, for the CMS Web Interface submission mechanism, we proposed that the submission period would occur during an 8-week period following the close of the performance period that would begin no earlier than January 1 and end no later than March 31. For example, the CMS Web Interface submission period could span an 8-week timeframe beginning January 16 and ending March 13. The specific deadline during this timeframe will be published on the CMS Web site.
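    As an illustrative sketch only, the snippet below expresses the two proposed timing rules: Part B claims must be processed within 90 days of the close of the CY 2017 performance period, and the CMS Web Interface window is an 8-week span that opens no earlier than January 1 and closes no later than March 31. The function names and example dates are hypothetical, drawn from the examples given above.

```python
# Illustrative sketch only (hypothetical names): proposed submission timing checks.

from datetime import date, timedelta

PERFORMANCE_PERIOD_END = date(2017, 12, 31)

def claim_processed_in_time(processed_on):
    """Part B claims mechanism: processing must occur within 90 days of period close."""
    return processed_on <= PERFORMANCE_PERIOD_END + timedelta(days=90)

def valid_web_interface_window(start, end):
    """CMS Web Interface: an 8-week window between January 1 and March 31, 2018."""
    return (
        end - start == timedelta(weeks=8)
        and start >= date(2018, 1, 1)
        and end <= date(2018, 3, 31)
    )

print(claim_processed_in_time(date(2018, 3, 15)))                        # True
print(valid_web_interface_window(date(2018, 1, 16), date(2018, 3, 13)))  # True
```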

    We requested comments on these proposals.

    The following is a summary of the comments we received on our proposals regarding MIPS submission deadlines.

    Comment: One commenter requested clarity on the first reporting deadline.

    Response: The first proposed submission period for the qualified registry, QCDR, EHR, and attestation submission mechanisms runs from January 2, 2018, through March 31, 2018, with a final submission deadline of March 31, 2018. For the CMS Web Interface submission mechanism, the first proposed submission period will occur during an 8-week period following the close of the performance period that will begin no earlier than January 1 and end no later than March 31 (for example, January 16 through March 13, 2018). The specific deadline during this timeframe will be published on the CMS Web site.

    Comment: Several commenters supported the data submission deadline of March 31 of the year following the performance period. The commenters also suggested that more frequent submissions could be useful, but only if data are easy to submit. Another commenter recommended that CMS not make more frequent data submission a requirement, but allow reporters to submit data on a more frequent basis if they so choose. The commenter saw benefit in more frequent data submission, but stated that there are some concerns CMS should consider. For example, the commenter noted that monthly submission would not work well with the advancing care information performance category measure that requires reporting on patients choosing to view their patient portal, as patients would have to visit the portal during the month after their appointment in order for the portal visit to count towards the measure.

    Response: We appreciate the commenters' support. We intend to explore the capability of more frequent data submission to the MIPS program. As a starting point we intend to allow for optional, early data submissions for the qualified registry, QCDR, EHR, and attestation submission mechanisms. Specifically, we would allow submissions to begin earlier than January 2, 2018 for those individual MIPS eligible clinicians and groups who would like to optionally submit data early to us, if technically feasible. If it is not technically feasible to allow the submission period to begin prior to January 2 following the close of the performance period, the submission period will occur from January 2 through March 31 following the close of the performance period. Please note that the final deadline for these submission mechanisms will remain March 31, 2018. Additional details related to the technical feasibility of early data submissions will be made available at QualityPaymentProgram.cms.gov.

    Comment: Some commenters were concerned about timelines for the PQRS, VM, and Medicare EHR Incentive Program for EPs. The commenters believed it was unfair to expect MIPS eligible clinicians and groups to complete full calendar year reporting in 2016 for EHR Incentive Program and PQRS and then completely switch to a new program while still completing attestations for 2016 programs.

    Response: We understand the commenters' concerns and therefore have modified our proposed policy to allow more flexibility and time for MIPS eligible clinicians to transition to CEHRT and familiarize themselves with MIPS requirements. As discussed in section II.E.5.b.(3) of this final rule with comment period, we are finalizing the policy that MIPS eligible clinicians will only need to report for a minimum of a continuous 90-day period within CY 2017, for the majority of the submission mechanisms, for all data in a given performance category and submission mechanism, to qualify for an upward adjustment for the transition year.

    Comment: Another commenter called for the elimination of reporting electronically to data registries unless the registries have been empirically demonstrated to improve care and reduce cost in practice.

    Response: We appreciate the comment regarding the function of a qualified registry to improve care and reduce cost in practice. We agree that registries are a tool to drive value in clinical practice. For MIPS, a qualified registry or QCDR is required to provide attestation statements from the MIPS eligible clinicians during the data submission period that all of the data (quality measures, improvement activities, and advancing care information measures and activities, if applicable) and results are accurate and complete.

    Comment: Another commenter believed that limiting performance category data submission to one mechanism per performance category will limit innovation and disincentivize reporting the highest quality data available. The commenter believed that if MIPS eligible clinicians could report some of the required quality measures through a QCDR, they should be allowed to do so. Other commenters supported CMS' proposal to retain reporting mechanisms available in PQRS but opposed the proposal to allow only one submission mechanism per performance category, especially for the quality performance category. The commenters stated that some MIPS eligible clinicians may need to report through multiple mechanisms, such as MIPS eligible clinicians reporting a proposed specialty-specific measure set containing measures requiring differing submission mechanisms. A few commenters requested that CMS reconsider its proposal that all quality measures used by CMS must be submitted via the same reporting method because there are limits in the applicable reporting methods for certain measures, with some specialty-specific measure sets having very few EHR-enabled measures. These commenters believed the MIPS eligible clinicians should be able to use multiple reporting options. Another commenter urged CMS to limit the number of measure data reporting options so hospitals, health systems, and national stewards can accurately assess and benchmark performance over time. Another commenter recommended that for at least the first 3 to 5 years of the program, the submission mechanism flexibility to report measures using a variety of mechanisms remain in place.

    Response: MIPS eligible clinicians may choose whichever data submission mechanism works best for their practice. We have provided many data submission options to allow the utmost flexibility for the MIPS eligible clinician. We believe the proposal to allow multiple mechanisms, while restricting the number of mechanisms per performance category, offers flexibility without adding undue complexity. We discuss our policies related to multiple methods of reporting within a performance category in section II.E.5.a. of this final rule with comment period. We would also like to note that in section II.E.6.a. of this final rule with comment period we are seeking comment for further consideration on additional flexibilities that should be offered for MIPS eligible clinicians in this situation.

    In addition, we do not believe that allowing these various submission mechanisms impacts the ability to create reliable and accurate measure benchmarks. We discuss our policies related to measure benchmarks in more detail in section II.E.6.e. of this final rule with comment period.

    Comment: One commenter recommended that CMS require Medicare Part B claims to be submitted, rather than processed, within 90 days of the close of the applicable performance period, as MIPS eligible clinicians have no control over how quickly claims are processed and should not be held responsible for delays. Another commenter recommended that the submission time period be extended to 12 weeks, as more data will be required to be submitted than historically during that time period. Other commenters expressed concern with CMS' proposed submission deadline and requested a minimum 90-day submission period as MIPS eligible clinicians employed by health systems may not have access to December data until February and cumulative data even later. The commenters further believed that submission periods should be standardized regardless of submission mechanism and suggested a submission period from January 1 through March 31. A few commenters agreed with the proposed 90-day submission period policy for submittal of data via the claims mechanism and noted that the prior deadline was often too challenging for MIPS eligible clinicians to meet.

    Response: In establishing the submission deadlines, we took into account multiple considerations, including the type of submission mechanism, the MIPS performance period, and stakeholder input and our experiences under the submission deadlines for the PQRS, VM, and Medicare EHR Incentive Program. Our experience has shown that allowing for the submission of data after the close of the performance period provides either the MIPS eligible clinician or the third party intermediary time to ensure the data they submit to us is valid, accurate and has undergone necessary data quality checks. We do note, however, that as indicated previously in this final rule with comment period, we would allow submissions to begin earlier than January 2, 2018 for those individual MIPS eligible clinicians and groups who would like to optionally submit data early to us, provided that it is technically feasible. If it is not technically feasible, individual MIPS eligible clinicians and groups will still be able to submit data during the normal data submission period. Please note that the final deadline for all submission mechanisms will remain at March 31, 2018. However, for the Medicare Part B claims submission mechanism, we believe the best approach for the data submission deadline is to require Medicare Part B claims to be processed no later than 60 days following the close of the performance period.

    Comment: Another commenter stated that despite MIPS data submission via the CMS Web Interface, the process of data verification prior to submission is still manual and labor-intensive. The commenter encouraged CMS to explore methods for allowing test submissions (whether throughout the performance period or during the submission window) to uncover any possible submission errors; this would provide an opportunity for CMS to give feedback to MIPS eligible clinicians and third party intermediaries in advance of the submission deadline.

    Response: We appreciate the feedback and would like to note as indicated previously in this final rule with comment period, we would allow submissions to begin earlier than January 2, 2018 for those individual MIPS eligible clinicians and groups who would like to optionally submit data early to us, if technically feasible. If it is not technically feasible to allow the submission period to begin prior to January 2 following the close of the performance period, the submission period will occur from January 2 through March 31 following the close of the performance period. Please note that the final deadline for these submission mechanisms will remain March 31, 2018.

    Comment: We received comments on our request for feedback on whether it is advantageous to (1) have a shorter time frame following the close of the performance period; (2) have a submission period that would occur throughout the performance period, such as bi-annual or quarterly submissions; or (3) include January 1 in the submission period. A few commenters opposed shorter reporting timeframes for MIPS eligible clinicians using the CMS Web Interface or other reporting mechanisms. The commenters recommended, in general, quarterly or semi-annual data submission periods with a minimum report of at least once annually, and subsequently a quarterly report by CMS detailing MIPS eligible clinicians' progress. The commenters recommended a real-time tool for MIPS eligible clinicians to be able to track their MIPS progress. Another commenter stated that MIPS reporting deadlines should be no earlier than 2 months following the notification of QP status. Other commenters stated that bi-annual and quarterly submission period requirements would be advantageous only if CMS intended to provide timely MIPS eligible clinician feedback on a quarterly basis. They stated that if quarterly reporting were to be required, EHR vendors would need to have upfront notice regarding changes in measures in order to prepare. One commenter expressed that clinicians must know the standards by which they will be measured in advance of the performance period and require 3 months after the performance period to scrub data before submitting. The commenter stated that quarterly data submission would be too burdensome.

    Response: We appreciate the feedback and agree with the commenter that we want to strike the right balance between allowing for more frequent submissions, which would allow us to issue more frequent performance feedback, and ensuring that the process that is developed is not overly burdensome. Therefore, as indicated previously in this final rule with comment period, we would allow submissions to begin earlier than January 2, 2018 for those individual MIPS eligible clinicians and groups who would like to optionally submit data early to us, if technically feasible. If it is not technically feasible to allow the submission period to begin prior to January 2 following the close of the performance period, the submission period will occur from January 2 through March 31 following the close of the performance period. Please note that the final deadline for these submission mechanisms will remain March 31, 2018.

    After consideration of the comments received on the proposals regarding MIPS submission deadlines, we are finalizing the submission deadlines as proposed with one modification. Specifically, we are finalizing at § 414.1325(f) the data submission deadline for the qualified registry, QCDR, EHR, and attestation submission mechanisms as March 31 following the close of the performance period. The submission period will begin prior to January 2 following the close of the performance period, if technically feasible. For example, for the first MIPS performance period, the data submission period will begin prior to January 2, 2018, and run through March 31, 2018, if technically feasible. If it is not technically feasible to allow the submission period to begin prior to January 2 following the close of the performance period, the submission period will occur from January 2 through March 31 following the close of the performance period. In any case, the final deadline will remain March 31, 2018.

    We further finalize at § 414.1325(f)(2) that, for the Medicare Part B claims submission mechanism, claims with dates of service during the performance period must be processed no later than 60 days following the close of the performance period. Lastly, for the CMS Web Interface submission mechanism, we are finalizing at § 414.1325(f)(3) that the submission period will be an 8-week period following the close of the performance period that will begin no earlier than January 1 and end no later than March 31. For example, the CMS Web Interface submission period could span an 8-week timeframe beginning January 16 and ending March 13. The specific deadline during this timeframe will be published on the CMS Web site.
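
    For convenience, the sketch below restates the finalized deadlines for the first MIPS performance period as simple data. It is an informal illustration of the preamble text only; the names and structure are hypothetical, and the March 1, 2018 result for claims is derived by adding 60 days to the close of CY 2017 rather than stated explicitly in the regulation text.

        from datetime import date, timedelta

        PERFORMANCE_PERIOD_END = date(2017, 12, 31)

        # Finalized CY 2017 data submission rules as described in the preamble;
        # the keys and structure here are hypothetical, for illustration only.
        SUBMISSION_RULES_2017 = {
            "qualified_registry_qcdr_ehr_attestation": {
                # Window may open before January 2, 2018 if technically feasible;
                # otherwise it runs from January 2 through the final deadline.
                "final_deadline": date(2018, 3, 31),
            },
            "medicare_part_b_claims": {
                # Claims with 2017 dates of service must be processed no later
                # than 60 days after the close of the performance period.
                "claims_processed_by": PERFORMANCE_PERIOD_END + timedelta(days=60),
            },
            "cms_web_interface": {
                # An 8-week window beginning no earlier than January 1 and ending
                # no later than March 31; the exact dates are published by CMS.
                "example_window": (date(2018, 1, 16), date(2018, 3, 13)),
            },
        }

        print(SUBMISSION_RULES_2017["medicare_part_b_claims"]["claims_processed_by"])  # 2018-03-01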

    b. Quality Performance Category

    (1) Background

    (a) General Overview and Strategy

    The MIPS program is one piece of the broader health care infrastructure needed to reform the health care system and improve health care quality, efficiency, and patient safety for all Americans. We seek to balance the sometimes competing considerations of the health system and minimize burdens on health care providers given the short timeframe available under the MACRA for implementation. Ultimately, MIPS should, in concert with other provisions of the Act, support health care that is patient-centered, evidence-based, prevention-oriented, outcome driven, efficient, and equitable.

    Under MIPS, clinicians are incentivized to engage in improvement measures and activities that have a proven impact on patient health and safety and are relevant to their patient population. We envision a future state where MIPS eligible clinicians will be seamlessly using their certified health IT to leverage advanced clinical quality measurement to manage patient populations with the least amount of workflow disruption and reporting burden. Ensuring clinicians are held accountable for patients' transitions across the continuum of care is imperative. For example, when a patient is discharged from an emergency department (ED) to a primary care physician office, health care providers on both sides of the transition should have a shared incentive for a seamless transition. Clinicians may also be working with a QCDR to abstract and report quality measures to CMS and commercial payers and to track patients longitudinally over time for quality improvement.

    Ideally, clinicians in the MIPS program will have accountability for quality and cost measures that are related to one another and will be engaged in improvement activities that directly help them improve in both specialty-specific clinical practice and more holistic areas (for example, patient experience, prevention, population health). The cost performance category will provide clinicians with information needed to deliver efficient, effective, and high-value care. Finally, MIPS eligible clinicians will be using CEHRT and other tools which leverage interoperable standards for data capture, usage, and exchange in order to facilitate and enhance patient and family engagement, care coordination among diverse care team members, and continuous learning and rapid-cycle improvement leveraging advanced quality measurement and safety initiatives.

    One of our goals in the MIPS program is to use a patient-centered approach to program development that will lead to better, smarter, and healthier care. Part of that goal includes meaningful measurement which we hope to achieve through:

    • Measuring performance on measures that are relevant and meaningful.

    • Maximizing the benefits of CEHRT.

    • Flexible scoring that recognizes all of a MIPS eligible clinician's efforts above a minimum level of effort and rewards performance that goes above and beyond the norm.

    • Measures that are built around real clinical workflows and data captured in the course of patient care activities.

    • Measures and scoring that can discern meaningful differences in performance in each performance category and collectively between low and high performers.

    (b) The MACRA Requirements

    Sections 1848(q)(1)(A)(i) and (ii) of the Act require the Secretary to develop a methodology for assessing the total performance of each MIPS eligible clinician according to performance standards and, using that methodology, to provide for a final score for each MIPS eligible clinician. Section 1848(q)(2)(A)(i) of the Act requires us to use the quality performance category in determining each MIPS eligible clinician's final score, and section 1848(q)(2)(B)(i) of the Act describes the measures and activities that must be specified under the quality performance category.

    The statute does not specify the number of quality measures on which a MIPS eligible clinician must report, nor does it specify the amount or type of information that a MIPS eligible clinician must report on each quality measure. However, section 1848(q)(2)(C)(i) of the Act requires the Secretary, as feasible, to emphasize the application of outcomes-based measures.

    Section 1848(q)(1)(E) of the Act requires the Secretary to encourage the use of QCDRs, and section 1848(q)(5)(B)(ii)(I) of the Act requires the Secretary to encourage the use of CEHRT and QCDRs for reporting measures under the quality performance category under the final score methodology, but the statute does not limit the Secretary's discretion to establish other reporting mechanisms.

    Section 1848(q)(2)(C)(iv) of the Act generally requires the Secretary to give consideration to the circumstances of non-patient facing MIPS eligible clinicians and allows the Secretary, to the extent feasible and appropriate, to apply alternative measures or activities to such clinicians.

    (c) Relationship to the PQRS and VM

    Previously, the PQRS, which is a pay-for-reporting program, defined requirements for satisfactory reporting and satisfactory participation to earn payment incentives or to avoid a PQRS payment adjustment. EPs could choose from a number of reporting mechanisms and options. Based on the reporting option, the EP had to report on a certain number of measures for a certain portion of their patients. In addition, the measures had to span a set number of National Quality Strategy (NQS) domains; information related to the NQS can be found at http://www.ahrq.gov/workingforquality/about.htm. The VM built its policies off the PQRS criteria for avoiding the PQRS payment adjustment. Groups that did not meet the criteria as a group to avoid the PQRS payment adjustment, or groups that did not have at least 50 percent of their EPs meet the criteria as individuals to avoid the PQRS payment adjustment, automatically received the maximum negative adjustment established under the VM and were not measured on their quality performance.

    MIPS, in contrast to PQRS, is not a pay-for-reporting program, and we proposed that it would not have a “satisfactory reporting” requirement. However, to develop an appropriate methodology for scoring the quality performance category, we believe that MIPS needs to define the expected data submission criteria and that the measures need to meet a data completeness standard. In the proposed rule (81 FR 28184), we proposed the minimum data submission criteria and data completeness standard for the MIPS quality performance category for the submission mechanisms that were discussed in the proposed rule (81 FR 28181), as well as benchmarks against which eligible clinicians' performance would be assessed. The scoring methodology discussed in the proposed rule (81 FR 28220) would adjust the quality performance category scores based on whether or not an individual MIPS eligible clinician or group met these criteria and how their performance compared against the benchmarks.

    In the MIPS and APMs RFI, we requested feedback on numerous provisions related to data submission criteria including: How many measures should be required? Should we maintain the policy that measures cover a specified number of NQS domains? How do we apply the quality performance category to MIPS eligible clinicians that are in specialties that may not have enough measures to meet our defined criteria? Several themes emerged from the comments. Commenters expressed concern that the general PQRS satisfactory reporting requirement to report nine measures across three NQS domains is too high and forces eligible clinicians to report measures that are not relevant to their practices. The commenters requested a more meaningful set of requirements that focused on patient care, with some expressing the opinion that NQS domain requirements are arbitrary and make reporting more difficult. Some commenters requested that we align measures across payers and consider using core measure sets. Other commenters expressed the need for flexibility and different reporting options for different types of practices.

    In response to the MIPS and APMs RFI comments, and based on our desire to simplify the MIPS reporting system and make the measurement more meaningful, we proposed MIPS quality criteria that focus on measures that are important to beneficiaries and maintain some of the flexibility from PQRS, while addressing several of the issues that concerned commenters.

    • To encourage meaningful measurement, we proposed to allow individual MIPS eligible clinicians and groups the flexibility to determine the most meaningful measures and reporting mechanisms for their practice.

    • To simplify the reporting criteria, we are aligning the submission criteria for several of the reporting mechanisms.

    • To reduce administrative burden and focus on measures that matter, we are lowering the expected number of the measures for several of the reporting mechanisms, yet are still requiring that certain types of measures be reported.

    • To create alignment with other payers and reduce burden on MIPS eligible clinicians, we are incorporating measures that align with other national payers.

    • To create a more comprehensive picture of the practice performance, we also proposed to use all-payer data where possible.

    As beneficiary health is always our top priority, we proposed criteria to continue encouraging the reporting of certain measures such as outcome, appropriate use, patient safety, efficiency, care coordination, or patient experience measures. However, we proposed to remove the requirement for measures to span across multiple domains of the NQS. We continue to believe the NQS domains to be extremely important and we encourage MIPS eligible clinicians to continue to strive to provide care that focuses on: effective clinical care, communication, efficiency and cost reduction, person and caregiver-centered experience and outcomes, community and population health, and patient safety. While we will not require that a certain number of measures must span multiple domains, we encourage MIPS eligible clinicians to select measures that cross multiple domains. In addition, we believe the MIPS program overall, with the focus on cost, improvement activities, and advancing care information performance categories, will naturally cover many elements in the NQS.

    (2) Contribution to the Final Score

    For the 2019 MIPS adjustment year, the quality performance category will account for 50 percent of the final score, subject to the Secretary's authority to assign different scoring weights under section 1848(q)(5)(F) of the Act. Section 1848(q)(2)(E)(i)(I)(aa) of the Act states the quality performance category will account for 30 percent of the final score for MIPS. However, section 1848(q)(2)(E)(i)(I)(bb) of the Act stipulates that for the first and second years for which MIPS applies to payments, the percentage of the final score applicable for the quality performance category will be increased so that the total percentage points of the increase equals the total number of percentage points by which the percentage applied for the cost performance category is less than 30 percent. Section 1848(q)(2)(E)(i)(II)(bb) of the Act requires that, for the transition year for which MIPS applies to payments, not more than 10 percent of the final score shall be based on performance on the cost performance category. Furthermore, section 1848(q)(2)(E)(i)(II)(bb) of the Act states that, for the second year for which MIPS applies to payments, not more than 15 percent of the final score shall be based on performance on the cost performance category. We proposed at § 414.1330 that for payment years 2019 and 2020, 50 percent and 45 percent, respectively, of the MIPS final score would be based on performance on the quality performance category. For the third and future years, 30 percent of the MIPS final score would be based on performance on the quality performance category.
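
    Stated as simple arithmetic (a restatement of the statutory redistribution described above, not regulatory text), the quality weight for a given payment year equals 30 percent plus the number of percentage points by which the cost weight for that year falls short of 30 percent. Under the proposed cost weights, this yields 30 + (30 - 10) = 50 percent for the 2019 MIPS payment year and 30 + (30 - 15) = 45 percent for 2020; once the cost performance category weight reaches 30 percent, the quality weight returns to the statutory 30 percent.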

    Section 1848(q)(5)(B)(i) of the Act requires the Secretary to treat any MIPS eligible clinician who fails to report on a required measure or activity as achieving the lowest potential score applicable to the measure or activity. Specifically, under our proposed scoring policies, a MIPS eligible clinician or group that reports on all required measures and activities could potentially obtain the highest score possible within the performance category, presuming they performed well on the measures and activities they reported. A MIPS eligible clinician or group who does not meet the reporting threshold would receive a zero score for the unreported items in the category (in accordance with section 1848(q)(5)(B)(i) of the Act). The MIPS eligible clinician or group could still obtain a relatively good score by performing very well on the remaining items, but a zero score would prevent the MIPS eligible clinician or group from obtaining the highest possible score.

    The following is a summary of the comments we received regarding our general strategy and the quality performance category contribution to the final score.

    Comment: Numerous commenters supported the focus on quality in the proposed rule and our proposal that, for payment year 2019, 50 percent of the final score would be based on performance on quality measures.

    Response: We thank the commenters for their support.

    Comment: Other commenters were concerned with the quality performance category's final score weights decreasing to 30 percent for payment years 2021 and beyond, as some eligible clinicians will not be eligible to participate in MIPS and receive a MIPS adjustment until payment year 2021. The commenters believed this would be a disadvantage with the cost performance category final score weight increasing. The commenters noted that increasing penalties under MIPS would also place such clinicians in an unfair position. The commenters requested that CMS make appropriate considerations for such MIPS eligible clinicians.

    Response: We appreciate the concerns raised that MIPS eligible clinicians who are not initially eligible to participate in MIPS and receive MIPS adjustments until payment year 2021 might have a different starting point than those MIPS eligible clinicians who begin participating in CY 2017. We note that those MIPS eligible clinicians who are not initially eligible to participate in MIPS and receive MIPS adjustments do have the option to volunteer to report. By volunteering to report, these eligible clinicians will gain experience with the MIPS scoring system prior to being required to do so. We will, however, take the commenters' recommendations into consideration for future rulemaking.

    Comment: Another commenter requested that when the time comes to include rehabilitation therapists in the MIPS program, they be granted the same stepped-down percentage of scoring for quality and stepped-up percentage of scoring for cost that are in place for those MIPS eligible clinicians participating in the MIPS program in the first 2 years. Such an approach would give those MIPS eligible clinicians the same time and consideration that doctors of medicine or osteopathy, doctors of dental surgery or dental medicine, physician assistants, nurse practitioners, clinical nurse specialists, and certified registered nurse anesthetists will receive during their transition to the MIPS program.

    Response: We would like to explain that in the first 2 years of the MIPS program, the quality weight will be higher and the cost weight will be lower. In addition, we note that those MIPS eligible clinicians who are not initially eligible to participate in MIPS in 2017 for the 2019 MIPS payment year do have the option to voluntarily report. By volunteering to report, these eligible clinicians will gain experience with the MIPS scoring system prior to being required to do so. We thank the commenter for their feedback and will take their comments into consideration in future rulemaking.

    Comment: One commenter supported CMS' proposal to incentivize MIPS eligible clinicians to use CEHRT for end-to-end electronic reporting.

    Response: We thank the commenter for their support.

    Comment: One commenter stated they were concerned about how different evaluation criteria have been weighed in the MIPS program. They believed there was an arbitrary nature and bias in the weighting for MIPS which they stated cannot be corrected through a change in weighting. The commenter provided an example of the scoring system including bonus points, which they believed results in an inaccurate view of real outcomes.

    Response: We do not believe that the evaluation criteria we have developed and proposed for MIPS are arbitrary or biased. Moreover, as we explained in the proposed rule (81 FR 28255), bonus points are intended to recognize quality measurement priorities. We believe that recognition is necessary to focus quality improvement efforts on specific CMS goals.

    Comment: Another commenter suggested for the quality performance measures that CMS adopt standards and mapping tools by ensuring that eCQM calculations are accurate. In addition, the commenter stated CMS should adopt standards to ensure different EHRs are accurately and uniformly capturing eCQMs. Another commenter recommended that CMS ensure that the eCQMs in the quality performance category align with measures used by other payers and accrediting and certification programs (for example, NCQA), noting that if the specifications do not align, the commenter believed that shared data will not help streamline the reporting processes.

    Response: We thank the commenters and agree that adopting standards to accurately and uniformly capture eCQMs is essential. We currently use the Health Level Seven (HL7) standard Health Quality Measures Format (HQMF) for electronically documenting eCQM content as well as the Quality Data Model (QDM) for measure logic. We will continue to ensure industry standards are used and refined in order to best capture eCQM data.

    Comment: One commenter recommended that CMS consider merging the quality and cost performance categories as a ratio of quality and cost.

    Response: We do not believe we have the statutory authority to merge the quality and cost performance categories. MACRA specified the four performance categories we are required to incorporate into the MIPS program.

    After consideration of the comments received regarding our general strategy and the quality performance category contribution to the final score and the additional factors described in section II.E.5.b. of this final rule with comment period, we are not finalizing this policy as proposed. Rather, as discussed in section II.E.5.e. of this final rule with comment period, the cost performance category will account for 0 percent of the final score in 2019, 10 percent of the final score in 2020, and 30 percent of the final score in 2021 and future MIPS payment years, as required by statute. In accordance with section 1848(q)(2)(E)(i)(I)(bb) of the Act, we are redistributing the final score weight from the cost performance category to the quality performance category. Therefore, we are finalizing at § 414.1330(b) that for MIPS payment years 2019 and 2020, 60 percent and 50 percent, respectively, of the MIPS final score will be based on performance on the quality performance category. For the third and future years, 30 percent of the MIPS final score will be based on performance on the quality performance category.
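
    The finalized weighting can be restated informally as follows; this is an illustrative sketch of the arithmetic described above, with hypothetical names, and is not regulatory text:

        # Finalized cost performance category weights (in percentage points)
        # by MIPS payment year.
        COST_WEIGHT_PCT = {2019: 0, 2020: 10, 2021: 30}

        def quality_weight_pct(payment_year: int) -> int:
            # Statutory 30 percent plus the percentage points by which the
            # cost weight falls short of 30 percent for that payment year.
            return 30 + (30 - COST_WEIGHT_PCT[payment_year])

        for year in (2019, 2020, 2021):
            print(year, quality_weight_pct(year))  # 2019 60, 2020 50, 2021 30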

    (3) Quality Data Submission Criteria

    (a) Submission Criteria

    The following are the proposed criteria for the various proposed MIPS data submission mechanisms described in the proposed rule (81 FR 28181) for the quality performance category.

    (i) Submission Criteria for Quality Measures Excluding CMS Web Interface and CAHPS for MIPS

    We proposed at § 414.1335 that individual MIPS eligible clinicians submitting data via claims and individual MIPS eligible clinicians and groups submitting via all mechanisms (excluding CMS Web Interface, and for CAHPS for MIPS survey, CMS-approved survey vendors) would be required to meet the following submission criteria. We proposed that for the applicable 12-month performance period, the MIPS eligible clinician or group would report at least six measures including one cross-cutting measure (if patient-facing) found in Table C of the Appendix in this final rule with comment period and including at least one outcome measure. If an applicable outcome measure is not available, we proposed that the MIPS eligible clinician or group would be required to report one other high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures) in lieu of an outcome measure. If fewer than six measures apply to the individual MIPS eligible clinician or group, then we proposed the MIPS eligible clinician or group would be required to report on each measure that is applicable.

    MIPS eligible clinicians and groups would select their measures from either the list of all MIPS measures in Table A of the Appendix in this final rule with comment period, or a specialty-specific measure set in Table E of the Appendix in this final rule with comment period. We noted that some specialty-specific measure sets include measures grouped by subspecialty; in these cases, the measure set is defined at the subspecialty level.

    We designed the specialty-specific measure sets to address feedback we have received in the past that the quality measure selection process can be confusing. A common complaint about PQRS was that EPs were asked to review close to 300 measures to find applicable measures for their specialty. The specialty-specific measure sets in Table E of the Appendix in this final rule with comment period contain the same measures that are within Table A of the Appendix in this final rule with comment period; however, they are sorted consistent with the American Board of Medical Specialties (ABMS) specialties. Please note that these specialty-specific measure sets are not all inclusive of every specialty or subspecialty. We requested comments on the measures proposed under each of the specialty-specific measure sets. Specifically, we solicited comments on whether or not the measures proposed for inclusion in the specialty-specific measure sets are appropriate for the designated specialty or subspecialty and whether there are additional proposed measures that should be included in a particular specialty-specific measure set.

    Furthermore, in the proposed rule we noted that there were some special scenarios for those MIPS eligible clinicians who selected their measures from a specialty-specific measure set at either the specialty or subspecialty level (Table E of the Appendix in this final rule with comment period). We provided the following example in the proposed rule: some of the specialty-specific measure sets have fewer than six measures, and in these instances MIPS eligible clinicians would report on all of the available measures within the set, including an outcome measure or, if an outcome measure is unavailable, another high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures), as well as a cross-cutting measure if they are a patient-facing MIPS eligible clinician. To illustrate, at the subspecialty level, the electrophysiology cardiac specialist specialty-specific measure set only has three measures within the set, all of which are outcome measures. MIPS eligible clinicians and groups reporting on the electrophysiology cardiac specialist specialty-specific measure set would report on all three measures, and because these MIPS eligible clinicians are patient-facing, they must also report on a cross-cutting measure as defined in Table C of the Appendix in this final rule with comment period. In other scenarios, the specialty-specific measure sets may have six or more measures, and in these instances MIPS eligible clinicians would report on at least six measures, including at least one cross-cutting measure and at least one outcome measure or, if an outcome measure is unavailable, another high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measure). Specifically, the general surgery specialty-specific measure set has eight measures within the set, including four outcome measures, three other high priority measures, and one process measure. MIPS eligible clinicians and groups reporting on the general surgery specialty-specific measure set would either have the option to report on all measures within the set or could select six measures from the set; because these MIPS eligible clinicians are patient-facing, one of their six measures must be a cross-cutting measure as defined in Table C of the Appendix in this final rule with comment period.
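
    The two examples above can be restated informally as follows. This is a rough sketch of the proposed logic only, with hypothetical names; as discussed later in this section, the finalized transition-year policy differs in some respects.

        def proposed_measure_count(set_size: int, patient_facing: bool) -> dict:
            # Under the proposal: report the lesser of six measures or every
            # measure in the specialty-specific set, include an outcome (or, if
            # none is available, another high priority) measure, and report a
            # cross-cutting measure from Table C if the clinician is patient facing.
            return {
                "measures_from_set": min(6, set_size),
                "outcome_or_high_priority_required": True,
                "cross_cutting_required": patient_facing,
            }

        # Electrophysiology cardiac specialist set: three measures, all outcomes.
        print(proposed_measure_count(3, patient_facing=True))
        # General surgery set: eight measures; at least six must be reported.
        print(proposed_measure_count(8, patient_facing=True))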

    As noted above, the submission criteria apply to each specialty-specific measure set, or to the measure set defined at the subspecialty level, if applicable. Regardless of the number of measures that are contained in a specialty-specific measure set, MIPS eligible clinicians reporting on a measure set would be required to report at least one cross-cutting measure and either at least one outcome measure or, if no outcome measures are available in that specialty-specific measure set, another high priority measure. We proposed that MIPS eligible clinicians or groups that report on a specialty-specific measure set that includes more than six measures can report on as many measures as they wish as long as they meet the minimum requirement to report at least six measures, including one cross-cutting measure and one outcome measure or, if an outcome measure is not available, another high priority measure. We solicited comment on our proposal to allow reporting of specialty-specific measure sets to meet the submission criteria for the quality performance category, including whether it is appropriate to allow reporting of a measure set at the subspecialty level to meet such criteria, since reporting at the subspecialty level would require reporting on fewer measures.

    Alternatively, we solicited comment on whether we should only consider reporting up to six measures at the higher overall specialty level to satisfy the submission criteria. We noted that our proposal to allow reporting of specialty-specific measure sets at the subspecialty level was intended to address the fact that very specialized clinicians who may be represented by our subspecialty categories may only have one or two applicable measures. Further, we note that we will continue to work with specialty societies and other measure developers to increase the availability of applicable measures for specialists across the board.

    We proposed to define a high priority measure at § 414.1305 as an outcome, appropriate use, patient safety, efficiency, patient experience, or care coordination quality measure. These measures are identified in Table A of the Appendix in this final rule with comment period. We further note that measure types listed as an “intermediate outcome” are considered outcome measures for the purposes of scoring (see 81 FR 28247).

    As an alternative to the above proposals, we also considered requiring individual MIPS eligible clinicians submitting via claims and individual MIPS eligible clinicians and groups submitting via all mechanisms (excluding the CMS Web Interface and, for CAHPS for MIPS survey, CMS-approved survey vendors) to meet the following submission criteria. For the applicable 12-month performance period, the MIPS eligible clinician or group would report at least six measures including one cross-cutting measure (if patient-facing) found in Table C of the Appendix in this final rule with comment period and one high priority measure (outcome, appropriate use, patient safety, efficiency, patient experience, and care coordination measures). If fewer than six measures apply to the individual MIPS eligible clinician or group, then the MIPS eligible clinician or group must report on each measure that is applicable. MIPS eligible clinicians and groups would have to select their measures from either the list of all MIPS measures in Table A of the Appendix in this final rule with comment period or a specialty-specific measure set in Table E of the Appendix in this final rule with comment period.

    As discussed in the proposed rule (81 FR 28173), MIPS eligible clinicians who are non-patient facing MIPS eligible clinicians would not be required to report any cross-cutting measures. For further details on non-patient facing MIPS eligible clinician discussions, we refer readers to section II.E.1.b. of this final rule with comment period.

    In addition, in the proposed rule (81 FR 28187) we discussed our intention to develop a validation process to review and validate a MIPS eligible clinician's or group's ability to report on at least six quality measures, or a specialty-specific measure set, with a sufficient sample size, including at least one cross-cutting measure (if the MIPS eligible clinician is patient-facing) and either an outcome measure if one is available or another high priority measure. If a MIPS eligible clinician or group had the ability to report on the minimum required measures with sufficient sample size and elects to report on fewer than the minimum required measures, then, as described in the proposed scoring algorithm (81 FR 28254), the missing measures would be scored with a zero performance score.

    Our proposal is a decrease from the 2016 PQRS requirement to report at least nine measures. In addition, as previously noted, we proposed to no longer require reporting across multiple NQS domains. We believed these proposals were the best approach for the quality performance category because they decrease the MIPS eligible clinician's reporting burden while focusing on more meaningful types of measures.

    We also note that we believe that outcome measures are more valuable than clinical process measures and are instrumental to improving the quality of care patients receive. To keep the emphasis on such measures in the statute, we plan to increase the requirements for reporting outcome measures over the next several years through future rulemaking, as more outcome measures become available. For example, we may increase the required number of outcome measures to two or three. We also believe that appropriate use, patient experience, safety, and care coordination measures are more relevant than clinical process measures for improving care of patients. Through future rulemaking, we plan to increase the requirements for reporting on these types of measures over time.

    In consideration of which MIPS measures to identify as reasonably focused on appropriate use, we have selected measures which focus on minimizing overuse of services, treatments, or the related ancillary testing that may promote overuse of services and treatments. We have also included select measures of underuse of specific treatments or services that either (1) reflected overuse of alternative treatments and services that are not evidence-based or supported by clinical guidelines; or (2) for which the intent of the measure reflected overuse of alternative treatments and services that were not evidence-based or supported by clinical guidelines. We realize there are differing opinions on what constitutes appropriate use. Therefore, we solicited comments on what specific measures of overuse or underuse should be included as appropriate use measures.

    We plan to incorporate new measures as they become available and will give the public the opportunity to comment on these provisions through future notice and comment rulemaking. Under the Improving Medicare Post-Acute Transformation (IMPACT) Act of 2014, the Office of ASPE has been conducting studies on the issue of risk adjustment for sociodemographic factors on quality measures and cost, as well as other strategies for including SDS evaluation in CMS programs. We will closely examine the ASPE studies when they are available and incorporate findings as feasible and appropriate through future rulemaking. We look forward to working with stakeholders in this process. In addition, we solicited comments on ways to minimize potential gaming, for example, requiring MIPS eligible clinicians to report only on measures for which they have a sufficient sample size, to address concerns that MIPS eligible clinicians may solely report on measures that do not have a sufficient sample size to decrease the overall weight on their quality score. More information on the way we proposed to score MIPS eligible clinicians in this scenario is discussed in the proposed rule (81 FR 28187). We also solicited comment on whether these proposals sufficiently encourage clinicians and measure developers to move away from clinical process measures and towards outcome measures and measures that reflect other NQS domains. We requested comments on these proposals.

    The following is a summary of the comments we received regarding our proposal on submission criteria for quality measures excluding CMS Web Interface and CAHPS for MIPS.

    Comment: Many commenters expressed support for lowering the reporting threshold from nine to six quality measures, including one cross-cutting and one outcome measure, and no longer requiring that MIPS eligible clinicians report on measures that span three NQS domains.

    Response: We thank the commenters for their support.

    Comment: Another commenter appreciated the decreased requirement relative to PQRS of reporting on six quality measures for MIPS; however, the commenter was disappointed about our proposal to maintain an absolute minimum number of measures that MIPS eligible clinicians are required to report. The commenter believed that the current quality measures list is insufficient to cover all practice types. The commenter stated that the challenge of participating would only be exacerbated by imposition of a minimum number of measures. The commenter appreciated the lack of penalty if a MIPS eligible clinician is unable to report on the minimum requirement when they do not have applicable measures. A few commenters noted that emergency clinicians who report via claims cannot report on six measures. They stated that it was not clear from the proposal whether these MIPS eligible clinicians would still be able to qualify for the full potential score available under the scoring methodology. Another commenter requested that CMS give special consideration to clinicians practicing at urgent care centers, including reducing the required number of quality measures to report from six to four.

    Response: We would like to note that MIPS eligible clinicians with fewer than six applicable measures are not required to report six measures, and must only report those measures that are applicable. While claims-based reporting is one submission mechanism available, emergency clinicians also have the option to use the other submission mechanisms available to satisfy the requirements. We further note that we have revised the emergency medicine specialty-specific measure set whereby the set now includes 17 measures with 11 of them reportable via claims. Emergency medicine clinicians will be able to report measures to earn the full potential score.

    Comment: Some commenters disagreed with our proposed measure threshold of six measures, and recommended maintaining the PQRS threshold of reported measures at nine. These commenters were concerned that lowering the threshold of reported measures (from nine to six) sends the wrong signal about the importance of quality measures within MIPS. The commenters believed MIPS eligible clinicians might pick and choose measures that they perform well on, providing a less comprehensive picture of quality of care. Instead, the commenters stated CMS should establish mandatory core sets of measures by specialty/subspecialty groups to signal areas where MIPS eligible clinicians should focus their attention and increase comparability across MIPS eligible clinicians. Other commenters believed a core set of measures would create unequal performance by groups of different sizes and specialties, allowing single specialty groups to report only measures specific to their practice. The commenters recommended that CMS establish benchmarks for a set of core quality measures.

    Conversely, other commenters disagreed with our proposed measure threshold of six measures, and recommended that the measure threshold be lowered. Recommended thresholds ranged from four measures to three measures to as few as one or two measures. These commenters indicated that a reduced threshold would allow MIPS eligible clinicians to choose a few measures that will have a high impact on care improvements. Additionally, commenters were concerned that the threshold of six may burden practices that are struggling to find relevant measures and jeopardize their ability to achieve the maximum number of points under the quality performance category. The commenters stated that fewer required measures will reduce administrative burden, better reflect the conditions and realities of medical practice, allow MIPS eligible clinicians time to focus on quality improvement, and lead to more accurate measurement and a better snapshot of quality. Some commenters requested that CMS, the Department of Health (DOH), The Joint Commission (TJC), and Det Norske Veritas (DNV) join forces to focus on meaningful improvement.

    Response: We do not believe the thresholds for quality measurement should be lowered further. In any quality measurement program, we must balance the data collection burden that we must impose on MIPS eligible clinicians with the resulting quality performance data that we will receive. We believe that without sufficiently robust performance data, we cannot accurately measure quality performance. Therefore, we believe that we have appropriately struck a balance between requiring sufficient quality measure data from clinicians and ensuring robust quality measurement at this time. We want to emphasize that we are committed to working with stakeholders to improve our quality programs including MIPS. An integral part of these programs are quality measures that reflect the scope and variety of the many types of clinical practice. It is important that we offer enough quality measures that assess the various practice types and that clinicians report sufficient measures to allow a reasonable comparison of their quality performance.

    We do note that for the initial performance period under the MIPS many flexibilities have been implemented, including a modified scoring approach which ensures that MIPS eligible clinicians who prefer to only submit data on one or two measures can avoid a negative MIPS adjustment. Furthermore, our modified scoring approach incentivizes high performers who have a robust data set available. We refer readers to section II.E.6. of this final rule with comment period for more details on the scoring approach.

    Comment: Another commenter referenced our proposal, which stated that “if fewer than six measures apply to the individual MIPS eligible clinician or group, then the MIPS eligible clinician or group would be required to report on each measure that is applicable,” and mentioned that this statement seemed to provide no penalty. The commenter requested clarification on this language to ensure that groups would not be penalized for submitting fewer than six measures. Another commenter requested clarification on how CMS proposes to define “applicable.” One commenter suggested that MIPS eligible clinicians should have the opportunity to pre-certify with CMS that fewer than six measures are available to them prior to the beginning of the performance period.

    Response: While we expect this to occur in only rare circumstances, we would like to confirm the commenter's understanding. If fewer than six measures apply to the MIPS eligible clinician or group, the MIPS eligible clinician or group would be required to report on each applicable measure. Additionally, groups that report on a specialty-specific measure set that has fewer than six measures would only need to report the measures within that specialty-specific measure set. Generally, we define “applicable” to mean measures relevant to a particular MIPS eligible clinician's services or care rendered. The MIPS eligible clinician should be able to review the measure specifications to see if their services fall into the denominator of the measure. For example, if a MIPS eligible clinician who is an interventional radiologist decides to submit data via a specialty-specific measure set by selecting the interventional radiology specialty-specific measure sets, this MIPS eligible clinician would not have six measures applicable to them. Therefore, the MIPS eligible clinician would submit data on all of the measures defined within the specialty-specific measure set. MIPS eligible clinicians who do not have six individual measures available to them should select their appropriate specialty-specific measure set, because that pre-defines which measures are applicable to their specialty and provides certain assurances to them. For the majority of MIPS eligible clinicians, choosing the specialty-specific measure sets provides a means to select applicable measures and, if the set includes fewer than six measures, this also assures that there is no need to report any additional measures. Furthermore, we will apply a clinical relation test to the quality data submissions to determine if the MIPS eligible clinician could have reported other measures. For more information on the clinical relation test, see section II.E.6.a.(2) of this final rule with comment period, where we discuss our validation process. Lastly, we are working to provide additional toolkits and educational materials to MIPS eligible clinicians prior to the performance period that will ease the burden on identification of which measures are applicable to MIPS eligible clinicians. If the MIPS eligible clinician requires assistance, they may contact the Quality Payment Program Service Center.

    Comment: Another commenter requested that CMS add a requirement that MIPS-eligible clinicians report at least six measures, including one cross-cutting measure (if patient-facing), at least one outcome measure, and at least one high-priority measure. The commenter was concerned that high-priority measures may not be reported if they are a substitute for outcome measures.

    Response: We agree with the commenter that we want to maintain an emphasis on both outcome and high priority measures within the MIPS. We will take this comment into consideration for future rulemaking.

    Comment: Numerous commenters supported the proposal to encourage reporting of outcome measures over clinical process measures. One commenter noted that significant work remains to ensure measurement efforts across the health care system are focused on the most important quality issues, while other commenters recommended that future quality metrics emphasize patient care and health outcomes.

    Response: We thank the commenters for their support. We intend to finalize our proposal that one of the six measures a MIPS eligible clinician must report on is an outcome measure.

    Comment: One commenter recommended that patient experience and patient satisfaction should not be categorized as quality metrics since these measures and surveys include factors outside the control of the clinician. The commenter stated that patient satisfaction, while important, does not always correlate with better clinical outcomes and may even conflict with clinically indicated treatments. In addition, another commenter expressed concern that the emphasis on patient opinions and their care experiences drives up cost.

    Response: We do believe it is important to assess patient experience of care, as it encompasses items such as communication and family engagement, which are important factors of the health care experience and are important to patients and families. While patient experience may not always be directly related to health outcomes, there is evidence of a correlation between higher scores on patient experience surveys and better health outcomes. Please refer to http://www.ahrq.gov/cahps/consumer-reporting/research/index.html for more information on AHRQ studies pertaining to patient experience surveys and better health outcomes.

    Comment: A few commenters supported the proposed reduction in burden in the MIPS quality performance category, but noted that MIPS eligible clinician specialties lacking validated outcome measures or “high priority” measures are likely to be at a disadvantage under this performance category because the quality performance category lacks sufficient specialty-specific quality measures. The commenters recommended that CMS work with specialty societies and measure development bodies to increase the availability of specialty-specific quality measure sets. Another commenter supported the reduced number of quality measures required for reporting, but recommended that specialty MIPS eligible clinicians not be required to report a cross-cutting measure. Some commenters supported CMS's proposal to allow the reporting of specialty and subspecialty specific measure sets to meet the submission criteria for the quality performance category, even if it would mean a MIPS eligible clinician or group would report on fewer than six measures.

    Response: We thank the commenters for their feedback. We believe that all MIPS eligible clinicians regardless of their specialty have a high priority measure available. Therefore, we intend to finalize that if a MIPS eligible clinician does not have an outcome measure available, they are required to report on a high priority measure.

    Comment: Several commenters recommended eliminating the proposed requirement that an outcome measure and a cross-cutting measure be reported in the quality performance category. One commenter believed this proposal may disadvantage small or rural practices and pose challenges for QCDRs. The commenter noted that some approved QCDRs do not incorporate value codes in their data collection process, and many specialized QCDRs may not capture the data needed to report cross-cutting measures. The commenter believed the requirement for reporting on cross-cutting measures also makes the 90 percent reporting threshold for QCDRs nearly impossible to meet. Another commenter stated that, until more valid and reliable outcome measures are developed, CMS should keep flexibility of measures throughout and lift the requirements that certain types of measures be reported, such as outcomes-based or cross-cutting measures. Other commenters recommended that specialty-specific measure sets lacking outcome measures be clearly marked as such and also contain notations as to which measures would qualify as high-priority alternatives. Several commenters recommended CMS provide bonus points for these measures rather than require all participants to report on them, and that CMS not require use of any specific measure types in the initial years of the program.

    Response: We appreciate the comments and have examined the policies very carefully. We have modified our proposal for the transition year of MIPS and are finalizing that for the applicable performance period, the MIPS eligible clinician or group would report at least six measures including at least one outcome measure. If an applicable outcome measure is not available, the MIPS eligible clinician or group would be required to report one other high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures) in lieu of an outcome measure. If fewer than six measures apply to the individual MIPS eligible clinician or group, then the MIPS eligible clinician or group would be required to report on each measure that is applicable. We note that generally, we define “applicable” to mean measures relevant to a particular MIPS eligible clinician's services or care rendered.

    We are not finalizing the requirement that one of the measures must be a cross-cutting measure. Although we still believe that the concept of having a common set of measures available to clinicians that they can draw from is important, we understand that not all of these measures are the most meaningful to clinicians and their scope of practice. We do strongly recommend, however, that where appropriate, clinicians continue to perform and submit data on these measures to CMS. Lastly, while we recognize that there are limitations in the current set of available outcome measures, we believe that a strong emphasis on outcome-based measurement is critical to improving the quality of care. Due to these limitations in the available outcome measure set, we are finalizing that a MIPS eligible clinician may select another high priority measure if an outcome measure is not available.
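
    To restate this finalized selection logic procedurally, the following is a minimal sketch, written in Python solely for illustration; the function and field names are hypothetical and are not part of this final rule with comment period. It checks whether a proposed selection of measures satisfies the transition-year criteria described above: report six measures, or all applicable measures if fewer than six apply, including at least one outcome measure or, where no outcome measure is available, one other high priority measure.

        # Illustrative sketch only; names are hypothetical and not part of the rule.
        HIGH_PRIORITY_TYPES = {"outcome", "appropriate use", "patient safety",
                               "efficiency", "patient experience", "care coordination"}

        def selection_meets_quality_criteria(selected, applicable):
            """Check a proposed measure selection against the transition-year
            policy described above (a sketch, not an authoritative specification).
            Each measure is represented as a dict with a "type" field."""
            required_count = min(6, len(applicable))
            if len(selected) < required_count:
                return False
            if any(m["type"] == "outcome" for m in applicable):
                # An applicable outcome measure exists, so one must be selected.
                return any(m["type"] == "outcome" for m in selected)
            # No applicable outcome measure: another high priority measure suffices.
            return any(m["type"] in HIGH_PRIORITY_TYPES for m in selected)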

    Comment: A few commenters recommended that CMS provide a “safe harbor” for reporting on new quality measures with innovative approaches and improvement by allowing entities to register “test measures” which would not be scored but would count as a subset of the six quality measures, with participation credit. In addition, the commenters stated that CMS should provide a transitional period during the first half of 2017 in which MIPS eligible clinicians can receive written confirmation from CMS that their intended measures meet the requirements. The commenters also expressed concern that CMS needs to provide specifications and a scoring methodology for the population health measures to improve transparency.

    Response: As noted in other sections of this final rule with comment period, we are providing a transitional year for the first performance period under the MIPS. We also note that MIPS eligible clinicians successfully reporting an appropriate specialty-specific measure set for a sufficient portion of their beneficiary population will have met all minimum reporting requirements for the quality performance category. We appreciate the commenters' feedback and will incorporate their suggestion as we develop toolkits and educational materials. We refer the commenters to sections II.E.5.b.(6) and II.E.6. of this final rule with comment period for information on population health measures and the MIPS scoring methodology, respectively.

    Comment: Another commenter urged CMS to pursue the following policies in the quality performance category: reconsider the proposal to require reporting on a minimum of six measures, if six measures apply, and instead encourage the use of non-MIPS measures associated with a QCDR and/or allow MIPS eligible clinicians to select measures that directly relate to their clinical specialty and outcomes for their patients; and carefully monitor modifications to the cross-cutting measures list and ensure that at least one cross-cutting measure remains on this list for each category of MIPS eligible clinicians to allow them to remain compliant with the proposed requirements. Alternately, the commenter suggested that CMS could develop an option similar to the outcome measures reporting requirement that would allow the MIPS eligible clinician to report a different type of measure, such as a high priority measure, if a cross-cutting measure does not apply.

    Response: We thank the commenter for their feedback and will take these recommendations into consideration for future rulemaking. We would like to note that there are already a number of outcome and specialty-specific measure sets available for reporting. In addition, the cross-cutting measure requirement is not being finalized.

    Comment: One commenter recommended that CMS develop a pilot program/test within the first MIPS implementation year that identifies a core measure set that allows direct comparison among MIPS eligible clinician performance where commonly applicable metrics allow for such a measure set for specific MIPS eligible clinician specialties. The commenter supported the general flexibility of quality reporting, but was concerned that the existing proposal may not foster true comparisons and performance could vary based on the measures selected to report rather than differences in quality performance. Another commenter encouraged CMS to identify a strategy to assess the most appropriate number of measures and distribution of metrics that MIPS eligible clinicians should be required to report. The commenter believed these analyses would provide necessary information for CMS to make evidence-based decisions with regard to changes to the quality measures reporting requirements to ensure an accurate account of the quality of care individual patients are receiving.

    Response: The majority of the quality measures that are being included in the MIPS program have already been utilized in PQRS for many years. In addition, we have created specialty-specific measure sets that may be utilized by specialists. We do not believe we need a pilot program as these measures have already been tested. The quality measures go through a rigorous evaluation process prior to being accepted in the MIPS program. With respect to the ideal number of measures that should be required per the commenter's suggestion above, we believe that our final submission requirement of six measures is the appropriate number based on our experience under the PQRS, VM and Medicare EHR Incentive Programs. We will, however, take the commenter's suggestion into consideration for future analyses and rulemaking.

    Comment: A few commenters were concerned that using self-reported measures and tying payment to self-reported quality measures will give MIPS eligible clinicians an incentive to select and report measures on which they perform well, especially when they have a large number of measures from which to choose. The commenters were also concerned that MIPS eligible clinicians are not likely to select certain high priority measures because of unfavorable results, such as overuse measures (for example, imaging for low-back pain) or because of the effort required to collect the measure (for example, the CAHPS for MIPS survey). The commenters stated self-reporting would tend to produce compressed ranges for measures that are scored in MIPS, which they believed would mean MIPS eligible clinicians would receive different incentive payments based on very small gradations in performance.

    Other commenters expressed concern that the ability of MIPS eligible clinicians to select their own measures could result in the reliance on low-bar measures that do not drive value-based care. The commenters recommended that CMS encourage MIPS eligible clinicians to report both an outcome measure and a high priority measure representative of their patient populations. Another commenter stated CMS should finalize requirements that provide more explicit standards around the type and caliber of measures that MIPS eligible clinicians and groups must report. The commenter encouraged CMS to utilize variations in weighting and scoring of measures to incentivize greater reporting on clinical and patient-reported outcomes measures. The commenter supported the inclusion of patient-reported outcomes and patient experience measures in MIPS.

    Other commenters recommended re-evaluation of the quality measures required by MIPS. The commenters stated that under the proposed rule, MIPS eligible clinicians participating in MIPS would choose six quality measures to report, one of which must be an outcome measure, and another a cross-cutting measure. The commenters recognized that CMS proposed this approach to reduce administrative burden and allow clinicians the flexibility to choose appropriate measures; however, they were concerned that this approach may not meaningfully advance the quality of care provided to Medicare beneficiaries. The commenters stated that, given the financial incentive, they would expect MIPS eligible clinicians to select those measures on which they are already high-performing and on which they believe they can be at the top of the curve. Thus, clinicians would focus more effort on the few areas that are existing strengths and have limited incentive to drive improvement in a broad set of areas. The commenters recommended that CMS leverage the work of the Core Quality Measure Collaborative—which brought together stakeholders from America's Health Insurance Plans (AHIP), CMS and the National Quality Forum (NQF), as well as national physician organizations, employers and consumers—and select core sets of measures for each specialty to report. The commenters also proposed bonus points for clinicians who choose to report innovative, outcome-based measures in addition to the core set.

    Response: We agree with the commenters that there are certain challenges in using self-reported measures rather than a core or common measure set that all clinicians would be required to submit. We also appreciate the emphasis placed on outcome measurement. We do, however, believe that there are certain challenges in creating a core or common measure set for clinicians, as compared to other settings, due to the various practice and specialty types that clinicians may practice under. However, we have included the measures from the core measure sets that were developed by the Core Quality Measure Collaborative in the MIPS measure set and several of the specialty-specific measure sets. Lastly, we note that, as indicated in other sections of this final rule with comment period, the first performance period of MIPS is a transitional year. We will take these comments into consideration for future rulemaking and will continue to monitor whether clinicians select only low-bar measures or measures on which their performance is already high. We will address any changes to policies based on these monitoring activities through future rulemaking.

    Comment: A few commenters recommended that CMS remove the requirement that specialists reporting under the specialty-specific measure set report a cross-cutting measure because they believed that the list of cross-cutting measures was not truly applicable to all specialties. For example, the commenters stated that emergency medicine MIPS eligible clinicians have only one proposed cross-cutting measure that is somewhat relevant: PQRS #317: High Blood Pressure Screening and Follow-Up. The commenters stated that the measure is problematic for emergency medicine because follow-up is required for any patient outside of the “normal” range. While the measure does include an exclusion for patients in “emergent or urgent situations where time is of the essence and to delay treatment would jeopardize the patient's health status,” the commenters noted that a substantial number of ED patients are inadvertently included in the universe addressed by this measure, requiring burdensome documentation, follow-up, and even unnecessary downstream medical care.

    Response: We appreciate the comments and have examined the policies very carefully. As discussed above, we have modified our proposal for the transition year of MIPS. We are not requiring a cross-cutting measure but rather are finalizing that for the applicable performance period, the MIPS eligible clinician or group would report at least six measures including at least one outcome measure. If an applicable outcome measure is not available, the MIPS eligible clinician or group would be required to report one other high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures) in lieu of an outcome measure. If fewer than six measures apply to the individual MIPS eligible clinician or group, then the MIPS eligible clinician or group would be required to report on each measure that is applicable or may report more measures that are applicable. We note that generally, we define “applicable” to mean measures relevant to a particular MIPS eligible clinician's services or care rendered.

    Comment: Some commenters urged CMS to take the opportunity to promote a new set of cross-cutting quality measures—including measures generally applicable to patients with rare, chronic, and multiple chronic conditions—that would incorporate a patient-centered perspective, adding a critical patient voice to quality measurement.

    Response: We appreciate the suggestion and will take it into consideration in the future.

    Comment: Other commenters supported the reporting criteria for cross-cutting measures and outcome measures. The commenters hoped that CMS would work with specialties that do not fall under the American Board of Medical Specialties' board certification to develop specialty-specific measure sets for clinicians such as physical therapists, as this may help clinicians who are less familiar with the program report successfully. Additionally, the commenters supported the flexibility of reporting either the specialty-specific measure set or the six measures.

    Response: We appreciate the commenters' support. We welcome suggestions for additional specialty-specific measure sets in the future.

    Comment: Another commenter urged CMS to use the recommendations of the National Academy of Medicine's (NAM) 2015 Vital Signs report, available at http://www.nationalacademies.org/hmd/Reports/2015/Vital-Signs-Core-Metrics.aspx, to identify the highest priority measures for development and implementation in the MIPS program.

    Response: When we identified high priority measures, we sought feedback from numerous stakeholders and we encourage commenters to submit any specific suggestions for future consideration. We will take this specific suggestion into consideration for future rulemaking.

    Comment: A few commenters recommended that CMS provide clarification on how proposed specialty-specific measure sets will be scored, given many have fewer than the required number of measures and do not include a required outcome or high priority measure. The commenters were also concerned that many sets may not be applicable for sub-specialists, and many specialties do not have a proposed specialty-specific measure set. In addition, the commenters stated that the number of applicable measures in a specialty-specific measure set may be reduced based on the proposed submission mechanism. For example, the commenters sought clarification as to whether a urologist who reports the one eCQM in the set (PQRS 50: Urinary Incontinence: Assessment of Presence or Absence Plan of Care for Urinary Incontinence in Women) is only accountable for the one eCQM and not accountable for reporting on an outcome or high priority measure.

    Response: We would like to explain that if fewer than six measures apply to the MIPS eligible clinician or group, the MIPS eligible clinician or group would be required to report on each applicable measure or may report more measures that are applicable. We note that generally, we define “applicable” to mean measures relevant to a particular MIPS eligible clinician's services or care rendered. Additionally, groups that report on a specialty-specific measure set that has fewer than six measures would only need to report the measures within that specialty-specific measure set. Please see section II.E.6. of this final rule with comment period for more on scoring. Finally, we would like to explain that if a MIPS eligible clinician or group reports via a data submission method that has only one applicable measure reportable via that method, the MIPS eligible clinician or group is only responsible for the measure that is applicable via that method. Alternatively, if a MIPS eligible clinician or group reports via a data submission method that does not have any measures reportable via that method, the MIPS eligible clinician or group must choose a data submission method that has one or more applicable measures. Given the potential for gaming in this situation, we will monitor whether MIPS eligible clinicians appear to be actively selecting submission mechanisms and measure sets with few applicable measures; we will address any changes to policies based on these monitoring activities through future rulemaking. We will also seek to expand the availability of measures for reporting via all submission methods to the extent feasible.
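
    As an illustration of the submission-method logic described above, the following is a minimal sketch, written in Python solely for illustration; the function and field names are hypothetical and are not part of this final rule with comment period. It shows how the set of measures a clinician is responsible for depends on which applicable measures are reportable via the chosen data submission method.

        # Illustrative sketch only; measure and mechanism names are hypothetical.
        def measures_required_for_mechanism(mechanism, applicable_measures):
            """Return the measures a clinician is responsible for when reporting
            via a single data submission method. If no applicable measure is
            reportable via the chosen method, a different method must be chosen.
            Each measure is a dict with a "reportable_via" collection of methods."""
            reportable = [m for m in applicable_measures
                          if mechanism in m["reportable_via"]]
            if not reportable:
                raise ValueError(
                    "No applicable measures are reportable via %r; choose a "
                    "submission method with one or more applicable measures."
                    % mechanism)
            # The clinician is only responsible for the applicable measures that
            # are reportable via the chosen method (for example, a single eCQM).
            return reportable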

    Comment: Some commenters recommended that CMS include in the specialty-specific measure sets those cross-cutting measures that are most applicable to the specialty, rather than maintaining a separate list of cross-cutting measures and requiring MIPS eligible clinicians to refer to two lists. The commenters recommended that a geriatric measure set be created that will encourage geriatrician reporting and measures directly associated with improvements in care for the elderly.

    Response: We agree with the commenter and although we are not finalizing the requirement that MIPS eligible clinicians must report on a cross-cutting measure, we do still believe these measures add value. Therefore, we have incorporated the appropriate cross-cutting measures into the specialty-specific measure sets located in Table E of the Appendix in this final rule with comment period.

    Comment: Another commenter noted that there may be MIPS eligible clinicians whose services overlap in one or more specialty areas, and that flexibility is therefore necessary, yet believed that, in order for payers and patients to have a clear comparison, the ability to distinguish clinicians on like metrics is critical. Thus, with regard to specialty-specific measure sets, the commenter recommended that MIPS eligible clinicians be required to select a minimum number of quality measures from within their appropriate specialty-specific measure set. The commenter recommended that CMS continue to explore specialty-specific measure sets for additional specialty and subspecialty areas in order to enhance and refine meaningful comparisons over time.

    Response: If a clinician has a specialty set, by submitting all of the measures in that set (which may be fewer than six), they will potentially achieve a maximum quality score, depending on their performance. If the measure set has fewer than six measures, and the clinician reports all the measures in that set, there is not a requirement for further reporting. We thank the commenters for the suggestion and intend to work with the specialty societies to further develop specialty measure sets, specifically those that would be applicable for subspecialists.

    Comment: Some commenters urged CMS to hold all MIPS eligible clinician types to the six measure requirement, suggesting that a sub-specialty could select from the broader specialty list to reach six measures, or, if necessary, report cross-cutting measures to achieve six measures if they have insufficient specialty-specific measure sets available to them.

    Response: We appreciate the commenters' suggestion and agree that it is important for clinicians to submit a sufficient number of measures. However, we are concerned that some subspecialists do not currently have a sufficient number of applicable measures to reach our six measure requirement; we are working with specialty societies to ensure that all specialists soon have access to a sufficient number of measures. To assure that these subspecialists report a sufficient number of measures in the interim period, we are finalizing our proposal to allow subspecialists to submit a specialty-specific measure set in full in lieu of meeting the six measure minimum requirement.

    Comment: One commenter urged CMS to be more transparent about how high priority designations are determined. The commenter stated that since bonus points are factored into the determination of a domain or a measure's priority, it is vital that CMS consider recommendations from measure stewards and QCDR entities for this determination.

    Response: We define high priority measures as outcome, patient experience, patient safety, care coordination, cost, and appropriate use measures. These measures are designated and identified in rulemaking, based on their NQF designation or, if the measures are not NQF endorsed, based on their NQS domain designation or measure description as defined by the measure owners, stewards, and clinical experts. We welcome commenters' feedback on high priority measure determinations in the future.

    Comment: Some commenters stated that measure applicability should be determined by analyzing the MIPS eligible clinician's claims, not just their specialty designation.

    Response: We agree and intend to determine measure applicability based on claims data whenever possible. Absent claims data, we would use other identifying factors such as specialty designation. Generally, we define “applicable” to mean measures relevant to a particular MIPS eligible clinician's services or care rendered. When we initially proposed the specialty-specific measure sets, we factored into consideration both of the elements the commenter suggested.

    Comment: A few commenters encouraged CMS to emphasize that specialty-specific measure sets are intended as a helpful tool as opposed to a required set of submissions. The commenters believed it is simpler for all MIPS eligible clinicians to report on six measures when they have eligible patients within the denominators of the approved measures so that everyone meets the same standards. Another commenter recommended that specialists and sub-specialists be required to meet the same program expectations including reporting on six measures. The commenter stated that if six measures are not available in the sub-specialty list, the MIPS eligible clinicians would need to report at the higher specialty level or report cross-cutting measures until they reach a total of six measures. If CMS allows a lower number of quality measures for a particular specialty group in MIPS, the lower number of measures for reporting should be available to all MIPS eligible clinicians. If specialists and sub-specialists do not report on six measures, the commenter recommended that they should receive a score of zero for measures not reported.

    Response: We agree with the commenters that specialty-specific measure sets are intended to be helpful to MIPS eligible clinicians under the MIPS program. While it may be simpler to require the same six measures of all MIPS eligible clinicians, we do not believe it is appropriate to hold MIPS eligible clinicians accountable for measures that are not within the scope of their practice. The specialty-specific measure sets include measures from the comprehensive list of MIPS quality measures available (Reference Table A). Measures within the specialty-specific measure set should be more relevant for the specialists and should be easier to identify and report. If a MIPS eligible clinician does not believe the measures within a specialty-specific measure set are relevant for their practice, they can choose any six measures within the comprehensive quality measure list. If a specialty-specific measure set is further broken out by sub-specialty, we recommend that the MIPS eligible clinician submit measures within the sub-specialty set. We have made every effort to ensure the sub-specialty set includes the relevant measures for the particular sub-specialty.

    Comment: Another commenter approved of the proposed specialty-specific measures for the MIPS quality category and encouraged the creation of more specialty-specific measure sets. The commenter stated that currently, many specialty-specific measure sets have fewer than six measures, and many also do not have any outcome based measures. In addition, some of the specialty-specific measure sets have few or no EHR submission-eligible measures. The commenter urged CMS to prioritize e-specified measures currently listed as registry-only to enable clinicians to make maximum use of their CEHRT for reporting. The commenter also requested that CMS clarify MIPS eligible clinicians' obligations for quality measure reporting when no single reporting method will meet the reporting requirements even though the full specialty-specific measure set would do so.

    Response: We thank the commenter for their support of specialty-specific measure sets. It is our intent to adopt more specialty-specific measure sets over time, especially as new measures become available. Although not all of the specialty-specific measure sets contain six measures, they all contain an outcome or other high priority measure. When a MIPS eligible clinician chooses to report a specialty-specific measure set, they are only required to report what is in the set and what is reportable through the selected data submission mechanism. We note, in rare situations where a MIPS eligible clinician submits data for a performance category via multiple submission mechanisms (for example, submits data for the quality performance category through a registry and claims), we would score all the options (such as scoring the quality performance category with data from a registry, and also scoring the quality performance category with data from claims) and use the highest performance category score for the MIPS eligible clinician's final score. We would not, however, combine the submission mechanisms to calculate an aggregated performance category score. We refer readers to section II.E.6. of this final rule with comment period for more information on scoring. Lastly, we agree with the commenter that eCQMs are a priority, and we intend to continue adopting additional measures of this type in the future. We intend to continue leveraging MIPS eligible clinicians' use of CEHRT for quality reporting requirements to the greatest extent possible.
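
    The multiple-submission-mechanism scoring described above can be summarized with the following minimal sketch, written in Python solely for illustration; the function names are hypothetical and are not part of this final rule with comment period. Each mechanism's submission is scored separately and the highest quality performance category score is used, with no aggregation across mechanisms.

        # Illustrative sketch only; the per-mechanism scoring function is a placeholder.
        def quality_category_score(submissions_by_mechanism, score_one_mechanism):
            """When quality data arrive via more than one submission mechanism
            (for example, registry and claims), score each mechanism's submission
            on its own and use the highest category score; submissions are never
            combined into one aggregated score."""
            per_mechanism_scores = {
                mechanism: score_one_mechanism(measures)
                for mechanism, measures in submissions_by_mechanism.items()
            }
            return max(per_mechanism_scores.values())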

    Comment: A few commenters supported CMS' focus on outcome measures, and specifically supported CMS' proposal to require MIPS eligible clinicians to report on at least one outcome measure and to allow MIPS eligible clinicians to earn two additional points for each additional outcome measure reported because the commenters stated that outcome measures provide more meaning and value for Medicare beneficiaries and are critical for delivering high quality care. Several other commenters commended CMS' plan to increase the requirements for reporting outcome measures over the next several years through future rulemaking, as more outcome measures become available. These commenters recommended that CMS consider accelerating the implementation of additional outcome or high quality measures, and expressed support for additional bonus points awarded to MIPS eligible clinicians for reporting additional outcome or high quality measures. One commenter agreed that outcome measures should be emphasized in the future, as these are the true indicators of healthcare services reflected directly on a patient's health status. Another commenter recommended that CMS develop both clinical outcome measures (for example, survival for patients with cancer and other life-threatening conditions) and patient-reported outcome measures (for example, quality of life, functional status, and patient experience) to support this aim.

    Response: We thank the commenters and agree; we believe outcome measures are critical to quality improvement. We will take the commenters' suggestions into consideration for future rulemaking.

    Comment: Other commenters stated that if quality is based on good outcomes, MIPS eligible clinicians may be deterred from treating the sickest patients since doing so will negatively impact their numbers, thereby resulting in sick patients not receiving timely and proper treatment and increasing national medical expenditures.

    Response: We have confidence in the clinician community and its commitment to their patients' overall wellbeing. To date, there is no evidence from the PQRS, VM, or Medicare EHR Incentive Program for EPs that clinicians have been deterred from seeing all types of patients seeking their care. We also note that many outcomes measures are risk-adjusted to account for beneficiary severity prior to treatment. We do recognize this issue is a concern for some stakeholders and will monitor MIPS eligible clinicians' performance under the MIPS for this unintended consequence.

    Comment: A few commenters recommended that CMS set limits on some of the measures that may be reported by multiple MIPS eligible clinicians with respect to one patient. For example, many beneficiaries will see multiple MIPS eligible clinicians. Hypothetically, the commenters believed it would not be appropriate for the body mass index (BMI) measure to be reported by a patient's primary care physician, cardiologist, endocrinologist, ophthalmologist, and rheumatologist in the same year.

    Response: We thank the commenters for the suggestion and will take it into consideration in the future.

    Comment: Another commenter opposed CMS' overall policy to attempt to assess patient experience and satisfaction under the quality performance category of MIPS with outcomes-based measures. The commenter stated that these measures and surveys include factors that may be outside the control of the MIPS eligible clinician, such as hospital nursing and staff behavior and performance and wait times in a hospital setting due to inadequate staffing levels and physical plant design. Also, patient satisfaction, while important, does not always correlate with better clinical outcomes and may even conflict with clinically indicated treatments. Another commenter believed patients should be asked to report outcomes across a continuum of care domains including treatment benefit, side effects, symptom management, care coordination, shared decision-making, advanced care planning, and affordability.

    Response: We respectfully disagree and believe that outcomes-based measures and high priority measures are critical to measuring health care quality. We thank the commenter also for their thoughts on patient satisfaction surveys, but we believe it is appropriate to directly measure and incentivize MIPS eligible clinicians' performance on patient experience surveys, which uniquely present patients with the opportunity to assess the care that they received. There is evidence that performance on patient experience surveys is positively correlated with better patient outcomes. We intend to continue working with stakeholders to improve available measures.

    Comment: Other commenters stated the measures in the physical medicine specialty-specific measure set are all process measures and that the only way one can report on six out of seven measures is via a registry. Although the measures could be applicable to some Physical Medicine and Rehabilitation (PM&R) physicians, the commenters believed they are not applicable to all PM&R MIPS eligible clinicians. The commenters urged CMS to remove the specialty-specific measure set and work with American Academy of Physical Medicine and Rehabilitation (AAPM&R) on identifying better measurements for their specialty.

    Response: If MIPS eligible clinicians find that the measures within a specialty-specific measure set are not applicable to their practice, they may report any of the measures that are available under the MIPS program. We believe that the physical medicine specialty-specific measure set is applicable to PM&R MIPS eligible clinicians and that this policy appropriately accommodates those MIPS eligible clinicians that are unable to report the full specialty-specific measure set. Although all measures within the specialty-specific measure set may not be applicable to all PM&R clinicians, we believe that most PM&R clinicians will be able to report the measures within the set because they are relevant for most within the specialty. If a MIPS eligible clinician finds that they are unable to report the specialty-specific measure set, they are able to report any six measures from the larger quality measure set. We will continue to work with specialty societies to adjust the specialty-specific measure sets as more relevant measures become available. We also welcome specific feedback from MIPS eligible clinicians who are specialists on what quality measures would be most appropriate for their specialty-specific measure set.

    Comment: Another commenter supported the reporting of specialty-specific measure sets as meeting the full requirements in the quality performance category because specialty MIPS eligible clinicians struggle to meet many other measures outside their domain and should not be penalized for not going outside their specialty by having to find additional measures to report that may not be appropriate for the care they provide.

    Response: We thank the commenter for their support. We note that the only additional measure that would be calculated as part of a MIPS eligible clinician's quality score is the population-based measure, reflected in Table B of the Appendix in this final rule with comment period, which does not require any data submission and only applies to groups of 16 or more. For more information on this measure we refer readers to the Global and Population-Based Measures section below.

    Comment: Several commenters suggested that quality measurement and reporting must measure things that are clinically meaningful and should emphasize outcomes over process measures. The commenters added that quality measurement should also incorporate patient experience measures and patient-reported outcomes measures (PROMs), and quality measures should be disaggregated by race/ethnicity, gender, gender identity, sexual orientation, age, and disability status. Another commenter recommended that patient-reported outcome measures (PROMs) be given greater weight in the MIPS program. Other commenters encouraged the inclusion of medication adherence measures beyond those currently included under the quality performance category.

    Response: We agree with commenters that quality measurement must capture clinically meaningful topics. We further agree that patient-reported measures are important and we have included a number of PROMs in MIPS. We intend to expand this portfolio in the future. We will consider the commenters' suggestions on disaggregating quality measures by demographic factors, particularly in the context of risk adjustment, as well as their suggestions on medication adherence measures and increased weighting, in the future.

    Comment: A few commenters recommended that CMS provide an incentive to MIPS eligible clinicians to submit eCQMs and not deter MIPS eligible clinicians from using CEHRT for eCQMs. The commenters recommended that CMS provide an exemption from reporting a cross-cutting measure for MIPS eligible clinicians who use CEHRT/health IT vendors to report eCQMs for the quality performance category.

    Response: We thank the commenters for these suggestions. We refer the commenter to section II.E.6. of this final rule with comment period where we describe our policies for bonus points available for using CEHRT in a data submission pathway to report patient demographic and clinical data electronically from end to end. An exemption from reporting a cross-cutting measure is not necessary considering our decision not to finalize a requirement to report a cross-cutting measure.

    Comment: One commenter urged CMS to maintain greater control of the reporting under the Quality Payment Program and to provide more thoroughly defined measurements. They also urged CMS to incorporate more reporting requirements that would assess the actual and overall quality of care being provided to beneficiaries.

    Response: We thank the commenter for the feedback. We have structured the MIPS program to rely on the MIPS eligible clinician's choice of specialty, which remains in the clinician's control, and which we expect reflects the services that they provide, as well as the quality measures that those MIPS eligible clinicians select. The quality measures go through a rigorous review process to assure they are thoroughly defined measurements as discussed in section II.E.5.c. of this final rule with comment period. We believe the MIPS program is designed to assess actual and overall quality of care being provided to the beneficiaries.

    Comment: Other commenters stated their small staff does not have time to spend on reporting quality metrics.

    Response: It has been our intention to adopt measures that are as minimally burdensome as possible. We have also adopted several other policies for smaller practices in order to ensure that MIPS does not impose significant burdens on them. We encourage the commenters to contact the Quality Payment Program Service Center for assistance reporting applicable measures.

    Comment: One commenter believed that some flexibility in reporting requirements under quality would be helpful, especially for small practices, but encouraged CMS to balance the need for flexibility against the need for consistent reporting across MIPS eligible clinicians. Another commenter stated that CMS should allow small practices to report a smaller number of quality measures, at least for the initial few years.

    Response: We thank the commenter. We have attempted to be flexible with the measures that we have adopted under MIPS. It has been our intention to adopt measures that are as minimally burdensome as possible. We have also adopted several other policies for smaller practices in order to ensure that MIPS does not impose significant burdens on them.

    Comment: Another commenter supported narrowing the requirements for improving quality measurement and reporting for MIPS based on data collected as a natural part of clinical workflow using health information technology.

    Response: We will take this comment into account in the future. We believe that electronic quality measurement is an important facet of quality programs more generally.

    Comment: One commenter supported allowing flexibility for MIPS eligible clinicians to choose measures that are relevant to their type of care.

    Response: We thank the commenter and agree.

    Comment: Other commenters encouraged CMS and Health Resources and Services Administration (HRSA) to align the quality measurement sections of MIPS and the Uniform Data System so that FQHCs can submit one set of quality data one time for both purposes.

    Response: We thank the commenters for their suggestion and will examine this option for future rulemaking. Please refer to section II.E.1.d. of this final rule with comment period for more information regarding FQHCs.

    Comment: Some commenters requested that CMS clarify the proposal to eliminate the need to track and report duplicative quality measures by modifying its proposal to require that if quality is reported in a manner acceptable under MIPS or an APM, it would not need to be reported under the Medicaid EHR Incentive Program. The commenters were concerned the programs could potentially cause the same conflict CMS specifically noted MIPS and APMs were intended to correct.

    Response: We thank the commenters and have worked to eliminate duplicative measures between MIPS and other programs where possible. We intend to continue to align MIPS and the Medicaid EHR Incentive Program to the greatest extent possible. As we have noted in section II.E.5.g. of this final rule with comment period, the requirements for the Medicaid EHR Incentive Program for EPs were not impacted by the MACRA. There is a requirement to submit CQMs to the state as part of a successful attestation for the Medicaid EHR Incentive Program. While the MIPS objectives for the advancing care information performance category are aligned to some extent with the Stage 3 objectives in the Medicaid EHR Incentive Program, they are two distinct programs, and reporting will stay separate.

    Comment: Another commenter stated that while the quality section discusses outcome measures, many of the measures are traditional, clinic-based process measures. The commenter was unclear how such measures will drive transformation.

    Response: We currently have approximately 64 outcome measures available from which MIPS eligible clinicians may choose. We do agree that more work needs to occur on outcome measure development to impact the quality of care provided. As additional outcome measures are developed, we will incorporate these for future rulemaking.

    Comment: One commenter agreed that moving to more “high value” measures or “measures that matter” is important. However, the commenter recommended that neurologists be able to select measures that have the greatest value in driving improvement for their patients. The commenter stated that measures considered “high value” may differ by specialty or patient population.

    Response: We appreciate the commenter's support. We recommend that all MIPS eligible clinicians select measures that have the greatest value in driving improvement for their patients.

    Comment: Another commenter suggested that MIPS eligible clinicians who report different quality measures from the prior year should be requested to provide the rationale for the change. The commenter suggested that CMS request the MIPS eligible clinician report data for the same categories as the prior year to preclude the chance that a MIPS eligible clinician may be seeking to find loopholes and flaws in the system.

    Response: We appreciate the suggestion and will take it into consideration for future years of the program. We will also monitor whether clinicians appear to be switching measures to improve their scores, rather than due to changing medical goals or patient populations. We will report back on the results of our monitoring in future rulemaking.

    Comment: A few commenters requested that MIPS eligible clinicians reporting quality using third party submission mechanisms not certified to all available measures only be required to report from the list of measures to which the system is certified. That is, the commenters requested an exemption from standard reporting requirements similar to the flexibility built in for others who lack reportable measures.

    Response: We respectfully disagree that an exemption is necessary in the circumstance the commenters describe. MIPS eligible clinicians choosing to report data via a third party intermediary should select an entity from the list of qualified vendors that is able to report on the quality metrics that the MIPS eligible clinician believes are most appropriate for their practice and that they wish to report to CMS.

    Comment: Some commenters encouraged CMS to further evaluate the use of more than one measure, which must be an outcome measure or a high priority measure, when more than one measure exists and each measures a distinct and different health outcome; and if an applicable outcome measure is not available, another high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures) in lieu of an outcome measure should be considered. Thus, the commenters recommended that CMS consider the requirement of two (or more) outcome or high quality measures, as a component of the final score, when available.

    Response: Thank you for the feedback, and we will consider this in future rulemaking. We also want to refer this commenter to section II.E.6.a.(2) of this final rule with comment period where we describe the bonus points available for high priority measures and section II.E.5.b.(3)(a) of this final rule with comment period where we describe our interest in increasing the emphasis on outcome measures moving forward.

    Comment: Other commenters urged CMS to continue to include process measures in quality reporting programs while testing relevant outcomes measures for future inclusion. Specifically, the commenters were concerned that a small number of orthopedic surgery outcomes measures currently exist and believed that more time is required to develop relevant outcomes measures before CMS emphasizes outcomes for specialty clinicians.

    Response: MIPS eligible clinicians are required to submit data on an outcome measure if available, but if not, another high priority measure may be selected. We agree with the commenter that additional outcome measure development needs to occur.

    Comment: A few commenters wanted to know if there would be any impacts (beyond loss of points) if a MIPS eligible clinician chooses to not report any outcome or high priority condition measures.

    Response: The commenters are correct that the only impacts for not submitting outcomes or high priority measures would be a loss of points under the quality performance category.

    Comment: Several commenters recommended that CMS reinstate measures group reporting as an option under MIPS. The commenters stated that by removing this option CMS has skewed reporting in favor of large group practices, the majority of whom report through the GPRO web interface, which allows for and requires reporting on a sampling of patients. One commenter noted that while measures groups are not the most popular reporting option in PQRS, MIPS eligible clinicians choosing this option have had a high success rate and that measures included in a measures group undergo a deliberate process that ensures a comprehensive picture of care is measured. One commenter indicated that many small oncology practices use the measures group reporting mechanism, which is less burdensome and a meaningful mechanism for quality reporting for these practices. Another commenter requested that small practices be able to continue reporting measures groups on 20 patients. Some commenters stated that by doing away with the measures group quality reporting option, CMS has actually made this category more difficult for many clinicians to meet, particularly those in small practices. Another commenter requested CMS retain the asthma and sinusitis measures groups as currently included in PQRS.

    Response: We did not propose the measures group option under MIPS because, as commenters noted, very few clinicians utilized this option under PQRS. Under the MIPS, we substituted what we believe to be a more relevant selection of measures through specialty-specific measure sets. Adopting this policy also enables a more complete picture of quality for specialty practices. We do not believe the specialty-specific measure sets will pose an undue burden on small practices, and they may make it easier for eligible clinicians, including those in small practices, to identify quality measures to report to MIPS. We will continue to assess this policy for enhancements in future rulemaking.

    Comment: Other commenters stated the quality requirements are ill-conceived and unworkable and the severity of illness calculations unfair (for example, if MIPS eligible clinicians do a good job preventing complications, they are punished with a low score).

    Response: We believe that the quality measures we are adopting for the MIPS program will appropriately incentivize high quality care, including care that prevents medical complications. However, we will monitor the MIPS program's effects on clinical practices carefully.

    Comment: Some commenters supported CMS' proposals to require MIPS eligible clinicians to report only six measures and to remove the NQS domain requirement for selecting measures as compared to the PQRS, but opposed CMS' proposed requirement that MIPS eligible clinicians report on outcomes and high priority measures. The commenters recommended that CMS incentivize outcomes based measures by assigning them more weight within MIPS. Additionally, the commenters were concerned that many specialties do not have access to outcome measures. The commenters opposed requiring patient experience and satisfaction measures for MIPS eligible clinicians, noting that evaluating patient experience is best done using confidential feedback to clinicians. The commenters would support CMS' use of the patient satisfaction surveys under the improvement activities performance category if performance was based only on administering a survey, evaluating results, and addressing the findings of the survey. The commenters encouraged CMS to give funding preference for development of measures to those specialties with limited measures. Another commenter recommended requiring the inclusion of patient centered measures that reflect the values and interests of patients, including patient reported outcome measures, patient experience of care, cross cutting measures, and clinical outcome measures.

    Response: We thank the commenters for their support. However, we do believe that outcome measures and high priority measures are critical to measuring health care quality, and are designated high priority for that reason. We thank the commenter also for their thoughts on patient satisfaction surveys, but we believe it is appropriate to measure and incentivize directly MIPS eligible clinicians' performance on patient experience surveys. We intend to continue working with stakeholders to improve available measures. We would like to explain for commenters that the CAHPS for MIPS survey is included under the quality performance category, as well as the improvement activities performance category as a high-weighted activity in the Patient Safety and Practice Assessment subcategory noted in Table H of the Appendix in this final rule with comment period.

    Comment: The commenters requested further clarification on the number of measures required when specialty-specific measure sets are used. For example, if a non-patient facing MIPS eligible clinician submits all measures from a specialty-specific measure set (in Table E of the Appendix), would they still be allowed to submit other measures applicable to their practice, such as cross-cutting measures? In a scenario where a MIPS eligible clinician submits all three available measures in a specialty-specific measure set and also submits one cross-cutting measure not listed in the specialty-specific measure set (therefore submitting a total of four measures), will the MIPS eligible clinician be penalized for not submitting six total measures? The commenters requested that the final rule with comment period include specific requirements on the number of measures required for MIPS eligible clinicians who elect to submit measures from a specialty-specific measure set.

    Response: We would like to explain that our final policy for the quality performance category is that, for the applicable continuous 90-day period during the performance period, or longer if the MIPS eligible clinician chooses, the MIPS eligible clinician or group will report one specialty-specific measure set, or the measure set defined at the subspecialty level, if applicable. If the measure set contains fewer than six measures, MIPS eligible clinicians will be required to report all available measures within the set. If the measure set contains six or more measures, MIPS eligible clinicians will be required to report at least six measures within the set. We note that generally, we define “applicable” to mean measures relevant to a particular MIPS eligible clinician's services or care rendered.

    Regardless of the number of measures that are contained in the measure set, MIPS eligible clinicians reporting on a measure set will be required to report at least one outcome measure or, if no outcome measures are available in the measure set, report another high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures) within the measure set in lieu of an outcome measure. For the commenter's specific questions, there is no penalty or harm in submitting more measures than required. Rather, this can benefit the clinician because if more measures than the six required are submitted, we would score all measures and use only those that have the highest performance, which can result in a MIPS eligible clinician receiving a higher score. Lastly, we note that since we are not finalizing the requirement of cross-cutting measures in the quality performance category, there is no difference in requirements for patient facing and non-patient facing clinicians in the quality performance category.
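
    As a brief illustration of the statement above that submitting more measures than required cannot lower a score, the following is a minimal sketch, written in Python solely for illustration; the function name is hypothetical and is not part of this final rule with comment period. All submitted measures are scored and only the six highest-performing measure scores count toward the quality performance category.

        # Illustrative sketch only; measure scores are assumed to be already computed.
        def best_six_measure_scores(measure_scores):
            """Keep only the six highest-performing measure scores, so submitting
            more than the required six measures can only help, never hurt."""
            return sorted(measure_scores, reverse=True)[:6]

        # Example: eight measures submitted; only the top six contribute.
        # best_six_measure_scores([10, 9, 8, 7, 3, 2, 6, 5]) -> [10, 9, 8, 7, 6, 5]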

    Comment: One commenter supported the flexibility provided for non-patient facing MIPS eligible clinicians; however, the commenter suggested that CMS continue to keep in mind that most measures across the MIPS components apply to patient-facing encounters. The commenter recommended that CMS work with medical specialty and subspecialty groups to determine how to best expand the availability of clinically relevant performance measures for non-patient facing MIPS clinicians, or ways to reweight MIPS scoring to provide these clinicians with credit for activities that more accurately align with their role in the treatment of a patient.

    Response: We appreciate the commenter's suggestions and will take them into consideration in future rulemaking. We would like to explain that we consistently work closely with specialty societies and intend to continue engaging with them on future MIPS policies.

    Comment: Several commenters supported the decision from CMS to reduce the number of mandatory quality measures for reporting from nine to six, and appreciated steps to clarify reporting requirements when fewer than six applicable measures are available. Some commenters believed that the best approach when directly applicable measures are not available is to minimize the number of measures required for reporting and focus instead on the measures that do apply to the clinician and patient. Additionally, these commenters stated there is value in the stratification of data across different identifiers, particularly for some gastrointestinal (GI) services with differential impacts across patient groups; however, the lack of existing data related to factors such as ethnicity and gender makes data stratification particularly difficult and often irrelevant. The commenters requested that CMS engage in an open dialogue once recommendations are received from the ASPE if they believe it necessary to move forward with proposals impacting GI care.

    Response: We appreciate the commenters' support. We maintain an open dialogue and appreciate feedback from all federal agencies and stakeholders. We will closely examine the ASPE studies when they are available and incorporate findings as feasible and appropriate through future rulemaking. We look forward to working with stakeholders in this process.

    Comment: One commenter supported the goals for meaningful measurement but indicated that there are challenges to implementing policies to achieve them, including that the proposed quality performance category is overly complex, largely unattainable, and lacks meaningful measures, transparency, and appropriate risk adjustment. The commenter recommended further collaboration with specialty societies to create policies that will engage surgeons, including surgeons who were unable to successfully participate in PQRS.

    Response: We appreciate the comment. As stated above, we consistently work closely with specialty societies to solicit measures and we intend to continue engaging with them on future MIPS policies.

    Comment: Some commenters requested that CMS allow flexibility around outcome measure reporting requirements and allow suitable alternatives where necessary, as many stakeholders still face barriers in the development and use of meaningful outcome measures. The commenters discouraged CMS from assigning extra weight to outcome measures, as there are no standard reporting and risk-adjustment methodologies, which may unfairly disadvantage some MIPS eligible clinicians and advantage others. The commenters supported comprehensive measurement and consideration of measures in the IOM/NQS Quality Domains.

    Response: We appreciate the commenter's suggestions and will take them into consideration in the future. However, to address the commenter's concern regarding an unfair disadvantage for some eligible clinicians as it relates to the availability and reporting of outcome measures, we have provided reporting flexibility for those eligible clinicians who do not have access to outcome measures by allowing them to report on high priority measures instead. Since high priority measures span all eligible clinician specialties, we do not believe some eligible clinicians will have a reporting advantage over others.

    Comment: Another commenter asked CMS to clarify whether a measure type listed as an ‘intermediate outcome’ would count equally as an ‘outcome’ measure. Another commenter recommended that intermediate outcome measures should only be counted as outcome measures if there is a strong evidence base supporting the intermediate outcome as a valid predictor of outcomes that matter to patients.

    Response: We consider measures listed as an “intermediate outcome” measure to be outcome measures. In addition, it is important to note that if an applicable outcome measure is not available, a MIPS eligible clinician or group would be required to report one other high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures) in lieu of an outcome measure.

    Comment: Another commenter requested clarity on whether a clinician is evaluated on the same six quality measures as the group they report in. The commenter wanted to know what happens if one of those group measures is not applicable to the clinician.

    Response: MIPS eligible clinicians that report as part of a group are evaluated on the measures that are reported by the group, whether or not the group's measures are specifically applicable to the individual MIPS eligible clinician. In addition, MIPS eligible clinicians who form a group, but have elected to report as individuals, will each be evaluated only on the measures they themselves report.

    Comment: Some commenters were concerned about group reporting of quality measures in multispecialty practices. Thus, the commenters recommended that CMS allow MIPS eligible clinicians in multispecialty practices to report on measures that are meaningful to their specialty, that each MIPS eligible clinician in a group be assessed individually, and that all scores of the MIPS eligible clinicians reporting under the same TIN be aggregated to achieve one score for the entire practice.

    Response: We appreciate the commenter's suggestions. Based on the example provided, clinicians in this situation may find reporting as individual MIPS eligible clinicians preferable to reporting as a group. We will take these recommendations into consideration for future rulemaking.

    Comment: One commenter recommended a cap of nine measures in the future if CMS believes that allowing more than the required six is needed.

    Response: We appreciate the commenter's suggestion. We will take this into consideration in the future.

    Comment: A few commenters applauded CMS's extensive efforts to include specialists in the quality component of MIPS. The commenters recommended that CMS determine which specialties do not have enough measures to select at least six that are not topped out and exempt those specialists from the quality category until enough measures become available. Some commenters were pleased that CMS recognized that very specialized MIPS eligible clinicians may not meet all six applicable measures.

    Response: We appreciate the commenters' support. MIPS eligible clinicians who do not have enough measures to select at least six measures should choose all of the measures that do apply to their practice and report them. We will conduct a data validation process to determine whether MIPS eligible clinicians have reported all measures applicable to them if the MIPS eligible clinician does not report the minimum required six measures. As an alternative, the MIPS eligible clinician may choose a specialty-specific measure set. If the measure set contains fewer than six measures, MIPS eligible clinicians will be required to report all available measures within the set. If the measure set contains six or more measures, MIPS eligible clinicians will be required to report at least six measures within the set. Regardless of the number of measures that are contained in the measure set, MIPS eligible clinicians reporting on a measure set will be required to report at least one outcome measure or, if no outcome measures are available in the measure set, report another high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures) within the measure set in lieu of an outcome measure. Generally, we define “applicable” to mean measures relevant to a particular MIPS eligible clinician's services or care rendered. MIPS eligible clinicians who do not have six individual measures available to them should select their appropriate specialty-specific measure set; for the majority of MIPS eligible clinicians, choosing a specialty-specific measure set provides protections because we have pre-determined which measures are most applicable based on the MIPS eligible clinician's specialty.

    We do intend to provide toolkits and educational materials to MIPS eligible clinicians that will reduce the burden of determining which measures are applicable. We do not believe, however, that it is appropriate to exempt specialties from the quality performance category if they have fewer than six measures or topped out measures. Rather, these specialties are still able to report on quality measures, just a lesser number of measures. We refer readers to section II.E.6. of this final rule with comment period for the discussion of our authority under section 1848(q)(5)(F) of the Act to reweight category weights when there are insufficient measures applicable and available.

    Comment: A few commenters requested clarification on whether the measures are separate for each individual performance category, such as quality and advancing care information, or whether one measure can apply to more than one category.

    Response: Each measure and activity applies only to the performance category in which it is reported. However, some actions might contribute to separately specified activities; for example, reporting a quality measure through a QCDR may make it easier for the MIPS eligible clinician to perform an improvement activity that also involves use of a QCDR. We also note that the CAHPS for MIPS survey receives credit in both the quality and improvement activities performance categories. In addition, certain improvement activities may count for bonus points in the advancing care information performance category if the MIPS eligible clinician uses CEHRT.

    Comment: One commenter stated that while CMS has provided CPT codes for consideration for PQRS in the past, it has not provided the type of CPT codes to be used for MIPS assessment.

    Response: The CPT codes that have historically been available under the PQRS program will be made available for the MIPS as part of the detailed measure specifications which will be posted prior to the performance period at QualityPaymentProgram.cms.gov. More information on the detailed measure specifications is available in section II.E.5.c. of this final rule with comment period.

    Comment: The commenter requested clarification as to whether a MIPS eligible clinician is obligated to report on measures if the procedures are performed in a surgery center or hospital.

    Response: Yes. In instances where those procedures or services are billed under Medicare Part B, or under another payer whose services fall under the measure's denominator, MIPS eligible clinicians are required to report on measures for which denominator-eligible patients are designated within the measure specification.

    Comment: In addressing CMS' question of whether to require one cross-cutting measure and one outcome measure, or one cross-cutting measure and one high priority measure (which is inclusive of the outcome measures), one commenter recommended that CMS allow MIPS eligible clinicians to select one cross-cutting and one high priority measure. The commenter noted that this approach gives MIPS eligible clinicians more flexibility and gives CMS time to develop additional outcome measures to choose from.

    Response: We appreciate the comment. However, we believe it is important to include the requirement to report at least one outcome measure if one is available, given the importance of outcome measures in assessing health care quality. As noted above, we are finalizing our proposal to require one outcome measure, or if an outcome measure is not available, another high priority measure. We are not finalizing our proposal to require one cross-cutting measure.

    Comment: Some commenters did not support CMS' proposal to require the reporting of outcome/high priority measures in order to achieve the maximum quality performance category points. The commenters recommended that instead, CMS reward high priority measures with bonus points, but cap the bonus points CMS Web Interface users can earn. The commenters recommended their approach because more large practices can use the CMS Web Interface option, which includes several high priority measures, and this could favor these MIPS eligible clinicians over those in smaller practices. Another commenter expressed concern about CMS's requirement to report on high priority measures, including outcome-based and cross-cutting measures, and stated that those standards are currently counterproductive due to the inherent difficulty of tracking outcomes in cancer care, in part because meaningful outcomes often require years of follow-up and because sample sizes of cancer patients may be very small at the clinician level. The commenter further noted that the vast majority of oncology measures existing today are process-based rather than outcomes-based, necessitating an adjustment period for outcomes-based measures in cancer care. The commenter recommended that CMS clearly state in the final rule with comment period that the outcome measure reporting requirement does not apply to oncology clinicians until more meaningful quality measures are developed for oncology care.

    Response: We would like to explain that our proposals do include bonus points (subject to a cap) for reporting on high priority measures; we refer readers to section II.E.6.a.(2)(e) of this final rule with comment period. We believe that outcome measures and high priority measures are critical to measuring health care quality, and they are designated high priority for that reason. We intend to continue working with stakeholders to improve available measures.

    Comment: Other commenters believed that in order to allow and encourage MIPS eligible clinicians to report the highest quality data available, which includes outcomes measures in EHR and registry data, and support innovation, CMS should allow MIPS eligible clinicians to report at least one of the six required quality measures under MIPS through a QCDR. Some commenters strongly encouraged CMS to move toward a streamlined set of high priority measures that align incentives and actions of organizations across the health care system. The commenters also recommended that CMS give NQF-endorsed measures priority.

    Response: We thank the commenters for their feedback and intend to finalize our proposal that one of the six measures a MIPS eligible clinician must report on is an outcome measure. We also understand the concerns that not all MIPS eligible clinicians may have a high priority measure available to them. However, we do believe that all MIPS eligible clinicians regardless of their specialty have a high priority measure available for reporting. Therefore, we intend to finalize that if a MIPS eligible clinician does not have an outcome measure available, they are required to report on a high priority measure. In addition, a QCDR is one of the data submission mechanisms available to a MIPS eligible clinician to report measures.

    Comment: A few commenters encouraged CMS to provide additional time for small or mid-sized practices to transition to CEHRT and QCDRs by ensuring that there are a sufficient number of measures available for claims-based reporting, particularly in the quality performance category, in the first several performance years under MIPS.

    Response: We appreciate the commenter's concerns, and while we do have the goal of ultimately moving away from the claims-based submission mechanism, we recognize that this mechanism must be maintained while electronic-based submission mechanisms continue to develop and mature.

    Comment: One commenter wanted to ensure that the proposed reporting does not detract from the patient-clinician clinical visit because it is crucial to the patient-clinician relationship.

    Response: We agree that the patient-clinician encounter is paramount. Reporting can be captured through the EHR or through a registry at a later time.

    Comment: One commenter stated that the proposed guidelines cannot be applied to all of the specialties and sub-specialties uniformly.

    Response: We are assuming that the commenter is referring to the proposed data submission requirements for the quality performance category. We are providing flexibility on the submission mechanisms and selection of measures by MIPS eligible clinicians because we understand that varying specialties have differing quality measurement needs for their practices.

    Comment: Some commenters were concerned about lowering the threshold on measures and thought the measure criteria were insufficient. One commenter was also concerned that there was no requirement for reporting on a core set of measures for every primary care physician (PCP) and specialist.

    Response: We respectfully disagree with the commenter. Drawing from our experiences under the sunsetting programs, we believe that it is more important to ensure that clinicians are measured on quality measures that are meaningful to their scope of practice, as well as quality measures that emphasize outcome measurement or other high priority areas, rather than on a large quantity of measures.

    Comment: One commenter asked for clarification on whether six non-MIPS measures (QCDR) can be selected by a MIPS eligible clinician and be used to meet the reporting criteria.

    Response: Yes, this is allowable for reporting using QCDRs as long as one of the selected measures is an outcome measure, or another high priority measure if an outcome measure is unavailable.

    Comment: Some commenters urged CMS to ensure the proposed validation process to review and validate a MIPS eligible clinician's inability to report on the quality performance category requirements—similar to the Measure-Applicability Validation (MAV) process—is transparent. The commenters urged consultation with clinician stakeholders as CMS develops the new validation process, expressing concerns related to the MAV, including the lack of clarity in how the MAV actually functions. Another commenter recommended CMS develop a validation process that will review and validate a MIPS eligible clinician's or group's ability to report on a sufficient number of quality measures and a specialty-specific sample set—with a sufficient sample size—including both a cross-cutting and outcome measure. One commenter requested a timeframe for the validation process so they may prepare.

    Response: We agree with the commenters and intend to provide as much transparency into the data validation process for the quality performance category under MIPS as technically feasible. The validation process will be part of the quality performance category scoring calculations and not a separate process as the MAV was under PQRS. We refer readers to section II.E.6.a.(2) of this final rule with comment period for more information related to the quality performance scoring process. Lastly, we are working to provide additional toolkits and educational materials to MIPS eligible clinicians prior to the performance period that will ease the burden of identifying which measures are applicable to MIPS eligible clinicians. If a MIPS eligible clinician requires assistance, they may contact the Quality Payment Program Service Center.

    Comment: Another commenter recommended delegating to each medical specialty the task of choosing three highly desirable outcomes to focus on each year and rewarding those outcomes to promote quality, in lieu of using 6-8 dimensions of meaningful use performance combined with numerous quality indicators.

    Response: We agree with the commenter that focusing on outcomes and outcome measurement is important, as we have indicated in this final rule with comment period. We are, however, required by statute to measure MIPS eligible clinicians' performance across four performance categories, of which quality and advancing care information are a part.

    Comment: One commenter stated that claims data is misleading and may corrupt attempts to analyze information with “big data” approaches, because a significant proportion of claims data only captures the first four codes that a clinician enters into the medical record. The commenter further noted that many clinicians document numerous diagnoses in the medical record, unaware that some vendors only accept the first four diagnoses and that some EHR systems arrange diagnoses in alphabetical order regardless of how the clinician entered them. The commenter suggested CMS mandate no restriction on the number of diagnoses entered into the 1500 Health Insurance claim form, or at least mandate the National Uniform Claim Committee (NUCC) recommendation to expand the maximum number of diagnoses from four to eight.

    Response: Although the commenter's recommendation is outside the scope of the proposed rule, we note that we do not believe that this approach compromises either data mining or claims processing.

    Comment: One commenter requested CMS provide guidance regarding the treatment of measures that assess services that are not Medicare reimbursable, such as postpartum contraception. The commenter recommended that CMS adopt the measures in the Medicaid Adult and Child Core Sets that have been specified and endorsed at the clinician level.

    Response: We agree that working to align MIPS quality measures with Medicaid is important and intend to develop a “Medicaid measure set” that will be based on the existing Medicaid Adult Core Set (https://www.medicaid.gov/Medicaid-CHIP-Program-Information/By-Topics/Quality-of-Care/Downloads/Medicaid-Adult-Core-Set-Manual.pdf). Further, we believe it is important to have MIPS quality measure alignment with private payers and have engaged a Core Quality Measure Collaborative (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/QualityMeasures/Core-Measures.html) to develop measures to be used both by private payers and the MIPS program. Our strategic interest is a future state where measurement in multi-payer systems, Medicaid, and Medicare can be seamlessly integrated into CMS programs.

    After consideration of the comments regarding our proposal on submission criteria for quality measures excluding CMS Web Interface and CAHPS for MIPS, we are finalizing at § 414.1335(a)(1) that individual MIPS eligible clinicians submitting data via claims and individual MIPS eligible clinicians and groups submitting via all mechanisms (excluding CMS Web Interface, and for CAHPS for MIPS survey, CMS-approved survey vendors) are required to meet the following submission criteria. For the applicable period during the performance period as discussed in section II.E.5.b.(3) of this final rule with comment period, the MIPS eligible clinician or group will report at least six measures including at least one outcome measure. If an applicable outcome measure is not available, the MIPS eligible clinician or group will be required to report one other high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures) in lieu of an outcome measure. If fewer than six measures apply to the individual MIPS eligible clinician or group, then the MIPS eligible clinician or group will be required to report on each measure that is applicable. We define “applicable” to mean measures relevant to a particular MIPS eligible clinician's services or care rendered.

    Alternatively, for the applicable performance period in 2017, the MIPS eligible clinician or group will report one specialty-specific measure set, or the measure set defined at the subspecialty level, if applicable. If the measure set contains fewer than six measures, MIPS eligible clinicians will be required to report all available measures within the set. If the measure set contains six or more measures, MIPS eligible clinicians will be required to report at least six measures within the set. Regardless of the number of measures that are contained in the measure set, MIPS eligible clinicians reporting on a measure set will be required to report at least one outcome measure or, if no outcome measures are available in the measure set, report another high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures) within the measure set in lieu of an outcome measure. MIPS eligible clinicians who choose to report measures in addition to those contained in the specialty-specific measure set will not be penalized for doing so, provided such MIPS eligible clinicians follow all requirements discussed here.
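
    As a minimal, illustrative sketch of the submission criteria described above (this is not regulatory text; the measure records, field names, and the meets_quality_submission_criteria helper are hypothetical), the following Python checks a submission against the "six measures, or all applicable measures if fewer than six apply, including an outcome measure or another high priority measure" rule.

HIGH_PRIORITY_TYPES = {"outcome", "appropriate use", "patient safety",
                       "efficiency", "patient experience", "care coordination"}

def meets_quality_submission_criteria(submitted, available):
    """submitted, available: lists of dicts like {"id": ..., "type": ...} (hypothetical)."""
    required = min(6, len(available))  # all applicable measures if fewer than six apply
    if len(submitted) < required:
        return False
    if any(m["type"] == "outcome" for m in available):
        # An applicable outcome measure exists, so at least one must be reported.
        return any(m["type"] == "outcome" for m in submitted)
    # No outcome measure is available; another high priority measure is required instead.
    return any(m["type"] in HIGH_PRIORITY_TYPES for m in submitted)

# Example: a measure set of three measures with no outcome measure available.
available = [{"id": "M1", "type": "process"},
             {"id": "M2", "type": "patient safety"},
             {"id": "M3", "type": "process"}]
print(meets_quality_submission_criteria(available, available))  # True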

    In accordance with § 414.1335(a)(1)(ii), MIPS eligible clinicians and groups will select their measures from either the list of all MIPS measures in Table A of the Appendix in this final rule with comment period, or a specialty-specific measure set in Table E of the Appendix in this final rule with comment period. Note that some specialty-specific measure sets include measures grouped by subspecialty; in these cases, the measure set is defined at the subspecialty level.

    We also are finalizing the definition of a high priority measure at § 414.1305 to mean an outcome, appropriate use, patient safety, efficiency, patient experience, or care coordination quality measure. These measures are identified in Table A of the Appendix in this final rule with comment period.

    We are not finalizing our proposal to require MIPS eligible clinicians and groups to report a cross-cutting measure because we believe we should provide flexibility during the transition year of the program as MIPS eligible clinicians adjust to MIPS. However, we are seeking comments on adding a requirement to our modified proposal that patient-facing MIPS eligible clinicians would be required to report at least one cross-cutting measure in addition to the high priority measure requirement for further consideration for MIPS year 2 and beyond. We are interested in feedback on how we could construct a cross-cutting measure requirement that would be most meaningful to MIPS eligible clinicians from different specialties and that would have the greatest impact on improving the health of populations.

    (ii) Submission Criteria for Quality Measures for Groups Reporting via the CMS Web Interface

    We proposed at § 414.1335 the following criteria for the submission of data on quality measures by registered groups of 25 or more MIPS eligible clinicians who want to report via the CMS Web Interface. For the applicable 12-month performance period, we proposed that the group would be required to report on all measures included in the CMS Web Interface completely, accurately, and timely by populating data fields for the first 248 consecutively ranked and assigned Medicare beneficiaries in the order in which they appear in the group's sample for each module/measure. If the pool of eligible assigned beneficiaries is less than 248, then the group would report on 100 percent of assigned beneficiaries. A group would be required to report on at least one measure for which there is Medicare patient data. We did not propose any modifications to this reporting process. Groups reporting via the CMS Web Interface are required to report on all of the measures in the set. Any measures not reported would be considered zero performance for that measure in our scoring algorithm.
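
    As an illustration of the sampling rule just described (this sketch is not regulatory text; the beneficiary identifiers and the web_interface_reporting_sample helper are hypothetical), the reporting denominator is the first 248 consecutively ranked assigned beneficiaries per module or measure, or 100 percent of assigned beneficiaries when fewer than 248 are assigned.

def web_interface_reporting_sample(ranked_assigned_beneficiaries, cap=248):
    """Return the beneficiaries a group reports on, in CMS-assigned rank order."""
    return ranked_assigned_beneficiaries[:cap]

large_group = ["bene-%04d" % i for i in range(1, 301)]   # 300 assigned (hypothetical)
small_group = ["bene-%04d" % i for i in range(1, 101)]   # 100 assigned (hypothetical)
print(len(web_interface_reporting_sample(large_group)))  # 248
print(len(web_interface_reporting_sample(small_group)))  # 100, i.e., 100 percent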

    Lastly, from our experience using the CMS Web Interface under prior Medicare programs, we are aware that groups may register for this mechanism and have zero Medicare patients assigned and sampled to them. We note that should a group have no assigned patients, then the group, or individual MIPS eligible clinicians within the group, would need to select another mechanism to submit data to MIPS. If a group does not typically see Medicare patients for which the CMS Web Interface measures are applicable, or if the group does not have adequate billing history for Medicare patients to be used for assignment and sampling of Medicare patients into the CMS Web Interface, we advise the group to participate in the MIPS via another reporting mechanism.

    As discussed in the CY 2016 PFS final rule with comment period (80 FR 71144), beginning with the 2017 PQRS payment adjustment, the PQRS aligned with the VM's beneficiary attribution methodology for purposes of assigning patients for groups that registered to participate in the PQRS Group Reporting Option (GPRO) using the CMS Web Interface (formerly referred to as the GPRO Web Interface). For certain quality and cost measures, the VM uses a two-step attribution process to associate beneficiaries with TINs during the period in which performance is assessed. This process attributes a beneficiary to the TIN that bills the plurality of primary care services for that beneficiary (79 FR 67960-67964). We proposed to continue to align the 2019 CMS Web Interface beneficiary assignment methodology with the measures that used to be in the VM: The population quality measures discussed in the proposed rule (81 FR 28188) and total per capita cost for all attributed beneficiaries discussed in the proposed rule (81 FR 28188). As MIPS is a different program, we proposed to modify the attribution process to update the definition of primary care services and to adapt the attribution to different identifiers used in MIPS. These changes are discussed in the proposed rule (81 FR 28188). We requested comments on these proposals.

    The following is a summary of the comments we received regarding our proposal on submission criteria for quality measures for groups reporting via the CMS Web Interface.

    Comment: Some commenters supported the general direction and intent of the proposed quality performance category, and particularly supported CMS's alignment between the CMS Web Interface measure set and the quality measure reporting and performance requirements for Medicare Shared Savings Program Track 1 organizations. Another commenter supported national alignment of quality measures.

    Response: We thank the commenters for their support.

    Comment: Another commenter stated that CMS should either remove or modify some of the quality measures used as part of the CMS Web Interface, as existing criteria make them difficult to achieve for large group practices and may not reflect current recommendations. The commenter provided examples of three specific measures and why they present challenges in practice for large groups using the CMS Web Interface. For example, the commenter stated that the depression remission measure (MH-1) measures the number of patients with major depression, defined as an initial PHQ-9 score > 9, who demonstrate remission at 12 months, defined as a PHQ-9 score < 5. The requirement for PHQ-9 use in evaluating patients, combined with a follow-up evaluation, is problematic for many large group practices. The measure must be recorded for 248 patients, a very difficult bar for large multispecialty group practices that refer patients with a PHQ-9 score > 9 to psychiatrists for treatment and follow-up. The measure seems to be designed for group practices that do not have this type of referral pattern to psychiatrists.

    Another problematic example the commenter provided was the medication safety measure (CARE 3). The commenter stated that the score includes all medications the patient is taking, including over-the-counter and herbal medications, and therefore relies on the patient recalling and accurately reporting this information. For each medication on the list, clinicians must include the dose, route (for example, by mouth or by injection), and frequency. This measure is difficult to meet, even if medication lists are substantially complete. According to the specifications, if a multi-vitamin is listed but “by mouth” is not recorded, then the encounter(s) is scored as non-performance. Finally, the commenter believed that the blood pressure measure must be updated to reflect recent national consensus about appropriate blood pressure targets. The commenter stated that a national consensus has developed that blood pressure targets should vary by age and diagnosis. However, the measure requires a strict policy of controlling to less than 140/90 for hypertensive patients, regardless of age, and 120/80 for screening purposes. These levels are not consistent with current medical evidence or opinion, such as the recommendations of the Eighth Joint National Committee.

    Response: We do not believe it is appropriate to remove or modify the measures, including the three mentioned by the commenter, used in CMS Web Interface reporting. On the three specific measures the commenter listed, we have been working with the multi-stakeholder workgroup for the Core Quality Measure Collaborative (CQMC). These measures are included in the CQMC measure set for ACOs and certified patient-centered medical homes. To align with the CQMC set, CMS has included these measures within the CMS Web Interface. We believe all measures within the CMS Web Interface are appropriate for the data submission method and level of reporting.

    Comment: A few commenters recommended, to ensure comparability across reporting mechanisms, that CMS should allow groups reporting through the CMS Web Interface to select which six quality measures will be used to calculate the quality performance score. Currently, the CMS Web Interface requires 18 measures, so if a group performs highly on some CMS Web Interface measures but not others, their overall quality score will be lowered.

    Response: We thank the commenters for this feedback, but we believe that requiring groups to report all measures included in the CMS Web Interface provides us a more complete picture of quality at a given group practice. All of the measures reported on the CMS Web Interface will be used to determine an overall quality performance category score.

    Comment: Other commenters expressed that CMS Web Interface reporting should be coupled with useful reports for MIPS eligible clinicians, including timely and actionable claims data, in order to support value-based decisions.

    Response: We do not believe it to be operationally feasible to provide claims data as part of a report for the transition year of the MIPS; however, we will work to provide as much information to MIPS eligible clinicians as possible and will consider this request for future rulemaking.

    Comment: Some commenters suggested that CMS identify a minimum number of beneficiaries to report on through CMS Web Interface based on the number of MIPS eligible clinicians in the group.

    Response: We appreciate the comment. In past years under the PQRS program, there were different beneficiary sample sizes based on the size of the group, specifically a sample of 411 patients for groups of 100 or more and a sample of 248 patients for groups of 25-99. However, after additional data analysis, we found that the differing sample sizes had no impact on a group's performance, so we modified the sample to 248 patients in the CY 2015 final rule (79 FR 67789). We do not believe that issuing different sample sizes by group size reduces burden; rather, we believe that a larger sample size is more burdensome.

    Comment: Another commenter had concerns about the statistical accuracy of the requirement for reporting the first 248 patients. The commenter had particular concerns about regional and seasonal bias for larger groups because performance measures for large groups would be based on data from patients in the first few weeks of the year.

    Response: The methodology for sampling and assignment for the CMS Web Interface has been tested extensively, and we believe that the methodology appropriately controls for the biases the commenter suggests. However, we will monitor performance data reported via the CMS Web Interface.

    Comment: Some commenters recommended that in addition to the proposed CMS Web Interface used to submit quality measures, a transactional Electronic Data Interchange (EDI) capability be developed to achieve CMS' goal of permitting multiple methods for submission. The commenters believed multiple technologies have benefits in different situations for various stakeholders. The commenters also suggested that the CMS Web Interface should also become usable by Medicaid, other payers and purchasers on a voluntary basis.

    Response: We thank the commenters for these suggestions and will take them under consideration in the future as we continue implementing the MIPS program.

    Comment: Some commenters expressed concern with the proposal to limit reporting through the CAHPS for MIPS survey and the CMS Web Interface systems to groups of 25 clinicians or more. The commenters expressed that small practices would benefit greatly from the use of the CMS Web Interface, and limiting this option is a further burden upon solo and small practices who often do not have the resources to purchase more advanced health IT systems with more sophisticated reporting capabilities. The commenters recommended that CMS look at options that ensure solo and small practices have the same opportunities to succeed as larger groups. Another commenter proposed that CMS consider opening the CAHPS for MIPS survey reporting program to all patient-facing MIPS eligible clinicians with the exception of certain specialties such as psychiatry, addiction medicine, emergency medicine, critical care, and hospitalists.

    Response: The CAHPS for MIPS survey is available for all MIPS groups. The CMS Web Interface has been limited to groups of 25 or greater because smaller groups or individual MIPS eligible clinicians have not been able to meet the data submission requirements on the sample of the Medicare Part B patients we provide.

    Comment: One commenter recommended that a transactional Electronic Data Interchange (EDI) capability be developed so as to achieve CMS' goal of permitting multiple methods for submission. The commenter believed multiple technologies have benefits in different situations for various stakeholders and the industry should do the hard work now to support flexible technologies. The commenter also suggested that CMS Web Interface should also become usable by Medicaid, other payers and purchasers on a voluntary basis.

    Response: We appreciate the suggestions and will take them into consideration in future rulemaking.

    After consideration of the comments regarding our proposal on submission criteria for quality measures for groups reporting via the CMS Web Interface, we are finalizing the policies as proposed. Specifically, we are finalizing at § 414.1335(a)(2) the following criteria for the submission of data on quality measures by registered groups of 25 or more MIPS eligible clinicians who want to report via the CMS Web Interface. For the applicable 12-month performance period, the group will be required to report on all measures included in the CMS Web Interface completely, accurately, and timely by populating data fields for the first 248 consecutively ranked and assigned Medicare beneficiaries in the order in which they appear in the group's sample for each module or measure. If the sample of eligible assigned beneficiaries is less than 248, then the group will report on 100 percent of assigned beneficiaries. A group will be required to report on at least one measure for which there is Medicare patient data. Groups reporting via the CMS Web Interface are required to report on all of the measures in the set. Any measures not reported will be considered zero performance for that measure in our scoring algorithm.

    We are finalizing our proposal to continue to align the 2019 CMS Web Interface beneficiary assignment methodology with the measures that used to be in the VM: The population quality measure discussed in the proposed rule (81 FR 28188) and total per capita cost for all attributed beneficiaries discussed in the proposed rule (81 FR 28196). We are also finalizing our proposal to modify the attribution process to update the definition of primary care services and to adapt the attribution to different identifiers used in MIPS. These changes are discussed in the proposed rule (81 FR 28196).
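
    The following is a heavily simplified, illustrative sketch of plurality-based attribution; the actual two-step methodology cross-referenced above (79 FR 67960 through 67964) includes additional steps and tie-breaking rules not shown here, and the TINs, claims, and attribute_beneficiary helper are hypothetical.

from collections import Counter

def attribute_beneficiary(primary_care_service_tins):
    """primary_care_service_tins: one TIN per primary care service billed for a
    single beneficiary during the period in which performance is assessed."""
    if not primary_care_service_tins:
        return None  # beneficiary is not attributed
    counts = Counter(primary_care_service_tins)
    # Attribute the beneficiary to the TIN billing the plurality of primary care services.
    return counts.most_common(1)[0][0]

print(attribute_beneficiary(["TIN-A", "TIN-B", "TIN-A", "TIN-C", "TIN-A"]))  # TIN-A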

    (iii) Performance Criteria for Quality Measures for Groups Electing to Report Consumer Assessment of Healthcare Providers and Systems (CAHPS) for MIPS Survey

    The CAHPS for MIPS survey (formerly known as the CAHPS for PQRS survey) consists of the core CAHPS Clinician & Group Survey developed by the Agency for Healthcare Research and Quality (AHRQ), plus additional survey questions to meet CMS's information and program needs. For more information on the CAHPS for MIPS survey, please see the explanation of the CAHPS for PQRS survey in the CY 2016 PFS final rule with comment period (80 FR 71142 through 71143). While we anticipate that the CAHPS for MIPS survey will closely align with the CAHPS for PQRS survey, we may explore the possibility of updating the CAHPS for MIPS survey under MIPS; specifically, we may not finalize all proposed Summary Survey Measures (SSMs).

    We proposed to allow registered groups to voluntarily elect to participate in the CAHPS for MIPS survey. Specifically, we proposed at § 414.1335 the following criteria for the submission of data on the CAHPS for MIPS survey by registered groups via a CMS-approved survey vendor: For the applicable 12-month performance period, the group must have the CAHPS for MIPS survey reported on its behalf by a CMS-approved survey vendor. In addition, the group will need to use another submission mechanism (that is, qualified registries, QCDRs, EHRs, etc.) to complete their quality data submission. The CAHPS for MIPS survey would count as one cross-cutting and/or a patient experience measure, and the group would be required to submit at least five other measures through one other data submission mechanism. A group may report any five measures within MIPS plus the CAHPS for MIPS survey to achieve the six measures threshold.

    The administration of the CAHPS for MIPS survey would contain a 6-month look-back period. In previous years the CAHPS for PQRS survey was administered from November to February of the reporting year. We proposed to retain the same survey administration period for the CAHPS for MIPS survey. Groups that voluntarily elect to participate in the CAHPS for MIPS survey would bear the cost of contracting with a CMS-approved survey vendor to administer the CAHPS for MIPS survey on the group's behalf, just as groups do now for the CAHPS for PQRS survey.

    Under current provisions of PQRS, the CAHPS for PQRS survey is required for groups of 100 or more eligible clinicians. Although we are not requiring groups to participate in the CAHPS for MIPS survey, we do still believe patient experience is important, and we therefore proposed a scoring incentive for those groups who report the CAHPS for MIPS survey. As described in the proposed rule (81 FR 28188), we proposed that groups electing to report the CAHPS for MIPS survey would be required to register for the reporting of data. Because we believe assessing patients' experiences as they interact with the health care system is important, our proposed scoring methodology would give bonus points for reporting CAHPS data (or other patient experience measures). Please refer to the proposed rule (81 FR 28247) for further details. We solicited comments on whether the CAHPS for MIPS survey should be required for groups of 100 or more MIPS eligible clinicians or whether it should be voluntary.

    Currently, the CAHPS for PQRS beneficiary sample is based on Medicare claims data. Therefore, only Medicare beneficiaries can be selected to participate in the CAHPS for PQRS survey. In future years of the MIPS program, we may consider expanding the potential patient experience measures to all payers, so that Medicare and non-Medicare patients can be included in the CAHPS for MIPS survey sample. We solicited comments on criteria that would ensure comparable samples and on these proposals.

    The following is a summary of the comments we received regarding our proposed performance criteria for quality measures for groups electing to report the CAHPS for MIPS survey.

    Comment: One commenter recommended that CMS should require MIPS eligible clinicians in groups to report a standard patient experience measure.

    Response: We are not requiring groups to report the CAHPS for MIPS survey for the transition year of MIPS. We are aware that requiring a standard patient experience measure, such as the CAHPS for MIPS survey, can be cost-prohibitive for small groups. However, we do believe patient experience measures are important and are providing bonus points for the CAHPS for MIPS survey, as discussed in section II.E.6. of this final rule with comment period.

    Comment: Some commenters requested clarification about whether the CAHPS for MIPS survey would be required for groups of 100+ MIPS eligible clinicians, as it was under PQRS. Some commenters opposed mandatory CAHPS for MIPS survey reporting under MIPS and recommended that CMS allow reporting on the CAHPS for MIPS survey to be voluntary. Another commenter opposed making the CAHPS for MIPS survey a requirement for large groups because it is a survey tool to measure outpatient practices and is not useful for many facility based practices. The commenter stated that there will be significant confusion as large groups try to determine which parts of the survey apply to them.

    Response: We would like to explain that the CAHPS for MIPS survey is optional for MIPS eligible clinician groups. We recognize that, while the CAHPS for MIPS survey is a standard tool used by large organizations, there are challenges with the survey for certain specialty clinicians and clinicians who work in certain settings.

    Comment: A few commenters urged CMS to include the CAHPS for MIPS survey, as well as other non-CAHPS experience of care and patient reported outcomes measures and surveys (including those that are offered by QCDRs), under the improvement activities performance category rather than the quality performance category. One commenter stated that the CAHPS for MIPS survey should be counted as a high-weighted improvement activity. This commenter stated that this would simplify the program and ensure that specialists have the same opportunity as primary care clinicians to earn the maximum number of points in the quality performance category. The commenter was concerned that if CMS does not revise this proposal, specialists will be at a disadvantage as the CAHPS for MIPS survey is less relevant for specialists, especially surgeons, anesthesiologists, pathologists and radiologists. If CMS moves forward with the proposed quality requirements and bonus points for reporting on a patient experience measure, the commenter requested that CMS clarify whether the CAHPS for MIPS survey would automatically provide two bonus points or would count as the one required high priority measure that all MIPS eligible clinicians must report before bonus points are counted. The commenters recommended ensuring specialists have the same opportunity as primary care practices. Other commenters urged CMS to work closely with the transplant community and the American College of Surgeons to adopt a patient experience of care measure that is relevant to all surgeons, including transplant surgeons, and that adequately takes into account the team-based nature of transplantation and other complex surgery.

    Response: We would like to explain for commenters that the CAHPS for MIPS survey is included under the quality performance category, as well as the improvement activities performance category as a high-weighted activity in the Patient Safety and Practice Assessment subcategory noted in Table H of the Appendix in this final rule with comment period. In addition, the CAHPS for MIPS survey measures complement other measures of care quality by generating information about aspects of care quality for which patients are the best or only source of information, such as the degree to which care is respectful and responsive to their needs (for example, “patient-centered”); therefore, these measures are well suited to the quality performance category. We do recognize that certain specialties such as surgeons, anesthesiologists, pathologists and radiologists that do not provide primary care services may not have patients to whom the CAHPS for MIPS survey could be issued and would therefore not be able to receive any bonus points for patient experience. However, these specialties do have the ability to earn bonus points for other high priority measures. We agree with the commenters that ensuring all specialties have the ability to earn full points for the quality performance category is important. We believe that we have constructed the quality category in a manner where this is true.

    Comment: Other commenters encouraged CMS to require all MIPS eligible clinicians in groups to report the CAHPS for MIPS survey. One commenter suggested that these CAHPS for MIPS survey measures go beyond the core survey and include questions from the Cultural Competence supplement and the Health IT supplement. Another commenter was very concerned that the CAHPS for MIPS survey was optional under MIPS. They stated that the CAHPS for MIPS survey is the only standardized, validated tool available in the public domain to capture information about the experience of care from a patient's perspective. The commenter requested that CMS finalize this as a mandatory reporting requirement for groups of 100 or more. In addition, the commenter further requested that CMS consider developing an easier-to-administer version in the future. Another commenter stated that CMS should encourage the development and use of PROMs. Other commenters requested that CMS reconsider mandating participation for practice groups of a certain size, such as 50 MIPS eligible clinicians.

    Response: We do not believe making the CAHPS for MIPS survey mandatory to be an appropriate policy at this time, but we will consider doing so for future MIPS performance years. Rather, as we have indicated at the outset of this rule, we are removing as many barriers to participation as possible to encourage clinicians to participate in the MIPS. We are mindful of the reporting burden and expense associated with patient reported measures such as the CAHPS for MIPS survey and do not want to add a cost or reporting burden to clinicians who prefer to choose other measures. We also believe that by providing bonus points for patient experience surveys, we are still able to emphasize that patient experience is an important component of quality measurement and improvement. We also appreciate the request to consider developing an easier-to-administer version and will take it into consideration in the future.

    Comment: Other commenters urged CMS to continue to exclude pathologists, as non-patient facing clinicians, from selection as the “focal providers” about whom the CAHPS for MIPS survey asks.

    Response: We thank the commenters for their feedback on non-patient facing MIPS eligible clinicians and the CAHPS for MIPS survey. We agree that non-patient facing MIPS eligible clinicians should not be considered the clinician named in the survey who provided the beneficiary with the majority of the primary care services delivered by the group practice, that is, the “focal provider” for that survey.

    Comment: Several commenters supported CMS' proposal to no longer require that larger practices report on patient experience, explaining that, historically, this measure was not intended to target emergency clinicians, yet larger emergency practices were still required to go through the time and expense of contracting with a certified survey vendor before finding out whether they were exempt from the requirement. Another commenter supported voluntary reporting of the CAHPS for MIPS survey. The commenter stated the CAHPS for MIPS survey is too long and generates low response rates. The commenter urged CMS to work with MIPS eligible clinicians, AHRQ, CAHPS stewards, and other stakeholders to develop means for obtaining patient experience data. A few commenters stated that many MIPS eligible clinicians survey their patients' satisfaction in a variety of patient care areas, and these surveys are often electronic and allow timely submission of feedback that is valuable to the overall patient care experience. The commenters suggested that CMS consider allowing MIPS eligible clinicians to survey their patients through alternative surveys.

    Response: We thank the commenters for this feedback and acknowledge that there may be other potential survey methods. However, the CAHPS for MIPS survey is the only survey instrument with robust supporting evidence demonstrating a beneficial impact on quality. For a program of this scale that also has payment implications, we believe the CAHPS for MIPS survey is the most appropriate survey to utilize.

    Comment: Some commenters stated that small practices cannot afford to pay vendors to obtain the CAHPS for MIPS survey information for bonus points.

    Response: We would like to explain that the CAHPS for MIPS survey is optional for all MIPS eligible clinician groups, and that there are other ways to obtain bonus points, such as by reporting additional outcome measures.

    Comment: Other commenters encouraged CMS to invest resources in evolving CAHPS instruments—or creating new tools—to be more meaningful to consumers, more efficient and less costly to administer and collect, and better able to supply clinicians with real-time feedback for practice improvement. The commenters would like this to include continuing research and implementation efforts to combine patient experience survey scores with narrative questions.

    Response: We will take this under advisement for future rulemaking.

    Comment: Another commenter supported the proposal to use all-payer data for quality measures and patient experience surveys. The commenter supported stratification by demographic characteristics to the degree that such stratification is feasible and appropriate and thinks CMS should make this data publicly available at the individual and practice level.

    Response: We thank the commenter for their support. We will take this recommendation into consideration for future rulemaking.

    Comment: A few commenters stated that the potential expansion of the CAHPS for MIPS survey to all-payer data should be optional, as this could make the survey more costly and lead to it being unaffordable to those who use it in its current form. Other commenters recommended that CMS expand the CAHPS for MIPS patient sample and survey process to include additional payers, in a process similar to that used by the HCAHPS, Hospice CAHPS, and the Outpatient and Ambulatory Surgery CAHPS surveys.

    Response: As we continue to evaluate the inclusion of all-payer data as part of the CAHPS for MIPS survey, we will consider the impact of implementation as well as viable options.

    Comment: One commenter was concerned about the patient satisfaction surveys, particularly in the context of team-based care delivery. The commenter noted that individual scoring of patient satisfaction is prone to misassignment of both good and bad quality. Another commenter expressed concern about the numerous patient surveys because, although patient feedback is important, this feedback must be balanced by acknowledging limitations to these surveys. The commenter mentioned that selection bias and survey fatigue may become a problem. Another commenter questioned whether the CAHPS for MIPS survey was an accurate reflection of the quality of care patients received, or whether it might be biased by superficial factors. The commenter also questioned the survey's statistical validity. The commenter encouraged CMS to explore alternative means of capturing patient experience, which is different from patient satisfaction.

    Response: The CAHPS for MIPS survey is optional for groups. However, because we believe assessing patients' experiences as they interact with the health care system is important, our proposed scoring methodology would give bonus points for reporting CAHPS data (or other patient experience measures). In addition, while patient experience may not always be associated with health outcomes, there is some evidence of a correlation between higher scores on patient experience surveys and better health outcomes. Please refer to http://www.ahrq.gov/cahps/consumer-reporting/research/index.html for more information on AHRQ studies pertaining to patient experience survey and better health outcomes.

    Comment: Another commenter stated that the CAHPS for MIPS survey should modify its wording to reflect that much work is done by a “care team” rather than a “clinician.”

    Response: We thank the commenter for this feedback, which we will take into consideration for future rulemaking.

    Comment: Some commenters believed that the CAHPS for MIPS survey should count for three measures, including one cross-cutting and one patient experience measure, noting that in the past, CMS has counted the CAHPS for PQRS survey as three measures covering one NQS domain. Another commenter encouraged CMS to require that MIPS eligible clinicians reporting CAHPS still submit an outcome measure, if one is available.

    Response: We recognize that under the PQRS program, CAHPS surveys counted as three quality measures rather than one quality measure. To simplify our scoring and communications we are only counting the CAHPS for MIPS survey as one measure. We do note, however, that the CAHPS for MIPS survey would fulfill the requirement to report on a high priority measure, in those instances when MIPS eligible clinicians do not have an outcome measure available.

    Comment: Other commenters believed that the CAHPS for MIPS survey is not designed for and is inappropriate for skilled nursing facility based MIPS eligible clinicians because in many situations the source of the information is not reliable due to the mental status of the patients being surveyed. Therefore, the commenters opposed applying bonuses and/or mandatory requirements to use such surveys in the quality performance category of MIPS until such surveys are available for MIPS eligible clinicians practicing in all settings of care.

    Response: To ensure meaningful measurement of patient experiences, we plan to include the CAHPS for MIPS survey as one way to earn bonus points since we believe this survey is important and appropriate for the Quality Payment Program. However, we would like to explain that the CAHPS for MIPS survey is optional for all MIPS eligible clinician groups, and that there are other ways for skilled nursing facility-based MIPS eligible clinicians to obtain bonus points, such as by reporting additional outcome measures or other high priority measures. We encourage stakeholders who are concerned about a lack of high priority measures to consider development of these measures and submit them for future use within the program. In addition, our strategy for identifying and developing meaningful outcome measures is described in the quality measure development plan, authorized by section 102 of the MACRA (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). The plan describes how we intend to consider evidence-based research, risk adjustment, and other factors to develop better outcome measures.

    Comment: Some commenters urged CMS to work with other stakeholders to improve upon the CAHPS for MIPS survey and/or develop additional tools for measuring patient experience. The commenters also encouraged CMS to consider ways to make the CAHPS for MIPS survey easier for patients to complete, including different options for how it is administered and employing skip logic to reduce its redundancy, and to make it more meaningful to clinicians, such as by disaggregating results by different types of patients. Other commenters recommended that CMS consider having MIPS eligible clinicians report the CAHPS for MIPS survey using an electronic administration of the instrument because such tools would be more efficient for administering the survey and would offer MIPS eligible clinicians real-time feedback for practice improvement. A few commenters recommended that CMS use short-form surveys, electronic administration, and alternative instruments as a means to reduce the burden of surveying while improving utility to patients and MIPS eligible clinicians.

    Response: We are exploring potential options available for the CAHPS for MIPS administration, including electronic modes of administration, for the future.

    Comment: One commenter requested that clinicians have the option to use other patient satisfaction surveys, such as the surgical CAHPS survey.

    Response: We thank the commenter for the suggestion and note that QCDRs would have the option to include the surgical CAHPS survey as one of their non-MIPS measures, if they so choose. We will, however, take this comment into consideration for future rulemaking.

    Comment: Another commenter recommended that CMS evaluate the CAHPS for MIPS survey and remove summary survey measures (SSMs) that make the survey less relevant for MIPS eligible clinicians and groups that do not deliver primary care services, such as the “Access to Specialists” SSM, so that the resulting survey would be widely applicable to a large number of patient-facing MIPS eligible clinicians.

    Response: We thank the commenter for the suggestion. We will continue to explore potential improvements to the CAHPS for MIPS survey in the future.

    Comment: Some commenters opposed implementing the changes to the Clinician and Group survey items that AHRQ has released as CG‐CAHPS 3.0, as a recent memorandum released by AHRQ indicates that the changes resulted in increased scores caused by the removal of low scoring questions and not an improvement in the experience of beneficiaries. A few commenters supported retaining lower performing CAHPS for MIPS questions as supplemental questions.

    Response: We appreciate the interest in retaining survey items that AHRQ has removed from version 3.0 of CG-CAHPS, and will take that interest into consideration as we finalize the survey implementation, scoring, and benchmarking procedures for CAHPS for MIPS. It is important to note that CAHPS for MIPS will include content in addition to CG-CAHPS core items, including but not limited to shared decision-making, access to specialist care, and health promotion and education.

    Comment: Other commenters recommended that the CAHPS for MIPS surveys be conducted closer to the time of a patient-clinician encounter to improve recall.

    Response: We will consider the commenter's recommendations in future rulemaking.

    Comment: One commenter requested that CMS limit additional CAHPS for MIPS questions and that the CAHPS for MIPS survey either remain the same as for PQRS or that the questions remain stable for the first few program years.

    Response: For the transition year of MIPS, the CAHPS for MIPS survey will primarily be the same as the current CAHPS for PQRS survey; however, as noted, the survey contains additional questions to meet CMS's program needs. We note that updates may be made to those questions to meet CMS's information and program needs. Further, in future years we anticipate revising the CAHPS for MIPS survey. We anticipate these revisions will not only improve the survey but also reduce burden.

    Comment: Another commenter requested clarification on how CMS can ensure the data are reliable to drive improvement when CAHPS for MIPS survey response rates are declining.

    Response: Response rates to CAHPS for PQRS (the precursor to CAHPS for MIPS) are comparable to those of other surveys of patient care experiences. Under CAHPS for MIPS, we will adjust reported scores for case mix, which allows the performance of groups to be compared against the same case mix of patients. Studies have not found evidence that response rates bias comparisons of case-mix adjusted patient experience scores.

    Comment: Some commenters recommended raising the threshold for the minimum number of patient CAHPS for MIPS survey responses to 30 to increase reliability.

    Response: We will consider the commenter's recommendations in future rulemaking.

    Comment: One commenter encouraged CMS to consider expanding the use of CAHPS for all clinicians as a tool in the quality performance category of MIPS, with appropriate exclusions for rural and non-patient facing MIPS eligible clinicians. Additionally, the commenter encouraged CMS to expand the target population for such surveys to include the families of patients who have died, and to adapt questions from the hospice instrument so they can be used in CAHPS surveys of other settings to assess palliative care eligible clinicians and eligible clinicians who treat patients facing the end of life in settings other than hospice.

    Response: We appreciate the recommendation and will continue to look at ways to expand the CAHPS survey.

    After consideration of the comments regarding our proposed performance criteria for quality measures for groups electing to report the CAHPS for MIPS survey, we are finalizing the policies as proposed. Specifically, we are finalizing at § 414.1335(a)(3) the following criteria for the submission of data on the CAHPS for MIPS survey by registered groups via a CMS-approved survey vendor: For the applicable 12-month performance period, a group that wishes to voluntarily elect to participate in the CAHPS for MIPS survey measures must use a survey vendor that is approved by CMS for a particular performance period to transmit survey measures data to CMS. The CAHPS for MIPS survey counts for one measure towards the MIPS quality performance category and, as a patient experience measure, also fulfills the requirement to report at least one high priority measure in the absence of an applicable outcome measure. In addition, groups that elect this data submission mechanism must select an additional group data submission mechanism (that is, qualified registries, QCDRs, EHRs, etc.) in order to meet the data submission criteria for the MIPS quality performance category. The CAHPS for MIPS survey will count as one patient experience measure, and the group will be required to submit at least five other measures through one other data submission mechanism. A group may report any five measures within MIPS plus the CAHPS for MIPS survey to achieve the six-measure threshold. We will retain the survey administration period for the CAHPS for MIPS survey of November through February. Groups that voluntarily elect to participate in the CAHPS for MIPS survey will bear the cost of contracting with a CMS-approved survey vendor to administer the CAHPS for MIPS survey on the group's behalf. Groups electing to report the CAHPS for MIPS survey will be required to register for the reporting of data. Only Medicare beneficiaries can be selected to participate in the CAHPS for MIPS survey.
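
    As an illustrative aid only (this sketch is not part of the regulatory text; the function and variable names are hypothetical), the finalized counting rule described above for groups electing the CAHPS for MIPS survey can be restated as follows: the survey counts as one patient experience measure, and at least five additional measures must be submitted through one other data submission mechanism to reach the six-measure level.

        # Hypothetical Python sketch of the finalized measure-count rule for groups
        # that elect the CAHPS for MIPS survey; names are illustrative only.

        def group_meets_quality_measure_count(num_other_measures: int,
                                              elected_cahps_for_mips: bool,
                                              has_other_submission_mechanism: bool) -> bool:
            """Check whether a registered group reaches the six-measure level."""
            if elected_cahps_for_mips:
                # CAHPS for MIPS counts as one patient experience measure; the
                # remaining measures must be submitted through one other data
                # submission mechanism (for example, a QCDR, qualified registry, or EHR).
                return has_other_submission_mechanism and num_other_measures >= 5
            # Groups not electing CAHPS for MIPS report six measures directly.
            return num_other_measures >= 6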

    (b) Data Completeness Criteria

    We want to ensure that data submitted on quality measures are complete enough to accurately assess each MIPS eligible clinician's quality performance. Section 1848(q)(5)(H) of the Act provides that analysis of the quality performance category may include quality measure data from other payers, specifically, data submitted by MIPS eligible clinicians with respect to items and services furnished to individuals who are not individuals entitled to benefits under Part A or enrolled under Part B of Medicare.

    To ensure completeness for the broadest group of patients, we proposed at § 414.1340 the criteria below. MIPS eligible clinicians and groups who do not meet the proposed reporting criteria noted below would fail the quality component of MIPS.

    • Individual MIPS eligible clinicians or groups submitting data on quality measures using QCDRs, qualified registries, or via EHR need to report on at least 90 percent of the MIPS eligible clinician or group's patients that meet the measure's denominator criteria, regardless of payer for the performance period. In other words, for these submission mechanisms, we would expect to receive quality data for both Medicare and non-Medicare patients.

    • Individual MIPS eligible clinicians submitting quality measures data using Medicare Part B claims would report on at least 80 percent of the Medicare Part B patients seen during the performance period to which the measure applies.

    • Groups submitting quality measures data using the CMS Web Interface or a CMS-approved survey vendor to report the CAHPS for MIPS survey would need to meet the data submission requirements on the sample of the Medicare Part B patients CMS provides.

    We proposed to include all-payer data for the QCDR, qualified registry, and EHR submission mechanisms because we believe this approach provides a more complete picture of each MIPS eligible clinician's scope of practice and provides more access to data about specialties and subspecialties not currently captured in PQRS. In addition, we proposed that the QCDR, qualified registry, or EHR submission must contain a minimum of one quality measure for at least one Medicare patient.

    We desire all-payer data for all reporting mechanisms, yet certain reporting mechanisms are limited to Medicare Part B data. Specifically, the claims reporting mechanism relies on individual MIPS eligible clinicians attaching quality information on Medicare Part B claims; therefore only Medicare Part B patients can be reported by this mechanism. The CMS Web Interface and the CAHPS for MIPS survey currently rely on sampling protocols based on Medicare Part B billing; therefore, only Medicare Part B beneficiaries are sampled through that methodology. We welcomed comments on ways to modify the methodology to assign and sample patients for these mechanisms using data from other payers.

    The data completeness criteria we proposed are an increase in the percentage of patients to be reported by each of the mechanisms when compared to PQRS. We believe the proposed thresholds are appropriate to ensure a more accurate assessment of a MIPS eligible clinician's performance on the quality measures and to avoid any selection bias that may exist under the current PQRS requirements. In addition, we would like to align all the reporting mechanisms as closely as possible with achievable data completeness criteria. We intend to continually assess the proposed data completeness criteria and will consider increasing these thresholds for future years of the program. We requested comments on this proposal.

    We were also interested in data that would indicate these data completeness criteria are inappropriate. For example, we could envision that reporting a cross-cutting measure would not always be appropriate for every telehealth service or for certain acute situations. We would not want a MIPS eligible clinician to fail a measure for not reporting it in appropriate circumstances; therefore, we solicited feedback on data and circumstances where it would be appropriate to lower the data completeness criteria.

    The following is a summary of the comments we received regarding our proposed data completeness criteria.

    Comment: The majority of commenters recommended that CMS reduce the quality reporting thresholds to 50 percent, and not proceed with the proposals to increase the threshold for successfully reporting a measure to 80 percent via claims, and 90 percent via EHR, clinical registry, QCDR, or CMS Web Interface. The commenters cited numerous concerns and justifications for a modified threshold including: The 50 percent reporting rate allows those MIPS eligible clinicians just starting to report a quicker pathway to success and to gain familiarity with the program before such a high threshold is established, an advanced announcement of an increased threshold through future rulemaking provides those MIPS eligible clinicians already reporting sufficient time to implement changes to their practice to meet the higher threshold, and the proposed thresholds would present a significant administrative burden and make higher quality scores difficult to achieve. These commenters believed a majority of MIPS eligible clinicians would struggle to meet the proposed threshold of 90 percent and that the threshold is unrealistic. Another commenter opposed CMS's proposal to increase the reporting thresholds because this leaves MIPS eligible clinicians and third party data submission vendors with very little room for expected error.

    Response: We thank the commenters for their detailed feedback. Based on the overwhelming feedback received, we do not intend to finalize the data completeness thresholds as proposed. We did not intend for the data completeness thresholds to impose on MIPS eligible clinicians the increased burden that the commenters described in detail. We agree with the commenters that some of the unintended consequences of having a higher data completeness threshold may jeopardize the MIPS eligible clinician's ability to participate and perform well under the MIPS. We want to ensure that an appropriate yet achievable level of data completeness is applied to all MIPS eligible clinicians. Based on stakeholder feedback, for the transition year of MIPS, we will finalize a 50 percent data completeness threshold for claims, registry, QCDR, and EHR submission mechanisms. This threshold is consistent with the current PQRS program. Additionally, for the second year of MIPS, for performance periods occurring in 2018, we are finalizing a 60 percent data completeness threshold for claims, registry, QCDR, and EHR submission mechanisms. We believe it is important to incorporate higher thresholds in future years to ensure a more accurate assessment of a MIPS eligible clinician's performance on the quality measures and to avoid any selection bias. We also believe that we are providing ample notice to MIPS eligible clinicians so they can take the necessary steps to prepare for this higher threshold for MIPS payment year 2020. Lastly, we anticipate that, in the 2021 MIPS payment year and beyond, for performance periods occurring in 2019 and forward, as MIPS eligible clinicians gain experience with the MIPS, we would further increase these thresholds over time.

    Comment: Another commenter cited specific concerns for QCDRs. The commenter believed the 50 percent threshold for QCDR reporting and data completeness should be maintained because of the proposed changes to QCDR functionality, such as reporting additional performance categories and requiring MIPS eligible clinician feedback at least six times a year. Another commenter stated that the rule needs to maximize the role of QCDRs to ensure reporting and data submission are flexible, meaningful, and useful. The commenter added that increasing the proposed QCDR requirement from 50 to 90 percent would require reassuring MIPS eligible clinicians of the value of QCDR participation and reporting.

    Response: We appreciate the commenters' concerns, and as mentioned previously, we are modifying the data completeness threshold for individual MIPS eligible clinicians and groups submitting data on quality measures using QCDRs. For the transition year, the MIPS eligible clinician will need to report on at least 50 percent of the MIPS eligible clinician's or group's patients that meet the measure's denominator criteria, regardless of payer, for the performance period. We do note that for the second year of MIPS, for performance periods occurring in 2018, we are increasing the data completeness threshold to 60 percent. We also anticipate that, in the third and future years of MIPS, for performance periods occurring in 2019 and forward, as MIPS eligible clinicians gain experience with the MIPS, we would further increase these thresholds over time. Lastly, we also want to refer the commenter to section II.E.9.a. of this final rule with comment period where we discuss the requirements to become a QCDR under the MIPS.

    Comment: Another commenter stated that setting a data completeness threshold of 80 or 90 percent is not achievable for practices, especially given their struggles to meet the requirement for reporting measures for 50 percent of Medicare patients under PQRS. The commenter expressed disappointment that average reporting threshold rates from the 2014 PQRS Experience Report were not disclosed. The commenter stated that the 80 or 90 percent requirement creates additional burden as well, given the inclusion of the all-payer data requirement, and believed that vendors will not be able to meet these more stringent requirements, especially for the first performance period. The commenter urged CMS to reduce the data completeness threshold to 50 percent of applicable Medicare Part B beneficiary encounters via claims and 50 percent for reporting via registry, EHR, and QCDR.

    Response: As noted above, for the transition year of MIPS, we will finalize a 50 percent data completeness threshold for claims, registry, QCDR, and EHR submission mechanisms. This threshold is consistent with the current PQRS program. While we can appreciate the concern raised by the commenter related to vendors' readiness, we do not anticipate that vendors will have difficulty in meeting the original proposed data completeness threshold or the modified data completeness threshold we are finalizing here. Lastly, we will include the average reporting threshold rates for future years of the PQRS Experience Report, as technically feasible.

    Comment: Another commenter urged CMS to apply consistent data reporting requirements regardless of the method of data submission, as the commenter disagreed with different measure submission requirements for clinicians using a QCDR, qualified registry, or EHR. The commenter stated this consistency would allow for fair comparisons among clinicians.

    Response: We agree with the commenter and would like to explain that we did not propose, nor are we finalizing, different data completeness thresholds across the QCDR, qualified registry, or EHR submission mechanisms.

    Comment: Another commenter stated it is necessary to maintain a 50 percent threshold until a certain level of interoperability for data exchange across registries, EHRs and other data sources has been achieved. This commenter believed that claims reporting is the most burdensome for MIPS eligible clinicians as quality data codes (QDCs) will need to be attached for each applicable claim.

    Response: As noted above, we are finalizing a 50 percent data completeness threshold for the transition year of MIPS. However, we do not agree that we can remain at a 50 percent threshold until interoperability is achieved. Rather, we believe that by providing ample notice to MIPS eligible clinicians and third party intermediaries, we can increase the thresholds over time. It is important to note that for the second year of MIPS, for performance periods occurring in 2018, we are increasing the data completeness threshold to 60 percent. We also anticipate that, for performance periods occurring in 2019 and forward, as MIPS eligible clinicians gain experience with the MIPS, we would further increase these thresholds over time. Lastly, we recognize that the differing submission mechanisms place varying levels of burden on MIPS eligible clinicians, which is why we believe that having multiple submission mechanisms as options is an important component as clinicians gain experience with the MIPS.

    Comment: Other commenters recommended a 50 percent threshold to ensure quality performance category scoring does not favor large practices. The commenters were concerned that CMS' proposed scoring favors large practices that submit data through the CMS Web Interface. The commenters noted that MIPS eligible clinicians using CMS Web Interface to submit data automatically achieve all of the requirements (plus bonus points) to potentially earn maximum points, and only need to report on a sampling of patients rather than the high percentage of patients needed for other data submission methods, and that this provides an advantage for these MIPS eligible clinicians over MIPS eligible clinicians in smaller practices.

    Response: While we do not agree that the MIPS quality scoring methodologies favor large practices that submit data using the CMS Web Interface, we can agree that small practices may require additional flexibilities under the MIPS. Therefore, as noted previously, we are finalizing flexibilities for smaller practices throughout this final rule with comment period, such as reduced improvement activities requirements.

    Comment: A few commenters indicated that the proposed thresholds would create an environment with little room for error, do not account for potential vendor, administrative, or other problems, and will jeopardize MIPS eligible clinicians' success. These commenters noted that MIPS eligible clinicians may be deterred from reporting high priority and outcome measures and from reporting via electronic means due to the administrative burden posed by the high thresholds. The commenters stated that a 50 percent threshold still requires MIPS eligible clinicians to report on a majority of patients, and that this threshold does not encourage “gaming”: Once MIPS eligible clinician workflows are in place, it is onerous to deviate from them simply to pick and choose which patients to include in which measure. The commenters stated that the higher threshold is especially burdensome for small practices without the resources to hire a full-time or part-time employee to collect and document such information.

    Response: We did not intend to increase the burden on MIPS eligible clinicians or deter MIPS eligible clinicians from submitting data on high priority measures. While we can agree with the commenters that modifying existing clinical workflows can be burdensome, we believe that once these workflows are established, performing the quality actions for the denominator eligible patients becomes part of the clinical workflow and is not unduly burdensome. For the transition year of MIPS, we will finalize a 50 percent data completeness threshold for claims, registry, QCDR, and EHR submission mechanisms. This threshold is consistent with the current PQRS program. Additionally, for the second year of MIPS, for performance periods occurring in 2018, we are finalizing a 60 percent data completeness threshold for claims, registry, QCDR, and EHR submission mechanisms. We believe it is important to incorporate higher thresholds in future years to ensure a more accurate assessment of a MIPS eligible clinician's performance on the quality measures and to avoid any selection bias. We also believe that we are providing ample notice to MIPS eligible clinicians so they can take the necessary steps to prepare for this higher threshold in the second year of the MIPS. We anticipate that, for performance periods occurring in 2019 and forward, as MIPS eligible clinicians gain experience with the MIPS we would further increase these thresholds over time.

    Comment: Another commenter stated that the reporting requirement of at least 90 percent of all patients (not just Medicare) is not possible, that it is equivalent to requiring MIPS eligible clinicians to report on more than six individual quality measures, and that it is a substantial change from the 20-patient requirement for measures groups under the current PQRS rule. The commenter stated that their group performs thousands of general and vascular surgeries each year and that devoting the time and cost to review every hospital chart and operative note and to call every patient at least once 30 days post operation simply is not possible. Another commenter stated that the data completeness criteria are onerous and that requiring MIPS eligible clinicians to report on such a high percentage of their patients limits the types of measures physicians will be able to report (for example, MIPS eligible clinicians will prefer non-resource-intensive outcome measures).

    Response: We appreciate the commenters' concerns and did not intend for the data completeness thresholds to limit the types of patients MIPS eligible clinicians would submit data on. We are finalizing a 50 percent threshold for the transition year, and a 60 percent threshold for the second year of the MIPS, for performance periods occurring in 2018. We do believe, however, it is important to incorporate higher thresholds in future years to ensure a more accurate assessment of a MIPS eligible clinician's performance on the quality measures and to avoid any selection bias. We also believe that we are providing ample notice to MIPS eligible clinicians so they can take the necessary steps to prepare for this higher threshold in the second year of the MIPS. We anticipate that, for performance periods occurring in 2019 and forward, as MIPS eligible clinicians gain experience with the MIPS, we would further increase these thresholds over time. We will, however, monitor these policies to ensure that the data completeness thresholds do not become so burdensome that they deter MIPS eligible clinicians from submitting data on their appropriate patient population.

    Comment: One commenter, a small mental health clinic, cited numerous reasons for concern including clients not tolerating significant time to ask assessment questions, difficulty in finding applicable measures, medical staff's limited time with clients, difficulty in getting measures from clients seen in their homes, clinical inappropriateness of spending entire first or second appointments gathering PQRS measures, issues with PHQ9 score improvement, and other reporting requirements including California's Medi-Cal and Mental Health Service Act requirements. The commenter suggested the continued use of the 50 percent reporting requirement under PQRS.

    Response: We can appreciate the concerns raised by the commenter. We are continuing to use a 50 percent data completeness threshold similar to what was used under PQRS. We do note, however, that under MIPS the data completeness threshold applies to both Medicare and non-Medicare patients.

    Comment: One commenter also requested that CMS release data demonstrating that raising the reporting rate is feasible for all MIPS eligible clinicians. This commenter noted that the 2017 and 2018 PQRS and VBPM policies required 50 percent completeness, which was a decrease from previous years in acknowledgment of feedback from clinicians. The commenter stated that issuing a drastic increase as clinicians shift to a new system will be problematic, and suggested remaining at 50 percent for the first few years and considering phased-in increases if 50 percent is found to be feasible.

    Response: We thank the commenters for their detailed feedback. Based on the overwhelming feedback received, we do not intend to finalize the data completeness thresholds as proposed. We did not intend for the data completeness thresholds to impose on clinicians the increased burden that the commenters described. We want to ensure that an appropriate yet achievable level of data completeness is applied to all MIPS eligible clinicians. Based on stakeholder feedback, for the transition year of MIPS, we will finalize a 50 percent data completeness threshold for claims, registry, QCDR, and EHR submission mechanisms. This threshold is consistent with the current PQRS program. However, we continue to target a 90 percent reporting requirement; as MIPS eligible clinicians gain experience with the MIPS, we would further increase these thresholds over time.

    Comment: Another commenter agreed with the proposal to include at least 90 percent of patients regardless of payer to CMS in order to provide the most complete picture of the MIPS eligible clinician's quality, especially for specialists.

    Response: We thank the commenter for their support. However, based on stakeholder feedback, for the transition year of MIPS, we will finalize a 50 percent data completeness threshold for claims, registry, QCDR, and EHR submission mechanisms.

    Comment: A few commenters believed that a 100 percent review is not feasible because their practice performs 10,000 procedures annually. The commenters believed that review of 25-30 procedures is more practical.

    Response: Based on the overwhelming feedback received, we do not intend to finalize the data completeness thresholds as proposed. We did not intend for the data completeness thresholds to impose on MIPS eligible clinicians the increased burden that the commenters described. We want to ensure that an appropriate yet achievable level of data completeness is applied to all MIPS eligible clinicians. After consideration of stakeholder feedback, for the transition year of MIPS, we are modifying our proposal and will finalize a 50 percent data completeness threshold for claims, registry, QCDR, and EHR submission mechanisms.

    Comment: Other commenters requested that CMS consider using other reporting options that do not involve collecting data from a certain percentage of patients, such as requiring clinicians to report on a certain number of consecutive patients. The commenters believed the consecutive case approach could minimize the reporting burden while allowing for the collection of information to assess performance.

    Response: In the early years of PQRS we required EPs to report on a certain number of consecutive patients if the clinician was reporting a measures group. Our experience was that many EPs failed to meet the reporting requirements as they missed one or more patients in the consecutive sequence.

    Comment: A few commenters supported the proposal to give scores of zero if MIPS eligible clinicians can, but fail to, report on the minimum number of measures.

    Response: We thank the commenter for their support of our proposal.

    Comment: Another commenter supported CMS's proposal in the quality performance category to recognize a measure as being submitted and not assign a clinician zero points for a non-reported measure when a measure's reliability or validity may be compromised due to unforeseen circumstances, such as data collection problems. The commenter recommended that CMS notify affected MIPS eligible clinicians and groups by mail if in the future a data collection or vendor submission issue arises.

    Response: We intend to make every effort to notify affected MIPS eligible clinicians if data collection issues arise.

    Comment: Many commenters disagreed with the proposal to include all-payer data. Several commenters believed that requiring MIPS eligible clinicians to report all-payer data goes beyond the scope of CMS's programmatic authority and need, violates clinicians' ethical duties to patient confidentiality, and violates patients' privacy rights. Other commenters stated the federal government should not be able to access the medical information of patients who are not CMS beneficiaries. Another commenter believed that MIPS eligible clinicians may be discouraged from reporting through registries, QCDRs, and EHRs due to the requirement that they report on all of their patients regardless of payer. One commenter urged CMS to remove the requirement to report all patients when reporting via registry.

    Another commenter noted that MIPS eligible clinicians reporting outcomes should document all factors affecting outcomes, especially adversely affecting outcomes. The commenter stated that socioeconomic status, family support systems, cognitive dysfunction and mental health issues affect compliance and outcomes. Therefore, coding for some of these factors can be misleading, even if there are available options for diagnostic coding. The commenter noted that open access to all physician notes would jeopardize proper documentation of these issues. The commenter added that diagnostic coding must not inhibit documentation of issues and concerns for physicians, and that there must be proper acuity adjustment in measuring physician or team performance. The commenter suggested that all charts have certain areas of restricted protected access to allow documentation of such issues, and that this type of charting must be available to physicians who are not categorized as mental health professionals.

    Response: We have received numerous previous comments noting that it can be difficult for clinicians to separate Medicare beneficiaries from other patients, and our intention with seeking all-payer data is to make reporting easier for MIPS eligible clinicians. We note that section 1848(q)(5)(H) of the Act authorizes the Secretary to include, for purposes of the quality performance category, data submitted by MIPS eligible clinicians with respect to items and services furnished to individuals who are not Medicare beneficiaries. Furthermore, we believe that all-payer data makes it easier for MIPS eligible clinicians to obtain a complete view of their quality performance without focusing on one subset or another of their patient populations. We do not believe that collection of this data constitutes a violation of patient privacy. We do not believe that the collection of all-payer data will decrease MIPS eligible clinicians' utilization of registries, QCDRs, and EHRs. It is important to note that MIPS eligible clinicians may elect to report information at the aggregate level which does not have any patient-identifiable information. We agree that documentation related to outcomes is challenging and we continue to work to identify the impact of socio-demographic status on patient outcomes.

    Comment: Other commenters supported the proposal to use all-payer data for quality measures and also for patient experience surveys, recognizing that these data will create a more comprehensive picture of a MIPS eligible clinician's performance. Another commenter was supportive of the proposal to require MIPS eligible clinicians reporting quality data via qualified registries or EHR to report on both Medicare and non-Medicare patients. The commenter favored the proposal because it would be administratively easier and because quality of care affects all patients, not just those covered by Medicare.

    Response: We thank the commenters for the support.

    Comment: A few commenters recommended that CMS phase in the requirement to include all-payer data for the QCDR, qualified registry, and EHR submission mechanisms and suggested that for year 1 of the program, requiring only Medicare data would be a more appropriate first step.

    Response: Third party intermediaries were required to utilize all-payer data in PQRS. Therefore, we do not believe it should be a burden, as they have already been meeting this requirement.

    Comment: Other commenters asked whether reporting all-payer data is optional in year 1 of the program, whether there is a minimum percentage of Medicare Part B patients required, where the benchmarks will come from, and how it will be ensured that the benchmarks are comparable across the industry. Some commenters recommended that reporting on other payers be optional and that MIPS eligible clinicians not be penalized for activities related to payers other than Medicare. The commenters stated that the law does not require reporting data on other payers' patients. The commenters believed that reporting on all payers may skew data in favor of MIPS eligible clinicians with large private payer populations over physicians with large Medicare patient populations. A few commenters expressed concern that some practices will be required to submit data that represent all payers because Medicare populations are very different from those covered by other payers, which may create an inequitable assessment of quality performance.

    Response: We would like to explain that reporting all-payer data is not optional for the transition year of MIPS. We desire all-payer data for all reporting mechanisms, yet certain reporting mechanisms are limited to Medicare Part B data. Specifically, the claims reporting mechanism relies on individual MIPS eligible clinicians attaching quality information on Medicare Part B claims; therefore, only Medicare Part B patients can be reported by this mechanism. The CMS Web Interface and the CAHPS for MIPS survey currently rely on sampling protocols based on Medicare Part B billing; therefore, only Medicare Part B beneficiaries are sampled through that methodology. With regard to the commenters' concern that using all-payer data would create an inequitable assessment of the MIPS eligible clinicians' performance on quality, we respectfully disagree. Rather, we believe that utilizing all-payer data will provide a more complete picture of the MIPS eligible clinicians' performance.

    Comment: A few commenters suggested that rather than collecting data from all-payers for the quality performance category under MIPS, CMS should consider the federated data model, which would allow for different datasets to feed into a single virtual dataset that would organize the data. The commenters stated this would allow analysis and comparisons across datasets without structuring all of the source databases.

    Response: We thank the commenters for this feedback and will take it into consideration in future rulemaking.

    Comment: Other commenters stated that the practice of medicine will be compromised by linking payment to collection of private patient data and making it available to CMS through electronic medical records.

    Response: We believe that MIPS eligible clinicians will continue to uphold the highest ethical standards of their professions and that medical practice will not be compromised by the MIPS program. Clinicians may elect to report information at the aggregate level which does not have any patient-identifiable information.

    Comment: Other commenters were very concerned that increasing the reporting threshold for quality data from 50 percent or more of Medicare patients to 90 percent or more of all patients regardless of payer is a major change that should be approached more gradually to give clinicians a chance to adapt. The commenters suggested a more gradual change, at least in the first few years, such as keeping the patient base and threshold as is (50 percent or more of the Medicare population) or even a smaller increase in threshold (maybe 60 or 75 percent of patients) but only for Medicare beneficiaries rather than all payers. Another commenter requested reporting go from 50 to 75 percent and be applied to Medicare patients only (as opposed to private insurance patients).

    Response: We are modifying our proposal and finalizing a 50 percent threshold for individual MIPS eligible clinicians or groups submitting data on quality measures using QCDRs, qualified registries, via EHR, or Medicare Part B claims. In addition, we are finalizing our approach of including all-payer data for the QCDR, qualified registry, and EHR submission mechanisms because we believe this approach provides a more complete picture of each MIPS eligible clinician's scope of practice and provides more access to data about specialties and subspecialties not currently captured in PQRS.

    Comment: Some commenters questioned CMS's ability to validate data completeness criteria for all-payer data under the quality performance category. They stated that because of this, all-payer completeness criteria function more like a request than a requirement. The commenters also requested information on what the auditing, notification, and appeal (targeted review) process will be specific to all-payer data completeness.

    Response: We recognize that our data completeness criteria are different since we are now requiring all-payer data. However, we do not currently have the optimal capability to validate data completeness for all-payer data. Please note validation of all-payer data will therefore continue to be reviewed based on the data submission mechanism used. For example, if the quality measure data is submitted directly from an EHR for an electronic Clinician Quality Measure (eCQM), we expect completeness from EHR reports will cover all of the patients that meet the inclusion criteria for the measure, to include all-payer data found within the EHR data set for the population attributed to that measure. If the quality data is submitted via the CMS Web Interface, we will provide the sample of patients that must be reported on to CMS, though more may be included given the all-payer allowance under MIPS. For the transition year of MIPS we expect that MIPS eligible clinicians, and especially third party intermediaries, will comply fully with the requirements we are adopting.

    Comment: Another commenter was supportive of the proposal to require MIPS eligible clinicians reporting quality data via qualified registries or EHR to report on both Medicare and non-Medicare patients. The commenter favored the proposal because it would be administratively easier and because quality of care affects all patients, not just those covered by Medicare.

    Response: We thank the commenter for their support.

    Comment: Some commenters agreed that CMS should include all-payer data in order to push quality improvement throughout the entire health care system. The commenters were concerned, however, that including all-payer data, combined with the amount of flexibility some clinicians have in choosing which quality measures to report, may end up obscuring the quality of care actually received by Medicare beneficiaries. The commenters recommended CMS implement additional requirements or safeguards for the inclusion of all-payer data. The commenters also supported CMS raising the data completeness thresholds above what was required under PQRS and increasing these thresholds even higher in future years of MIPS. Some commenters recommended that CMS continue to encourage the creation of databases across the payer community but treat this as a long-term goal rather than yet another operational item with uncertain implications. Although commenters supported all-payer databases conceptually, they believed that operationally the United States is far from this reality.

    Response: We agree that there is potential for further quality improvement by utilizing all-payer data. We also believe the MIPS program's flexibility in measure selection is an asset. We will monitor the MIPS program's impacts on care quality carefully, particularly for Medicare beneficiaries.

    Comment: Some commenters suggested changing the requirement to report measures groups on 90 percent of patients to 25 patients per surgeon, suggesting this would achieve statistical validity and represent an achievable level of data collection. The surgery measures groups as defined in the proposal would then provide the commenter's practice with highly valuable information that could benefit all patients as the MIPS eligible clinicians review ways to operate more safely, more efficiently, and at a lower cost. Another commenter recommended that CMS update patient sampling requirements over time.

    Response: We are modifying our proposal and finalizing a 50 percent threshold for the transition year of MIPS for individual MIPS eligible clinicians or groups submitting data on quality measures using QCDRs, qualified registries, via EHR, or Medicare Part B claims. In addition, we are finalizing our approach of including all-payer data for the QCDR, qualified registry, and EHR submission mechanisms because we believe this approach provides a more complete picture of each MIPS eligible clinician's scope of practice and provides more access to data about specialties and subspecialties not currently captured in PQRS. We have removed the measures groups referenced in the comment and replaced them with specialty-specific measure sets.

    Comment: A few commenters sought clarification on scoring when a MIPS eligible clinician fails to submit data for the required 80 or 90 percent data completeness threshold; that is, where a MIPS eligible clinician reports on less than the 80 or 90 percent of patients but has a greater than zero performance rate.

    Response: We appreciate the commenter seeking clarification. As discussed, we are reducing the threshold for the data completeness requirement as outlined below for the transition year of MIPS. In addition, we proposed that measures falling below the data completeness threshold be assessed a zero; however, in alignment with the goal to provide as many flexibilities to MIPS eligible clinicians as possible, for the transition year, MIPS eligible clinicians whose measures fall below the data completeness threshold would receive 3 points for submitting the measure. We will revisit data completeness scoring policies through future rulemaking. It is important to note that we are also finalizing an increase in the data completeness threshold to 60 percent for MIPS, for performance periods occurring in 2018, for data submitted on quality measures using QCDRs, qualified registries, via EHR, or Medicare Part B claims. In addition, these thresholds for data submitted on quality measures using QCDRs, qualified registries, via EHR, or Medicare Part B claims will increase for performance periods occurring in 2019 and forward.

    As a result of the comments regarding our proposal on data completeness criteria we are not finalizing our policy as proposed. Rather we are finalizing at § 414.1340 the data completeness criteria below for MIPS during the 2017 performance period.

    • Individual MIPS eligible clinicians or groups submitting data on quality measures using QCDRs, qualified registries, or via EHR must report on at least 50 percent of the MIPS eligible clinician or group's patients that meet the measure's denominator criteria, regardless of payer for the performance period. In other words, for these submission mechanisms, we expect to receive quality data for both Medicare and non-Medicare patients. For the transition year, MIPS eligible clinicians whose measures fall below the data completeness threshold of 50 percent would receive 3 points for submitting the measure.

    • Individual MIPS eligible clinicians submitting quality measures data using Medicare Part B claims would report on at least 50 percent of the Medicare Part B patients seen during the performance period to which the measure applies. For the transition year, MIPS eligible clinicians whose measures fall below the data completeness threshold of 50 percent would receive 3 points for submitting the measure.

    • Groups submitting quality measures data using the CMS Web Interface or a CMS-approved survey vendor to report the CAHPS for MIPS survey must meet the data submission requirements on the sample of the Medicare Part B patients CMS provides.

    We are also finalizing an increase in the data completeness threshold to 60 percent for MIPS for performance periods occurring in 2018 for data submitted on quality measures using QCDRs, qualified registries, via EHR, or Medicare Part B claims. We note that these thresholds for data submitted on quality measures using QCDRs, qualified registries, via EHR, or Medicare Part B claims will increase for performance periods occurring in 2019 and onward. As noted in our proposal, we believe higher thresholds are appropriate to ensure a more accurate assessment of a MIPS eligible clinician's performance on the quality measures and to avoid any selection bias. In addition, we would like to align all the reporting mechanisms as closely as possible with achievable data completeness criteria.

    We are finalizing our approach of including all-payer data for the QCDR, qualified registry, and EHR submission mechanisms because we believe this approach provides a more complete picture of each MIPS eligible clinician's scope of practice and provides more access to data about specialties and subspecialties not currently captured in PQRS. In addition, a QCDR, qualified registry, or EHR submission must contain a minimum of one quality measure for at least one Medicare patient.

    We are not finalizing our proposal that MIPS eligible clinicians and groups who do not meet the submission criteria noted above would fail the quality component of MIPS. Instead, for the transition year of MIPS, the specific measures that fall below the data completeness threshold would not be scored on performance; the MIPS eligible clinicians would receive 3 points for those measures.
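
    For illustration only (a minimal sketch with hypothetical names, not part of the regulatory text), the finalized data completeness arithmetic and the transition-year scoring floor described above can be expressed as follows: the threshold is 50 percent for performance periods in 2017 and 60 percent for performance periods in 2018, and a submitted measure that falls below the applicable threshold in the transition year receives 3 points rather than causing the clinician to fail the quality performance category.

        # Hypothetical Python sketch of the finalized data completeness thresholds
        # (§ 414.1340) and the transition-year 3-point floor; names are illustrative
        # only. Thresholds for 2019 and later will be set through future rulemaking
        # and are not modeled here.

        DATA_COMPLETENESS_THRESHOLDS = {2017: 0.50, 2018: 0.60}

        def completeness(reported: int, denominator_eligible: int) -> float:
            """Share of denominator-eligible patients reported on (all payers for
            QCDR, qualified registry, and EHR submissions; Medicare Part B patients
            for claims submissions)."""
            return reported / denominator_eligible if denominator_eligible else 0.0

        def meets_threshold(share: float, performance_year: int) -> bool:
            """True if the reported share meets the year's data completeness threshold."""
            return share >= DATA_COMPLETENESS_THRESHOLDS[performance_year]

        # Example for the 2017 transition year: a measure reported for 40 of 100
        # denominator-eligible patients (40 percent) falls below the 50 percent
        # threshold, is not scored on performance, and receives 3 points.
        TRANSITION_YEAR_FLOOR_POINTS = 3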

    (c) Summary of Data Submission Criteria

    Table 5 of this rule reflects our final quality data submission criteria for MIPS:

    Table 5—Summary of Final Quality Data Submission Criteria for MIPS Payment Year 2019 via Part B Claims, QCDR, Qualified Registry, EHR, CMS Web Interface, and CAHPS for MIPS Survey

    Part B Claims (individual MIPS eligible clinicians)
    • Performance period: A minimum of one continuous 90-day period during CY 2017.
    • Submission criteria: Report at least six measures including one outcome measure, or if an outcome measure is not available, report another high priority measure; if fewer than six measures apply, then report on each measure that is applicable. MIPS eligible clinicians and groups will have to select their measures from either the list of all MIPS measures in Table A or a set of specialty-specific measures in Table E.
    • Data completeness: 50 percent of the MIPS eligible clinician's Medicare Part B patients for the performance period.

    QCDR, Qualified Registry, or EHR (individual MIPS eligible clinicians or groups)
    • Performance period: A minimum of one continuous 90-day period during CY 2017.
    • Submission criteria: Report at least six measures including one outcome measure, or if an outcome measure is not available, report another high priority measure; if fewer than six measures apply, then report on each measure that is applicable. MIPS eligible clinicians and groups will have to select their measures from either the list of all MIPS measures in Table A or a set of specialty-specific measures in Table E.
    • Data completeness: 50 percent of the MIPS eligible clinician's or group's patients across all payers for the performance period.

    CMS Web Interface (groups)
    • Performance period: January 1-December 31.
    • Submission criteria: Report on all measures included in the CMS Web Interface; AND populate data fields for the first 248 consecutively ranked and assigned Medicare beneficiaries in the order in which they appear in the group's sample for each module/measure. If the pool of eligible assigned beneficiaries is less than 248, then the group would report on 100 percent of assigned beneficiaries.
    • Data completeness: Sampling requirements for the group's Medicare Part B patients.

    CAHPS for MIPS Survey via CMS-approved survey vendor (groups)
    • Performance period: January 1-December 31.
    • Submission criteria: The CAHPS for MIPS survey would have to be paired with another reporting mechanism to ensure the minimum number of measures are reported. The CAHPS for MIPS survey would fulfill the requirement for one patient experience measure towards the MIPS quality data submission criteria and will only count for one measure.
    • Data completeness: Sampling requirements for the group's Medicare Part B patients.
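
    The CMS Web Interface row in Table 5 can be read as the following sampling rule, shown here only as an illustrative sketch with a hypothetical function name: a group reports on the first 248 consecutively ranked and assigned Medicare beneficiaries in its sample for each module or measure, or on 100 percent of assigned beneficiaries if fewer than 248 are assigned.

        # Illustrative-only restatement of the CMS Web Interface sampling criterion
        # summarized in Table 5; the function name is hypothetical.

        def web_interface_reporting_count(assigned_beneficiaries: int, sample_size: int = 248) -> int:
            """Number of consecutively ranked, assigned Medicare beneficiaries for which
            a group must populate data fields, per module/measure."""
            # If the pool of eligible assigned beneficiaries is less than the sample
            # size, the group reports on 100 percent of assigned beneficiaries.
            return min(assigned_beneficiaries, sample_size)

        # Example: a group with 180 assigned beneficiaries reports on all 180;
        # a group with 1,000 reports on the first 248 in ranked order.
        assert web_interface_reporting_count(180) == 180
        assert web_interface_reporting_count(1000) == 248
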
    (4) Application of Quality Measures to Non-Patient Facing MIPS Eligible Clinicians

    Section 1848(q)(2)(C)(iv) of the Act provides that the Secretary must give consideration to the circumstances of non-patient facing MIPS eligible clinicians and may, to the extent feasible and appropriate, take those circumstances into account and apply alternative measures or activities that fulfill the goals of the applicable performance category to such clinicians. In doing so, the Secretary must consult with non-patient facing MIPS eligible clinicians.

    In addition, section 1848(q)(5)(F) of the Act allows the Secretary to re-weight MIPS performance categories if there are not sufficient measures and activities applicable and available to each type of MIPS eligible clinician. We assume many non-patient facing MIPS eligible clinicians will not have sufficient measures and activities applicable and available to report and will not be scored on the quality performance category under MIPS. We refer readers to the proposed rule (81 FR 28247) for the discussion of how we address performance category weighting for MIPS eligible clinicians for whom no measures exist in a given performance category.

    In the MIPS and APMs RFI, we solicited feedback on how we should apply the four MIPS performance categories to non-patient facing MIPS eligible clinicians and what types of measures and/or improvement activities (new or from other payment systems) would be appropriate for these MIPS eligible clinicians. We also engaged with seven separate organizations representing non-patient facing MIPS eligible clinicians in the areas of anesthesiology, radiology/imaging, pathology, and nuclear medicine, specifically cardiology. Organizations we spoke with representing several specialty areas indicated that Appropriate Use Criteria (AUC) can be incorporated into the improvement activities performance category by including activities related to appropriate assessments and reducing unnecessary tests and procedures. AUC are distinct from clinical guidelines and specify when it is appropriate to use a diagnostic test or procedure, thus reducing unnecessary tests and procedures. Use of AUC is an important improvement activity as it fosters appropriate utilization and is increasingly used to improve quality in cardiovascular medicine, radiology, imaging, and pathology. These groups also highlighted that many non-patient facing MIPS eligible clinicians have multiple patient safety and practice assessment measures and activities that could be included, such as activities that are tied to their participation in the Maintenance of Certification (MOC) Part IV for improving the clinician's practice. One organization expressed concern that because their quality measures are specialized, some members could be negatively affected when comparing quality scores because they did not have the option to be compared on a broader, more common set of measures. The MIPS and APMs RFI commenters noted that the emphasis should be on measures and activities that are practical, attainable, and meaningful to individual circumstances and that measurement should be outcomes-based to the extent possible. The MIPS and APMs RFI commenters emphasized that improvement activities should be selected from a very broad array of choices and that ideally non-patient facing MIPS eligible clinicians should help develop those activities so that they provide value and are easy to document. For more details regarding the improvement activities performance category, refer to the proposed rule (81 FR 28209). The comments from these organizations were considered in developing these proposals.

    We understand that non-patient facing MIPS eligible clinicians may have a limited number of measures on which to report. Therefore, we proposed at § 414.1335 that non-patient facing MIPS eligible clinicians would be required to meet the otherwise applicable submission criteria, but would not be required to report a cross-cutting measure.

    Thus, we would employ the following strategy for the quality performance criteria to accommodate non-patient facing MIPS eligible clinicians:

    • Allow non-patient facing MIPS eligible clinicians to report on a specialty-specific measure set (which may have fewer than the required six measures).

    • Allow non-patient facing MIPS eligible clinicians to report through a QCDR that can report non-MIPS measures.

    • Non-patient facing MIPS eligible clinicians would be exempt from reporting a cross-cutting measure as proposed at § 414.1340.

    We requested comments on these proposals.

    The following is a summary of the comments we received regarding our proposals on the application of quality measures to non-patient facing MIPS eligible clinicians:

    Comment: Several commenters supported the proposed exemption from reporting a cross-cutting quality measure for non-patient facing MIPS eligible clinicians, as these measures may not be reliable, developmentally feasible, or clinically relevant for these clinicians. These commenters also supported the allowance for non-patient facing MIPS eligible clinicians to report on specialty-specific measure sets.

    Response: We agree; however, as we have noted earlier in this rule, we do not intend to finalize the cross-cutting measure requirements for all MIPS eligible clinicians, including those that are determined to be non-patient facing MIPS eligible clinicians.

    Comment: Another commenter wanted more details on CMS's considerations for non-patient facing MIPS eligible clinicians under the quality performance category.

    Response: We thank the commenter for their question. As we are not finalizing our proposal for cross-cutting measures, we do not need to finalize our proposal for a separate designation for non-patient facing MIPS eligible clinicians at this time. We refer readers to section II.E.1.b. of this final rule with comment period for more information on non-patient facing MIPS eligible clinicians.

    Comment: Other commenters proposed that CMS remove the quality measure requirement related to patient outcomes for non-patient facing MIPS eligible clinicians.

    Response: We proposed to provide an exception for non-patient facing MIPS eligible clinicians from the requirement to report cross-cutting measures, but we believe that outcome measures are of critical importance to quality measurement. Therefore, we do not believe an additional exception is appropriate.

    After consideration of the comments received regarding our proposals on the application of the quality performance category to non-patient facing MIPS eligible clinicians, we are not finalizing the policy as proposed. As previously noted in this rule, we are not finalizing the criterion proposed at § 414.1335 that MIPS eligible clinicians that are considered patient facing must report a cross-cutting measure. The only distinction within the quality performance category for non-patient facing MIPS eligible clinicians as proposed at § 414.1335 was that they were not required to report a cross-cutting measure. We are therefore finalizing at § 414.1335 that non-patient facing MIPS eligible clinicians are required to meet the otherwise applicable submission criteria that apply for all MIPS eligible clinicians for the quality performance category.

    (5) Application of Additional System Measures

    Section 1848(q)(2)(C)(ii) of the Act provides that the Secretary may use measures used for payment systems other than for physicians, such as measures used for inpatient hospitals, for purposes of the quality and cost performance categories. The Secretary may not, however, use measures for hospital outpatient departments, except in the case of items and services furnished by emergency physicians, radiologists, and anesthesiologists.

    In the MIPS and APMs RFI, we sought comment on how we could best use this authority. Some facility-based commenters requested a submission option that allows the MIPS eligible clinician to be scored based on the facility's measures. These commenters noted that the care they provide directly relates to and affects the facility's overall performance on quality measures and that using this score may be a more accurate reflection of the quality of care they provide than the quality measures in the PQRS or the VM program.

    We will consider an option for facility-based MIPS eligible clinicians to elect to use their institution's performance rates as a proxy for the MIPS eligible clinician's quality score. We are not proposing an option for the transition year of MIPS because there are several operational considerations that must be addressed before this option can be implemented. We requested comment on the following issues: (1) whether we should attribute a facility's performance to a MIPS eligible clinician for purposes of the quality and cost performance categories and under what conditions such attribution would be appropriate and representative of the MIPS eligible clinician's performance; (2) possible criteria for attributing a facility's performance to a MIPS eligible clinician for purposes of the quality and cost performance categories; (3) the specific measures and settings for which we can use the facility's quality and cost data as a proxy for the MIPS eligible clinician's quality and cost performance categories; and (4) whether attribution should be automatic or whether a MIPS eligible clinician or group should elect for it to be done and choose the facilities through a registration process. We may also consider other options that would allow us to gain experience. We solicited comments on these approaches.

    The following is a summary of the comments we received regarding our approaches to the application of additional system measures:

    Comment: The majority of commenters that discussed the potential use of facility performance supported our proposal to attribute a facility's performance to a MIPS eligible clinician for purposes of the quality and cost performance categories. Several commenters urged CMS to implement a CMS hospital quality program measure reporting option for hospital-based clinicians in the MIPS as soon as possible. Other commenters believed that using hospital measure performance in the MIPS would help clinicians and hospitals better align quality improvement goals and processes across the care continuum and reduce data collection burden. One commenter thought that attributing facility performance for the purposes of the quality and cost performance categories could encourage harmony between the performance agendas of clinicians and their facilities. Another commenter supported a streamlined measurement approach for MIPS reporting for hospital based clinicians and alignment of MIPS measures with hospital measures.

    One commenter believed that hospital quality reporting should substitute for MIPS quality reporting for hospital-based clinicians, while another commenter specified that hospital measures should only be used for the quality performance category, not for the cost performance category. Another commenter strongly recommended that CMS either allow hospital-based clinicians to use hospital quality measures for MIPS reporting or exempt hospital-based clinicians from the quality performance category until there is substantial alignment of clinician and hospital measures. This commenter requested that such an exemption be the same as the hospital-based clinician exemption under the advancing care information performance category.

    Response: We agree that using hospital measure performance may promote more harmonized quality improvement efforts between hospital-based clinicians and hospitals and promote care coordination across the care continuum. We are considering appropriate attribution policies for facility-based measures and will take commenters' suggestions into account in future rulemaking.

    Comment: Several commenters opposed using a facility's quality and cost performance as a proxy for MIPS eligible clinicians. A few commenters did not support inclusion of other system measures at this time and stated that this could potentially create an additional burden for vendors to provide additional reporting measures which they had not previously developed or mapped out workflows for. One commenter did not support attributing a facility's performance to a MIPS eligible clinician for the quality and cost performance categories, noting that facility-level performance would not be appropriate or representative of the MIPS eligible clinician's individual performance. One commenter expressed concern that this approach would potentially benefit MIPS eligible clinicians with lower individual performance and would be a detriment for those with higher performance, for whom being assessed based on facility performance could potentially lead to lower ratings. Another commenter expressed concern that MIPS eligible clinicians substituting their institution's performance for their own might give an unfair advantage to MIPS eligible clinicians from larger systems. This commenter also requested that CMS pilot system measures prior to any implementation of facility performance attribution under MIPS.

    Another commenter opposed our proposed use of facility-level measures for accountability at the individual level because they believed facility performance is not within the control of individual clinicians. Another commenter requested that facility-based MIPS eligible clinicians leverage the continued expansion of specialty-specific measure sets through QCDRs and qualified registries instead of using facility-based scores. Another commenter noted that adding an additional group reporting option for facility-based MIPS eligible clinicians on top of the existing group reporting option is confusing and therefore recommended that CMS remove this reporting option from the proposal. One commenter encouraged revisiting this proposal in future years.

    Response: The commenter is correct that many quality measures are not designed for team-based care in the inpatient setting, and we intend to examine how best to measure care provided by hospitalists and other team-based MIPS eligible clinicians in the future. We believe that facility-based quality measures have the potential to harmonize quality improvement efforts between hospital-based clinicians and hospitals, and promote care coordination across the care continuum. We agree that it is important to develop a thoughtful attribution policy that captures the eligible clinician's contribution and intend to develop appropriate attribution policies for facility-based measures.

    Comment: One commenter requested clarification on how CMS would expect reporting of facility-based measures to work under MIPS in instances where hospitals, their practices, and their EDs all use separate EHRs. This commenter also requested clarification on CEHRT/certification requirements and what vendors would be required to do under such a scenario. Another commenter wanted to know whether MIPS eligible clinicians would be subject to a facility's performance score for quality and cost if facility-based measures were to be integrated into MIPS in future years. One commenter recommended CMS make additional information available regarding the use of facility measures for the cost performance category and publish information about the extent to which this option may improve participation by clinicians who are predicted to be unable to participate in the cost performance category of MIPS. Another commenter requested clarification on the specific MIPS eligible clinicians that would be considered facility-based MIPS eligible clinicians.

    Response: We recognize that there are challenges associated with health information exchange within institutions and should we adopt policies for facility-based measures in future rulemaking, we would provide more information via subregulatory guidance. We believe that it is important to develop a thoughtful attribution policy that captures the MIPS eligible clinician's contribution and intend to develop appropriate attribution policies for facility-based measures.

    Comment: One commenter requested that CMS develop MIPS participation options that apply a hospital's quality and cost performance category measures to its employed clinicians and that CMS seek input from hospitals, clinicians, and other stakeholders to establish processes and design the implementation of this option. Another commenter recommended that, prior to implementing any facility-level measures into the MIPS program, CMS work with measure stewards and applicable specialties to ensure that measure specifications are appropriately aggregated to the clinician level and are reflective of those factors within the clinician's control.

    Response: We appreciate the suggestions and intend to work closely with stakeholders as we examine how best to measure care provided by hospitalists and other team-based MIPS eligible clinicians in the future. We believe that it is important to develop a thoughtful attribution policy that captures the MIPS eligible clinician's (including those employed by hospitals) contribution and intend to develop appropriate attribution policies for facility-based measures.

    Comment: One commenter suggested CMS use active membership on a hospital's medical staff or proof of an employment contract that is effective for the measurement period as evidence of an existing relationship between the clinician and a facility, which will be needed in order to verify a clinician's eligibility to use facility-based measures. However, several commenters believed that claims data elements could provide sufficient proof of such a relationship. Another commenter recommended CMS use specific claims data elements such as inpatient and hospital outpatient department place-of-service codes as evidence. One commenter suggested that CMS could consider adopting some of the following criteria: the facility-based MIPS eligible clinician or group is an employee of the facility; the facility-based MIPS eligible clinician or group is not an employee of the facility, but has a contract with the facility or the privileges needed to perform services at the facility; and the MIPS eligible clinician or group is an owner, co-owner, and/or investor of the facility and performs medical services in the facility.

    The same commenter proposed the following options for attribution: Option 1: The facility-based MIPS eligible clinician performed a plurality of his or her services at the facility in the performance period. This proposed method for attribution generally aligned with the Value-Based Payment Modifier two-step attribution methodology for purposes of MIPS quality and cost measurement proposed in other parts of the MACRA rule, which attributes a given patient to a clinician if the clinician has performed a plurality of the primary care services for a patient in the performance period. Option 2: The facility-based MIPS eligible clinician or group would have a payment amount threshold or patient count threshold at the facility that meets the payment amount threshold or patient count threshold finalized for purposes of eligibility to participate in an Advanced APM.
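    For illustration only, the commenter's Option 1 (attributing a facility-based clinician to the facility at which he or she performed a plurality of services during the performance period) could be sketched as follows. The sketch is hypothetical; the data layout and names are not part of any CMS specification.

```python
# Hypothetical sketch of the commenter's Option 1: attribute a clinician to
# the facility where he or she furnished a plurality of services during the
# performance period. The data layout is illustrative only.
from collections import Counter


def plurality_facility(service_records):
    """service_records: iterable of (facility_id, service_count) pairs for one
    clinician in the performance period. Returns the facility accounting for
    the largest share of services, or None if no services were furnished."""
    totals = Counter()
    for facility_id, count in service_records:
        totals[facility_id] += count
    return totals.most_common(1)[0][0] if totals else None


# Example: a clinician furnishing most services at the first facility.
print(plurality_facility([("CCN-0001", 120), ("CCN-0002", 85)]))  # CCN-0001
```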

    Another commenter mentioned that, in adopting additional system measures, CMS should ensure that attribution is appropriate and relevant to clinicians, consider a methodology that enables proportional attribution that is as close a proxy for a group as possible, and ensure that clinician performance is captured across settings.

    Response: We will continue to seek opportunities to improve our attribution process, including the consideration of claims-based codes with place-of-service modifiers among the array of options to best attribute eligible clinicians.

    Comment: The majority of commenters that supported the use of additional system measures supported them only in cases where the facility-based clinician could elect use of the facility-based measures. They did not support automatic attribution of facility-based measures. Some commenters believed that the MIPS eligible clinician should be able to elect to be attributed to the facility and also choose the appropriate facility through a registration process. One commenter noted that many MIPS eligible clinicians see patients at multiple facilities, and thus should be able to choose which facility would most accurately align with their actual practice patterns.

    One commenter recommended CMS explore the possibility of allowing some clinicians to report their skilled nursing facility (SNF) scores as their MIPS scores. Another commenter urged as much flexibility as possible in the program and believed that SNF-based measurement should always be an optional approach, particularly for those who practice in a single facility. Another commenter recommended that quality and cost performance measures under MIPS always be attributed to the SNF TIN, as incentive payment adjustments would only be applicable at the facility TIN level. Furthermore, the commenter stated that the attribution to the SNF TIN would need to be automatic for clinicians working in facility-based outpatient environments. One commenter recommended self-nomination at the TIN level because this would allow a group to attest that it is comprised primarily of hospital-based clinicians. This commenter noted that this approach would ensure that only the clinicians who wish to have this level of facility alignment are included in the program, would permit clinicians to select which hospitals are appropriate for alignment, would allow for the inclusion of multiple hospitals, and would account for the fact that many hospitalist groups practice in multiple locations. The commenter also stated that this option would allow clinicians to align their performance on selected measures with their hospitals, which would support the drive toward team-based, coordinated care.

    One commenter noted the challenges faced by clinicians and groups that provide care across multiple facilities and recommended hospital-level risk-adjusted outcome measurement that is attributable to the principal clinician or group responsible for the primary diagnosis. Another commenter stated that as an alternative to substituting facility measures under the MIPS program, facility-based clinicians ought to be given the option of being treated as participating in an Advanced APM.

    One commenter requested further clarification on proxy scoring using a facility's quality reporting. This commenter requested examples of proxy scoring and wanted to see quality performance category scoring in practice before making a recommendation. Another commenter urged CMS to allow the use of PCHQR scores as a proxy for quality performance for clinicians at PPS-exempt cancer hospitals. A couple of commenters urged CMS to make nearly all of the measures from CMS's hospital quality reporting and pay-for-performance programs available for use in hospital-based clinician reporting options. One commenter proposed the following criteria for evaluating measures: clinicians could use quality and cost measures for patient conditions and episode groups (currently under development) for which CMS has assigned them a clearly defined and clinically meaningful relationship under the patient relationship assignment methodology (currently under development). This commenter suggested that each evidence-based quality measure would be counter-balanced with an appropriate cost measure and that measures potentially could focus on patient safety, high quality care delivery, patient-centered care, communication, care coordination, and cost efficiency.

    Several commenters suggested measures to be adopted. One commenter suggested the following: PCP notification at admission, PCP notification at discharge, percentage of beneficiaries with an appointment with a PCP within 7 days, and percentage of beneficiaries with an appointment with a PCP within 30 days. This commenter believed that facility-based MIPS eligible clinicians play a valuable and underutilized role in care coordination and that Medicare stakeholders will benefit by MIPS eligible clinician inclusion versus exclusion. This commenter further recommended that facility-based MIPS eligible clinicians have the ability to submit via institutional metrics and suggested PCP measures. Another commenter suggested several payment and cost measures such as: the Medicare Spending Per Beneficiary measure; Pneumonia Payment per Episode of Care; the Cellulitis Clinical Episode-based Payment Measure; the Kidney/UTI Clinical Episode-based Payment Measure; and the Gastrointestinal Hemorrhage Clinical Episode-based Payment Measure. Another commenter recommended the following measures: (1) Severe Sepsis and Septic Shock: Management Bundle; (2) HCAHPS (physician questions and 3-Item Care Transition Measure); (3) Hospital-wide All-Cause Unplanned Readmission; (4) NHSN Measures (including CAUTI, CLABSI, CDI, and MRSA); (5) COPD Measures (COPD 30-Day Mortality Rate and COPD Readmission Rate); (6) Pneumonia Measures (Pneumonia 30-Day Mortality Rate, Pneumonia 30-Day Readmission Rate, and Pneumonia Payment per Episode of Care); (7) Heart Failure Measures (Heart Failure 30-Day Mortality Rate, Heart Failure 30-Day Readmission Rate, and Heart Failure Excess Days); (8) Payment Measures (MSPB); and (9) Chart-Abstracted Clinical Measures (Influenza Immunization and Admit Decision Time to ED Departure Time for Admitted Patients).

    One commenter believed that clinicians who are MIPS eligible clinicians, and work primarily in either an outpatient or inpatient site—or both, as cancer care clinicians often do—should have the ability to choose the measures most relevant to them. A commenter recommended that MIPS eligible clinicians be able to align with hospitals, surgery centers, or other types of institutions to utilize patient experience survey metrics that are already collected as part of other quality reporting programs, in order to enable these metrics to be used as facility-based measures. Another commenter believed it was important for CMS to ensure that only visits, medications, tests, surgeries, and other components of maintenance for a disease that are ordered by a MIPS eligible clinician are attributed to the MIPS eligible clinician's quality and cost scores.

    One commenter urged CMS to enable a transplant surgeon and other members of the transplant team to elect to use their institution's performance rates under the outcomes requirements set forth at 42 CFR 482.80(c) and 482.82(c) as a proxy for their quality performance category score. This commenter believed that a transplant surgeon or other MIPS eligible clinician or group's election to use their institution's performance data should not be automatic but the clinician's choice. Another commenter noted that a facility-based performance option would be beneficial to those clinicians involved in palliative care, and requested CMS allow for measures such as those used under the Hospice Quality Reporting Program to be considered facility-based measures under MIPS.

    Response: We would like to explain that under section 1848(q)(5)(H) of the Act we may include data submitted by MIPS eligible clinicians with respect to items and services furnished to individuals who are not individuals entitled to benefits under part A or enrolled under part B. We will take these suggestions into consideration as we move towards implementing these additional flexibilities in the future.

    We will take these comments into consideration in future rulemaking.

    (6) Global and Population-Based Measures

    Section 1848(q)(2)(C)(iii) of the Act provides that the Secretary may use global measures, such as global outcome measures, and population-based measures for purposes of the quality performance category.

    Under the current PQRS program and the Medicare EHR Incentive Program, quality measures are categorized by domains, which include global and population-based measures. We identified population and community health measures as one of the quality domains related to the CMS Quality Strategy and the NQS priorities for health care quality improvement discussed in the proposed rule (81 FR 28192). Population-based measures are also used in the Medicare Shared Savings Program and for groups in the VM Program. For example, in 2015, clinicians were held accountable for a component of the AHRQ population-based Ambulatory Care Sensitive Condition measures as part of a larger set of Prevention Quality Indicators (PQIs). Two broader composite measures of acute and chronic conditions are calculated using the respective individual measure rates for VM Program calculations. These PQIs assess the quality of the health care system as a whole, and especially the quality of ambulatory care, in preventing medical complications that lead to hospital admissions.

    In the CY 2015 PFS final rule with comment period (79 FR 67909), the Medicare Payment Advisory Commission (MedPAC) commented that we should move quality measurement for ACOs, Medicare Advantage (MA) plans, and FFS Medicare in the direction of a small set of population-based outcome measures, such as potentially preventable inpatient hospital admissions, ED visits, and readmissions. In the June 2014 MedPAC Report to the Congress: Medicare and the Health Care Delivery System, MedPAC suggested considering an alternative quality measurement approach that would use population-based outcome measures to publicly report on quality of care across Medicare's three payment models: FFS, Medicare Advantage, and ACOs.

    In creating policy for global and population-based measures for MIPS, we considered a more broad-based approach to the use of “global” and “population-based” measures in the MIPS quality performance category. After considering the above, we proposed to use the acute and chronic composite measures of the AHRQ PQIs that meet a minimum sample size in the calculation of the quality measure domain for the MIPS total performance score; see Table B of the Appendix in this final rule with comment period. MIPS eligible clinicians would be evaluated on their performance on these measures in addition to the six required quality measures discussed previously and summarized in Table A of the Appendix in this final rule with comment period. Based on experience in the VM Program, these measures have been determined to be reliable with a minimum case size of 20. Average reliabilities for the acute and chronic measures range from 0.64 to 0.79 for groups and individual MIPS eligible clinicians. We intend to incorporate a clinical risk adjustment into the PQI composites as soon as feasible and continue to research ways to develop and use other population-based measures for the MIPS program that could be applied to greater numbers of MIPS eligible clinicians going forward. In addition to the acute and chronic composite measures, we also proposed to include the all-cause hospital readmissions (ACR) measure from the VM Program, as we believe this measure also encourages care coordination. In the CY 2016 Medicare PFS final rule (80 FR 71296), we did a reliability analysis that indicates this measure is not reliable for solo clinicians or practices with fewer than 10 clinicians; therefore, we proposed to limit this measure to groups with 10 or more clinicians and to maintain the current VM Program requirement of 200 cases. Eligible clinicians in groups with 10 or more clinicians with sufficient cases would be evaluated on their performance on this measure in addition to the six required quality measures discussed previously and summarized in Table A of the Appendix of this final rule with comment period.
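    For illustration only, the proposed thresholds described above (a 20-case minimum for the PQI composites and, for the ACR measure, groups of 10 or more clinicians with at least 200 cases) can be expressed as a simple eligibility check. The sketch below is hypothetical; the function and parameter names are not part of the measure specifications, and, as discussed later in this section, the final policy applies the ACR measure only to groups of more than 15 clinicians.

```python
# Hypothetical sketch of the proposed inclusion thresholds for the
# population-based measures; names and defaults are illustrative only.

def pqi_composite_applies(case_count, case_minimum=20):
    """Proposed policy: score the AHRQ PQI acute/chronic composites only when
    the attributed case count meets the 20-case minimum."""
    return case_count >= case_minimum


def acr_measure_applies(group_size, case_count, min_group_size=10, case_minimum=200):
    """Proposed policy: apply the all-cause hospital readmission (ACR) measure
    only to groups of 10 or more clinicians with at least 200 attributed cases.
    (The final rule instead limits the measure to groups of more than 15.)"""
    return group_size >= min_group_size and case_count >= case_minimum


# Example: a 12-clinician group with 250 attributed cases meets the proposed thresholds.
print(acr_measure_applies(group_size=12, case_count=250))  # True
```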

    Furthermore, the proposed claims-based population measures would rely on the same two-step attribution methodology that is currently used in the VM Program (79 FR 67961 through 67694). The attribution focuses on the delivery of primary care services (77 FR 69320) by both primary care physicians and specialists. This attribution logic aligns with the total per capita cost measure and is similar to, but not exactly the same as, the assignment methodology used for the Shared Savings Program. For example, the Shared Savings Program definition of primary care services can be found at § 425.20 and excludes claims for certain Skilled Nursing Facility (SNF) services that include the POS 31 modifier. In the proposed rule (81 FR 28199), we proposed to exclude the POS 31 modifier from the definition of primary care services. As described in the proposed rule (81 FR 28199), the attribution would be modified slightly to account for the MIPS eligible clinician identifiers. We solicited comments on additional measures or measure topics for future years of MIPS and on the attribution methodology. We requested comments on these proposals.
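    For illustration only, a place-of-service exclusion of the kind described above might be applied to claim lines before counting primary care services for attribution, as in the sketch below. The field names and the code set shown are hypothetical and do not reflect the actual VM Program or Shared Savings Program attribution specifications.

```python
# Hypothetical sketch: exclude claim lines billed with place-of-service (POS)
# code 31 (skilled nursing facility) before counting primary care services for
# attribution. Field names and the code list are illustrative only.

PRIMARY_CARE_CODES = {"99211", "99212", "99213", "99214", "99215"}  # illustrative subset
SNF_POS_CODE = "31"


def primary_care_lines(claim_lines):
    """Yield claim lines that would count as primary care services under a
    definition that excludes lines billed with POS code 31."""
    for line in claim_lines:
        if line["code"] in PRIMARY_CARE_CODES and line["pos"] != SNF_POS_CODE:
            yield line


claims = [
    {"code": "99213", "pos": "11"},  # office visit -> counted
    {"code": "99213", "pos": "31"},  # SNF place of service -> excluded
]
print(list(primary_care_lines(claims)))  # only the office visit line remains
```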

    The following is a summary of the comments we received regarding our proposal on global and population-based measures:

    Comment: Several commenters supported the importance of including sociodemographic factor risk adjustments in the quality and cost measures used to determine payments to MIPS eligible clinicians. One commenter stated that risk adjustment is a widely accepted approach to account for factors outside of the control of clinicians. Another commenter supported adjusting quality measures to reflect sociodemographic status (SDS), when appropriate, because measurement systems that do not incorporate such factors into evaluation can shift resources away from low-income communities through penalties. The commenter requested CMS adopt adjustments to quality measures that are affected by SDS, such as readmission within 30 days of discharge. Another commenter stated that sociodemographic issues, such as the inability to purchase medication and lack of family support, can increase costs related to future MIPS eligible clinician visits, emergency room visits, and readmissions. The commenter requested a level of protection for situations beyond a clinician's control that can play a major role in an individual's health outcome.

    A few commenters supported the inclusion of risk adjustment in measures and suggested that CMS examine ASPE's future recommendations. One commenter recommended that CMS examine ASPE's recommendations to consider other strategies as well such as stratification. Other commenters stated that the stakeholders affected by these decisions should have an opportunity to review the risk adjustment findings once issued by ASPE, and comment on how CMS proposes to incorporate the ASPE findings into its quality metrics.

    Several commenters urged CMS to work with the National Quality Forum (NQF) on how best to proceed with risk adjustment of quality and cost measures for sociodemographic status. One commenter recommended CMS adopt the NQF recommendation to consider risk adjustment for measures that have a conceptual relationship between sociodemographic factors and outcomes.

    Response: We appreciate the feedback on the role of socioeconomic status in quality measurement. We continue to evaluate the potential impact of social risk factors on measure performance. One of our core objectives is to improve beneficiary outcomes. We want to ensure that complex patients as well as those with social risk factors receive excellent care. While we believe the MIPS measures are valid and reliable, we will continue to investigate methods to ensure all clinicians are treated as fairly as possible within the program. Under the Improving Medicare Post-Acute Transformation (IMPACT) Act of 2014, ASPE has been conducting studies on the issue of risk adjustment for sociodemographic factors on quality measures and cost, as well as other strategies for including SDS evaluation in CMS programs. We will closely examine the ASPE studies when they are available and incorporate findings as feasible and appropriate through future rulemaking. We look forward to working with stakeholders in this process. We will also monitor outcomes of beneficiaries with social risk factors, as well as the performance of the MIPS eligible clinicians who care for them to assess for potential unintended consequences such as penalties for factors outside the control of clinicians.

    We additionally note that the National Quality Forum (NQF) is currently undertaking a 2-year trial period in which new measures and measures undergoing maintenance review will be assessed to determine if risk adjusting for sociodemographic factors is appropriate. This trial entails temporarily allowing inclusion of sociodemographic factors in the risk-adjustment approach for some performance measures. At the conclusion of the trial, NQF will issue recommendations on inclusion of sociodemographic factors in risk adjustment. We intend to continue engaging in the NQF process as we consider the appropriateness of adjusting for sociodemographic factors in our MIPS measures.

    Comment: Several commenters recommended that CMS develop the three population health measure benchmarks in the quality performance category by specialty and region to ensure more accurate, appropriate comparisons for the measures. The commenters noted this approach would help facilitate comparisons and improve the relevance of information for patients. The commenters stated the MACRA law does not preclude CMS from considering specialties that practice in settings such as nursing homes, assisted living, or home health and treating them in a different manner, but stated it is inappropriate to assume they can be compared to other internal medicine/family physicians that practice in the ambulatory settings. Other commenters supported the proposed three population-based measures that will be calculated using claims.

    Response: We appreciate the commenters' support. We continue to analyze the best means of assessing and comparing facility-based clinicians in nursing homes, assisted living, or home health environments versus more routine ambulatory care settings. We will consider the feasibility of adopting disparate benchmarks and regional adjustments for the population health measures in the future. However, as discussed in section II.E.5.b.(3) of this final rule with comment period, for the transition year of MIPS, we are not finalizing our proposal to require MIPS eligible clinicians and groups to report a cross-cutting measure because we believe we should provide flexibility for MIPS eligible clinicians during the transition year to adjust to the program.

    Comment: Another commenter requested that CMS simplify the scoring methodology in the quality performance category by removing the “population health” measures and avoiding creating different scoring subcategories—in particular creating subcategories for MIPS eligible clinicians in practices of 9 or fewer, which appears to create different definitions of “small practices” throughout the MIPS program. The commenter recommended that at a minimum, CMS should provide accommodations for MIPS eligible clinicians based on the statute's definition of a small practice—meaning 15 or fewer professionals.

    Response: We have examined the global and population-based measures closely and have decided to not finalize these measures as part of the quality performance category score. Specifically, we are not finalizing the acute and chronic composite measures of AHRQ PQIs. We will, however, calculate these measures for all MIPS eligible clinicians and provide feedback for informational purposes as part of the MIPS feedback.

    Comment: Some commenters believed that system-level and population-based measures should be applicable to MIPS eligible clinicians, such as pathologists, who typically furnish services that do not involve face-to-face interaction with patients. The commenters stated that activities such as blood utilization, infection control, and test utilization activities, including committee participation, should be credited to the whole group, as pathology practices typically function as one unit with different members of the group having different roles. The commenters urged CMS to be flexible and not to focus exclusively on measures and activities that involve face-to-face encounters, as these would have an unfair and negative impact on the MIPS final scores of non-patient facing MIPS eligible clinicians in these specialties.

    Response: We agree that non-patient facing MIPS eligible clinicians need quality measures that are applicable to their practice. We encourage commenters to suggest specific additional measures that we should consider in the future.

    Comment: Other commenters believed the population-based measures would be difficult to implement without prospective enrollment that informs MIPS eligible clinicians in advance of the patients attributed to them.

    Response: We will make every effort to provide as much information as possible to MIPS eligible clinicians about the patients that will be attributed to them. However, we do not believe prospective enrollment to be feasible at this time.

    Comment: Several commenters recommended that CMS use its discretion to make the proposed global and population-based measures optional under the improvement activities performance category, rather than including these VM Program measures in the MIPS quality performance category as population-based health measures: the acute composite, chronic composite, and ACR measure. The commenters were concerned that these measures are primarily intended to be used and reported at the metropolitan area or county level and have not been adequately tested, rigorously assessed for appropriate sample sizes, or risk adjusted for application at the clinician or group level. The commenters stated that the method by which reliability rates are arrived at must be transparent, and urged CMS to publicize the data supporting the proposal's statement that, based on the VM Program, the acute and chronic composites had an average reliability range of 0.64 to 0.79. The commenters recommended that if CMS moves forward with the three population health measures and does not make them optional, MIPS eligible clinician performance on any administrative claims measure should not be used for payment or be publicly reported unless the measure has a reliability of 0.80, which is generally considered by statisticians and researchers to be sufficiently reliable to make decisions about individuals based on their observed scores. The commenters recommended that, in addition, the risk adjustment model should be developed, tested, and released for comment prior to implementation of the measures. Another commenter did not support the measures that are reliable with a minimum case size of 20 and with an average reliability range of 0.64 to 0.79 because the commenter stated that anything less than 0.9 is unreliable. The commenter requested that CMS not implement this criterion until a risk adjustment can be implemented. Another commenter recommended CMS reconsider its use of a minimum sample size of 20 for calculating the cost measures, as extensive work has been done on both quality measures and cost measures pointing to the need for a sample size no smaller than 100 to achieve statistical stability.

    Response: We have examined the global and population-based measures closely and have decided not to finalize these measures as part of the quality performance category score. Specifically, we are not finalizing the use of the acute and chronic composite measures of the AHRQ PQIs. We agree with commenters that additional enhancements need to be made to these measures for inclusion of risk adjustment. We will, however, calculate these measures for all MIPS eligible clinicians and provide feedback for informational purposes as part of the MIPS feedback.

    Comment: One commenter opposed CMS' proposal to score population based measures during the transition year of MIPS. The commenter requested CMS phase-in population-based measures during the first 2 years of MIPS as test measures with feedback (but not scored) so that MIPS eligible clinicians and CMS can learn how population level measures will impact the MIPS program.

    Response: We agree with the commenter that further testing and enhancements are required for some of these measures prior to their inclusion in the MIPS for payment purposes. Therefore, we are no longer requiring two of the three population health measures and are only requiring the ACR measure for groups of more than 15, instead of our proposed approach of groups of 10 or more, assuming the case minimum of 200 cases has been met, as discussed in section II.E.6. of this final rule with comment period. If the case minimum of 200 cases has not been met, we will not score this measure. The MIPS eligible clinician will not receive a zero for this measure; rather, this measure will not apply to the MIPS eligible clinician's quality performance category score. We will, however, calculate these measures for all MIPS eligible clinicians and provide feedback for informational purposes as part of the MIPS feedback.

    Comment: Another commenter recommended assessing the ACR measure over a longer time period as the comparable measure used for hospitals is found to be reliable and valid only when using a 3-year rolling average. The commenter appreciated that this measure is limited to groups with 10 or more MIPS eligible clinicians and requires 200 cases.

    Response: We believe that the measure's limitation to groups with 16 or more MIPS eligible clinicians, as well as the requirement for at least 200 cases, ensures that the measure is sufficiently reliable for MIPS purposes. To explain, we will not apply the ACR measure to solo practices or small groups (groups of 15 or fewer); we will apply the ACR measure to groups of more than 15 that meet the case volume.

    Comment: Another commenter recommended that the population-based measures only be applied to MIPS groups.

    Response: We attempted to structure the MIPS program to be as inclusive as possible for quality measurement purposes. Our intention was to ensure that as many MIPS eligible clinicians as possible could report on as many measures as possible.

    Comment: Other commenters stated that MIPS is designed to determine aggregate population-based outcome measures across clinicians in a local area sharing the same hospitals and clinicians. The commenters proposed that CMS share with MIPS participants average MIPS final scores by clinician category and cross-reference comparative Advanced APM performance.

    Response: We do not believe MIPS is designed to determine aggregate population-based outcome measures. However, we have discretion to pursue this approach if we deem it appropriate. We will consider these suggestions as we develop appropriate feedback forms for MIPS eligible clinicians. Our intention is to provide as much information as possible to MIPS eligible clinicians to assist with quality improvement efforts.

    Comment: Other commenters disagreed with the proposed use of the 30-day ACR measure because they believed that doing so will potentially penalize clinicians who care for the most complex patients and those of lowest SES. They also indicated that the measure is generally inappropriate given the lack of MIPS eligible clinician control over some of the factors that lead to readmission. Another commenter believed MIPS eligible clinicians are penalized for readmissions, but not rewarded for successfully keeping people out of the hospital completely. Other commenters expressed concern for the use of the ACR measure because there are a multitude of factors that contribute to readmission making it a difficult outcome to measure. The commenters believed that there needs to be more studies prior to using the measure at the MIPS eligible clinician level, including the impact on MIPS eligible clinicians who serve disadvantaged populations. In addition, the commenters believed that the measure requires risk-adjustment for SDS factors, community factors, and the plurality of care/care coordination. The commenters sought clarity on how the triggering of an index episode and attribution of ACR to any particular MIPS eligible clinician or group larger than 10 will be relevant. Other commenters opposed the ACR measure due to concern that it is not risk adjusted by severity level or tertiary care facility. The commenters were also concerned that MIPS eligible clinicians and hospitals are trimming back on SNF transfers to decrease bundled costs, increasing readmission rates. Some commenters recommended using National Committee for Quality Assurance's (NCQA's) ACR measure and not the ACR measure which is specified for hospitals. Other commenters urged CMS to reconsider requiring the use of the ACR measure, as they were concerned with the reliability and validity levels associated with applying the measure to a single clinician in a given year. They noted that the comparable measure for hospitals requires a 3-year rolling average to mitigate potential variability, and therefore, requested CMS explore assessing the measure over a longer time period.

    Response: We appreciate the commenters' concerns and suggestions. However, we have examined the ACR measure closely and have decided to finalize the ACR measure from the VM Program for groups with 16 or more eligible clinicians as part of the quality performance category for the MIPS final score. Readmissions are a potential cause of patient harm, and we believe it necessary to incentivize their reduction. We believe measuring and holding MIPS eligible clinicians accountable for readmissions is important for quality improvement, particularly given the harm that patients face when readmitted. We hold hospitals and post-acute care facilities accountable for readmissions as well; holding all clinicians accountable for readmissions incentivizes better coordination of care across care settings and clinicians.

    We would like to explain that the all-cause hospital readmission measure from the VM Program uses 1 year of inpatient claims to identify eligible admissions and readmissions, as well as up to 1 year of prior inpatient data to collect diagnoses for risk adjustment. The measure reports a single composite risk-standardized rate derived from the volume-weighted results of hierarchical regression models for five specialty cohorts. Each specialty cohort model uses a fixed, common set of risk-adjustment variables. It is important to note a couple of features of the risk adjustment design developed for CMS by the Yale School of Medicine Center for Outcomes Research & Evaluation (CORE). First, the ACR measure involves estimating separate risk adjustment models for seven different cohorts of medical professionals (general medicine, surgery/gynecology, cardiorespiratory, cardiovascular, neurology, oncology, and psychiatry) because conditions typically cared for by the same team of clinicians are likely to reflect similar levels of readmission risk. The risk-adjusted readmission rates for each cohort are then combined into a single adjusted rate. Second, for each cohort, the risk adjustment models control for age, principal diagnoses, and a broad range of comorbidities (identified from the patient's clinical history over the year preceding the index admission, not just at the time of the hospitalization). Please note that the measure has been included for the last several years in the Annual Quality and Resource Use Reports so clinician groups and clinicians can find out how they perform on the measure and use the data in the reports to improve their performance. We will not apply the readmission measure to solo practices or small groups (groups of 15 or fewer). We will apply the readmission measure to groups of more than 15 that meet the case volume of 200 cases. In addition, we continually reassess reliability and will monitor MIPS eligible clinicians' performance under the MIPS for unintended consequences.

    It is important to note that for the VM Program, an index episode for the readmission measure is triggered when a beneficiary who has been attributed to a TIN is hospitalized with an eligible hospital admission for the measure. Note that the index admission is not directly attributed to a TIN as in the case of an episode for the Medicare Spending per Beneficiary measure; rather, index admissions are tied to the beneficiaries attributed to the TIN per the two-step methodology. Regarding evidence for whether the measure incentivizes reductions in readmissions, we refer readers to The New England Journal of Medicine article available at http://www.nejm.org/doi/full/10.1056/NEJMsa1513024 which concluded that readmission trends are consistent with hospitals' responding to incentives to reduce readmissions, including the financial penalties for readmissions under the Affordable Care Act. With respect to SDS factors, we refer readers to our discussion above of the NQF's 2-year trial and ASPE's ongoing research. We will continue to assess the measure's results and will consider the commenter's feedback in the future.
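    For illustration only, the volume-weighted combination of cohort-level risk-standardized rates described in this response can be sketched as follows. The cohort labels and rates shown are hypothetical, and the sketch is not the CORE measure specification.

```python
# Hypothetical sketch of combining cohort-level risk-standardized readmission
# rates into a single volume-weighted composite rate. Values are illustrative.

def volume_weighted_composite(cohort_results):
    """cohort_results: list of (risk_standardized_rate, admission_count) pairs,
    one per specialty cohort. Returns the volume-weighted composite rate, or
    None if there are no eligible admissions."""
    total_admissions = sum(count for _, count in cohort_results)
    if total_admissions == 0:
        return None  # no eligible admissions; the measure would not be scored
    weighted_sum = sum(rate * count for rate, count in cohort_results)
    return weighted_sum / total_admissions


example = [
    (0.152, 120),  # e.g., a medicine cohort
    (0.098, 40),   # e.g., a surgery/gynecology cohort
    (0.171, 60),   # e.g., a cardiorespiratory cohort
]
print(round(volume_weighted_composite(example), 4))  # 0.1474
```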

    Comment: Another commenter believed that global outcome measures and population-based measures should not be included in the MIPS quality score until there is further understanding of the reliability of a measurement volume of 20 patients, how accountability will be assigned to the MIPS eligible clinicians who have control, how conditions that are not treated by the surgeon will be included or excluded, how population-based measures will be used at the MIPS eligible clinician level, the reliability and validity of measures if modified, the need for risk adjustment of the composite measures, whether adjustments for sociodemographic status will be considered, and the potential unintended consequences of including resource utilization.

    Response: We advocate the continued implementation of population-based measures and will continue to work with stakeholders to improve and expand them over time. We note that these measures have been used in other programs, such as the Medicare Shared Savings Program and for groups in the VM Program, and are aligned with the National Quality Strategy.

    Comment: Some commenters urged CMS to not maintain administrative claims-based measures, which were developed for use at the community or hospital level, and often result in significant attribution issues. The commenters stated these measures tend to have low statistical reliability when applied at the individual clinician level, and at times at the group level. They are also calculated with little transparency, which confuses and frustrates MIPS eligible clinicians. The commenters stated that scores on these particular measures do not provide actionable feedback to MIPS eligible clinicians on how they can improve.

    Response: We believe administrative claims-based measures are a necessary option to minimize reporting burden for MIPS eligible clinicians. The ACR measure has been used in both the Shared Savings Program and the VM Program for several years. We would like to note that, at the minimum case sizes applied for the VM Program, average reliability for the ACSC composite measures exceeds 0.40 even for TINs with one EP.

    We can understand why commenters see these measures as less transparent and actionable compared to the PQRS process measures. However, this is largely driven by risk adjustment and shrinkage (in the case of the ACR measure), both of which are attempts to protect clinicians from “unfair” outcomes, albeit at the cost of decreased transparency. In the context of the QRURs, we have provided supplementary tables to the QRUR containing patient level information on admissions, including reason for admission (principal diagnosis) and whether it was followed by an unplanned readmission, to support both more transparency as well as actionability. We intend to work with MIPS eligible clinicians and other stakeholders to continue improving available measures and reporting methods for MIPS.

    We continually reassess measures, and this is why we have worked with measure owners and stakeholders to improve the risk adjustment methodology for these measures. In addition, we have used these measures under the VM Program and have provided feedback to groups and individual clinicians for the last several years. Further, we apply case minimums to ensure measures are reliable for groups and individual clinicians. The measures are outcome focused and are calculated on behalf of the clinician using Medicare claims and other administrative data. In addition, they are low burden, with the goal that groups and individual clinicians invest in care redesign activities to improve outcomes for patients where good ambulatory care coordination reduces avoidable admissions.
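    For illustration only, the reliability-weighted shrinkage mentioned in this response can be sketched in a simplified form as follows. This is a generic illustration of pulling a low-volume clinician's observed rate toward an overall mean; it is not the estimator actually used for the ACR measure, and all values shown are hypothetical.

```python
# Generic illustration of reliability-weighted shrinkage toward an overall
# mean; a simplified stand-in for the kind of adjustment described above, not
# the actual ACR estimator. All parameter values below are hypothetical.

def shrunken_rate(observed_rate, case_count, overall_mean,
                  signal_variance, noise_variance_per_case):
    """Weight the observed rate by an estimated reliability (signal variance
    over total variance); clinicians with few cases are pulled more strongly
    toward the overall mean."""
    noise_variance = noise_variance_per_case / case_count
    reliability = signal_variance / (signal_variance + noise_variance)
    return reliability * observed_rate + (1 - reliability) * overall_mean


# A high observed rate based on only 20 cases is pulled close to the mean.
print(shrunken_rate(observed_rate=0.25, case_count=20, overall_mean=0.15,
                    signal_variance=0.0004, noise_variance_per_case=0.13))  # ≈ 0.156
```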

    Comment: Another commenter had concerns about the proposal to include population health and prevention measures for all MIPS eligible clinicians, stating that some specialists and sub-specialists have no meaningful responsibility for population or preventive services.

    Response: We believe that all MIPS eligible clinicians, including specialists and subspecialists, have a meaningful responsibility to their communities, which is why we have focused on population health and prevention measures for all MIPS eligible clinicians. Individuals' health relates directly to population and community health, which is an important consideration for quality measurement generally and MIPS specifically. It is important to note that we are no longer requiring two of the three population health measures and are only requiring the ACR measure for groups of more than 15 instead of our proposed approach of groups of 10 or more, assuming the case minimum of 200 cases has been met, as discussed in section II.E.6. of this final rule with comment period. If the case minimum of 200 cases has not been met, we will not score this measure. Thus, the MIPS eligible clinician will not receive a zero for this measure, but rather this measure will not apply to the MIPS eligible clinician's quality performance category score. We believe the ACR measure for groups of more than 15 is appropriate and will provide meaningful measurement.

    Comment: Another commenter opposed using the same attribution method that was originally used for ACOs and is currently used in the VM Program for CMS' proposal to score MIPS eligible clinicians on two or three (depending on practice size) additional “global” or “population-based” quality measures gathered from administrative claims data. The commenter believed these measures potentially hold MIPS eligible clinicians, especially specialists such as ophthalmologists, responsible for care they did not provide. The measures (the acute and chronic care composites and the ACR measure) focus on the delivery of primary care, which does not apply to ophthalmology or a variety of other specialties. The commenter therefore stated that specialists should be exempt from these additional measures and evaluated only on the six measures they choose to report.

    Response: As noted above, the ACR and ACSC measures have been used in both the Shared Savings Program and the VM Program for several years. The ACR measure involves estimating separate risk adjustment models for seven different cohorts of medical professionals (general medicine, surgery/gynecology, cardiorespiratory, cardiovascular, neurology, oncology, and psychiatry) because conditions typically cared for by the same team of clinicians are likely to reflect similar levels of readmission risk. The measure reports a single composite risk-standardized rate derived from the volume-weighted results of hierarchical regression models for five specialty cohorts. Each specialty cohort model uses a fixed, common set of risk-adjustment variables. We believe this measure is representative of most MIPS eligible clinicians.

    In addition, we have examined the global and population-based measures closely and have decided to not finalize two of these measures as part of the quality performance category score. Specifically, we are not finalizing use of the acute and chronic composite measures of AHRQ PQIs. We agree with commenters that additional enhancements need to be made to these measures for inclusion of risk adjustment. We will, however, calculate these measures for all MIPS eligible clinicians and provide feedback for informational purposes as part of the MIPS feedback.

    Comment: Other commenters suggested that if the three claims-based measures were instead reported by a QCDR or qualified registry and included the total patient population, regardless of payer, the MIPS eligible clinicians' patient populations would be better represented and overall scores would be more accurate. The commenters also believed this approach would reduce the administrative burden on CMS for calculating these metrics and attributing beneficiaries. The commenters noted that QCDRs and qualified registries serve a critical role for MIPS eligible clinicians by providing timely feedback on their rates and how those rates compare to others using the same QCDR or qualified registry, and by providing benchmarking data throughout the year that helps clinicians judge how they are performing relative to other organizations within the registry. Because these measures are calculated by CMS and represent up to a third of the quality performance category score, which in turn amounts to 50 percent of the MIPS final score, the commenters believed that QCDRs and qualified registries would have limited ability to give MIPS eligible clinicians insight into their performance or to provide meaningful feedback and benchmarking on how they are performing in the overall quality performance category.

    Response: We appreciate the suggestion, but at the outset of the MIPS program we believe it is important to use CMS claims data, which we know to be valid, and to calculate these measures in the way with which providers are familiar. We would consider future refinements to these measures, including exploring how a registry or QCDR might be able to participate in the calculation of the claims-based measures.

    Comment: Some commenters supported the inclusion of ACR measure rates in the proposed global and population health measurement, and the use of telehealth to achieve goals.

    Response: We thank the commenters for their support. Regarding the commenters' reference to telehealth, we note that telehealth can help to support better health and care at the patient and population levels. As indicated in the Federal Health IT Strategic Plan 2015-2020 (Strategic Plan), which can be found at http://www.hhs.gov/about/news/2015/09/21/final-federal-health-it-strategic-plan-2015-2020-released.html#, telehealth can further the goals of transforming health care delivery and community health; enhancing the nation's health IT infrastructure; and advancing person-centered and self-managed health.

    Comment: Other commenters stated that population-based measures had low statistical reliability for practice groups smaller than hospitals. The commenters requested that specialists and MIPS eligible clinicians in small practices be exempt from reporting population-based measures. Another commenter stated that attributing population-based measure outcomes to specific MIPS eligible clinicians is inappropriate. Further, the commenter stated MIPS eligible clinicians should only be scored on measures they choose within the quality performance category. A few commenters requested that population-based measures be removed from quality reporting, because these measures were developed for use in the hospital setting and would be unreliable when applied at the individual MIPS eligible clinician level. Another commenter stated that global and population-based measures (PQIs specifically) should not be used until they were appropriately risk adjusted for patient complexity and sociodemographic status.

    Response: We have examined the global and population-based measures closely and have decided not to finalize the acute and chronic composite measures of the AHRQ PQIs. Therefore, we are no longer requiring two of the three population health measures and are only requiring the ACR measure for groups of more than 15, instead of our proposed approach of groups of 10 or more, assuming the case minimum of 200 cases has been met, as discussed in section II.E.6. of this final rule with comment period. If the case minimum of 200 cases has not been met, we will not score this measure. Thus, the MIPS eligible clinician will not receive a zero for this measure; rather, this measure will not apply to the MIPS eligible clinician's quality performance category score. We believe the ACR measure for groups of more than 15 is appropriate and will provide meaningful measurement. Therefore, we respectfully disagree with the commenter's statement that MIPS eligible clinicians should only be scored on measures they choose within the quality performance category.

    Comment: Some commenters did not want CMS to use global and population-based measures for accountability. The commenters remarked that CMS has not provided enough evidence that these measures have any impact on quality. The commenters found global and population-based measures confusing and frustrating because MIPS eligible clinicians have no control over appropriate measures for accountability.

    Response: The purpose of the global and population-based measures is to encourage systemic health care improvements for the population being served by MIPS eligible clinicians. We note further that we have found the PQI measures to be reliable in the VM Program with a case count of at least 20. As we noted in our proposal, we intend to incorporate clinical risk adjustment for the PQI measures as soon as feasible.

    Comment: Other commenters supported the use of global and population-based measures, and supported CMS's inclusion of the acute and chronic composite measures and the ACR measure. A few commenters supported the proposal to use the population-based measures (the acute and chronic composite measures of the AHRQ PQIs and the ACR measure) with a minimum case size of 20 and urged CMS to add clinical risk adjustment as soon as feasible.

    Response: We thank the commenters for their support.

    Comment: A few commenters requested that the denominator for the quality performance category be adjusted as appropriate to reflect the inapplicability of the global and population-based measures to certain MIPS eligible clinicians' practices (one of the commenters specified that these measures are inappropriate for hospitalists). Another commenter requested that population-based measures be removed from quality reporting, because these measures were developed for use in the hospital setting and would be unreliable when applied at the individual MIPS eligible clinician level. Other commenters stated that global and population-based measures (PQIs specifically) should not be used until they were appropriately risk adjusted for patient complexity and sociodemographic status.

    Response: We believe these measures are important for all MIPS eligible clinicians, because their purpose is to encourage systemic health care improvements for the population being served by MIPS eligible clinicians. We believe that hospitalists are fully capable of supporting that objective. Additionally, we are using the same two-step attribution methodology that we have adopted in the VM Program, and that methodology focuses on the delivery of primary care services both by MIPS eligible clinicians who work in primary care and by specialists.

    Comment: Some commenters expressed support for including more global, population-based measures that are not specialty-specific or limited to addressing specific conditions in the program, but noted that the level of accountability for population-based measures is best at the health system and community level—where the numbers are large enough—rather than at the MIPS eligible clinician level.

    Response: We thank the commenters for the feedback. We will take the suggestions into consideration in future rulemaking.

    Comment: Another commenter believed that the population-based measures included in the proposal were appropriate for population measurement, but could go further with respect to measuring outcomes. One commenter outlined scenarios in which readmission is necessary to prevent graft rejection in transplant patients and urged CMS to remove the population-based measures, which indirectly include hospital readmissions, from consideration under the quality performance category of MIPS.

    Response: We believe the ACR measure for groups of more than 15 is appropriate and will provide meaningful measurement. Please refer to the discussion above regarding the ACR measure. In addition, we have examined the global and population-based measures closely and have decided to not finalize the acute and chronic composite measures of AHRQ PQIs.

    Comment: Several commenters recommended that CMS not require the submission of the administrative claims-based population-based measures and stated that these measures tend to have low reliability at both the individual MIPS eligible clinician and group levels. The commenters recommended that CMS make the measures optional in the improvement activities performance category or exempt small practices from the measures.

    Response: We believe that claims-based measures are sufficiently reliable for value-based purchasing programs, including MIPS. We note that the quality measures and improvement activities are not interchangeable. We will consider other measures that could potentially replace claims-based measures in the future. We note that the administrative claims-based population-based measures are calculated based on Part B claims and are not separately submitted by MIPS eligible clinicians, so they do not have an administrative burden associated with them.

    Comment: Other commenters expressed concern that the proposal included administrative claims-based population-based measures that were previously part of the VM Program because these measures are specified for the inpatient and outpatient hospital setting and are less reliable when applied to individual MIPS eligible clinicians and groups. The commenters requested CMS decrease the threshold levels for quality reporting measures, expand exemptions, and develop payment modifier measures that have a higher reliability at the MIPS eligible clinician level. Another commenter had concerns about taking measures from other organizational settings (for example, hospitals) for MIPS as the underlying theory and concepts, technical definitions, and parameters of use might be different in different contexts.

    Response: We would like to explain that some measures are geared toward facilities and some are attributable to individuals. Please refer to Table A of the Appendix in this final rule with comment period for the applicable measures. We have worked to adopt only individual MIPS eligible clinician or group-based measures in the MIPS program.

    Comment: Another commenter recommended aligning measures for hospitals and hospitalists and limiting those measures to the quality performance category. The commenter further recommended maintaining the voluntary application of hospital measures (specifically those that could reflect the influence of hospitalists) to MIPS eligible clinicians. Some commenters encouraged CMS to align quality measures with current hospital measures because hospital staff require time and effort to maintain and report MIPS and APM data due to small staffing levels. The commenters stated aligning hospital and MIPS eligible clinician measures would reduce potential for reporting error and allow them to pursue common goals to improve quality of care delivery. Another commenter recommended that hospital, ACO, and pay for performance data be used to measure MIPS performance.

    Response: We appreciate the commenter's feedback and will consider it in future years of the program.

    After consideration of the comments regarding our proposal on global and population-based measures, we are not finalizing all of these measures as part of the quality performance category score. Specifically, we are not finalizing our proposal to use the acute and chronic composite measures of the AHRQ PQIs. We agree with commenters that additional enhancements, including the addition of risk adjustment, needed to be made to these measures prior to their inclusion in MIPS. We will, however, calculate these measures for all MIPS eligible clinicians and provide the results for informational purposes as part of the MIPS feedback.

    Lastly, we are finalizing the ACR measure from the VM Program as part of the quality measure domain for the MIPS total performance score. We are finalizing this measure with the following modifications to our proposal. We will not apply the ACR measure to solo practices or small groups (groups of 15 or fewer). We will apply the ACR measure to groups of 16 or more that meet the minimum case volume of 200 cases. A group will be scored on the ACR measure even if it does not submit any quality measures, provided it submits data for at least one other performance category; otherwise, the group will not be scored on the readmission measure. Under our transition year policies, the readmission measure alone would not produce a neutral to positive MIPS payment adjustment, because in order to achieve a neutral to positive MIPS payment adjustment, a MIPS eligible clinician or group must submit information for one of the three performance categories, as discussed in section II.E.7. of the final rule with comment period. In addition, the ACR measure in the MIPS transition year (CY 2017) will be based on the performance period (January 1, 2017, through December 31, 2017). However, for MIPS eligible clinicians who do not meet the minimum case requirement, the ACR measure is not applicable.
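
    The finalized transition-year applicability rule stated above can be summarized in a short sketch. The following Python fragment is illustrative only; the function and field names are assumptions for illustration, and actual determinations are made by CMS from claims and submission data.

```python
# Illustrative sketch of the finalized transition-year ACR applicability rule;
# names and thresholds mirror the policy described above, nothing more.

def acr_applies(group_size: int, attributed_cases: int) -> bool:
    """ACR applies only to groups of 16 or more meeting the 200-case minimum."""
    return group_size >= 16 and attributed_cases >= 200

def acr_scoring_status(group_size, attributed_cases, submitted_any_category):
    if not acr_applies(group_size, attributed_cases):
        return "not applicable"   # no score; measure excluded from the quality category
    if not submitted_any_category:
        return "not scored"       # group submitted no data for any performance category
    return "scored"               # scored under the quality performance category

print(acr_scoring_status(12, 500, True))   # not applicable (group of 15 or fewer)
print(acr_scoring_status(30, 150, True))   # not applicable (case minimum not met)
print(acr_scoring_status(30, 250, True))   # scored
```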

    c. Selection of Quality Measures for Individual MIPS Eligible Clinicians and Groups

    (1) Annual List of Quality Measures Available for MIPS Assessment

    Under section 1848(q)(2)(D)(i) of the Act, the Secretary, through notice and comment rulemaking, must establish an annual list of quality measures from which MIPS eligible clinicians may choose for purposes of assessment for a performance period. The annual list of quality measures must be published in the Federal Register no later than November 1 of the year prior to the first day of a performance period. Updates to the annual list of quality measures must be published in the Federal Register not later than November 1 of the year prior to the first day of each subsequent performance period. Updates may include the removal of quality measures, the addition of new quality measures, and the inclusion of existing quality measures that the Secretary determines have undergone substantive changes. For example, a quality measure may be considered for removal if the Secretary determines that the measure is no longer meaningful, such as measures that are topped out. A measure may be considered topped out if measure performance is so high and unvarying that meaningful distinctions and improvement in performance can no longer be made. Additionally, we are not the measure steward for most of the proposed quality measures available for inclusion in the MIPS annual list of quality measures. We rely on outside measure stewards and developers to maintain these measures. Therefore, we also proposed to give consideration to removing measures that measure stewards are no longer able to maintain.

    Under section 1848(q)(2)(D)(ii) of the Act, the Secretary must solicit a “Call for Quality Measures” each year. Specifically, the Secretary must request that eligible clinician organizations and other relevant stakeholders identify and submit quality measures to be considered for selection in the annual list of quality measures, as well as updates to the measures. Although we will accept quality measure submissions at any time, only measures submitted before June 1 of each year will be considered for inclusion in the annual list of quality measures for the performance period beginning 2 years after the measure is submitted. For example, a measure submitted prior to June 1, 2016 would be considered for the 2018 performance period. Of those quality measures submitted before June 1, we will determine which quality measures will move forward as potential measures for use in MIPS. Prior to finalizing new measures for inclusion in the MIPS program, the measures that we determine will move forward must also go through notice-and-comment rulemaking, and the new proposed measures must be submitted to a peer-reviewed journal. Finally, for quality measures that have undergone substantive changes, we proposed to identify those measures, including but not limited to measures that have had changes to their specifications, titles, or domains. Through NQF's or the measure steward's measure maintenance process, NQF-endorsed measures are sometimes updated to incorporate changes that we believe do not substantively change the intent of the measure. Examples of such changes may include updated diagnosis or procedure codes or changes to exclusions to the patient population or definitions. While we address such changes on a case-by-case basis, we generally believe these types of maintenance changes are distinct from substantive changes to measures that result in what are considered new or different measures.
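
    The submission-to-performance-period timeline described above can be sketched briefly. The following Python fragment is illustrative only; the date handling is an assumption for illustration, not a program specification.

```python
# Illustrative sketch of the Call for Measures timeline: a measure submitted
# before June 1 of a given year is considered for the annual list for the
# performance period beginning 2 years later.

from datetime import date

def earliest_performance_year(submitted: date) -> int:
    """First performance period year for which a submitted measure could be
    considered for inclusion in the annual list of quality measures."""
    cutoff = date(submitted.year, 6, 1)
    submission_cycle = submitted.year if submitted < cutoff else submitted.year + 1
    return submission_cycle + 2

print(earliest_performance_year(date(2016, 5, 15)))  # 2018 (before June 1, 2016)
print(earliest_performance_year(date(2016, 7, 10)))  # 2019 (after the June 1, 2016 cutoff)
```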

    In the transition year of MIPS, we proposed to maintain a majority of previously implemented measures in PQRS (80 FR 70885-71386) for inclusion in the annual list of quality measures. These measures could be found in Table A of the Appendix of the proposed rule: Proposed Individual Quality Measures Available for MIPS Reporting in 2017 (81 FR 28399 through 28446). Also included in the Appendix in Table B of the proposed rule (81 FR 28447) was a list of proposed quality measures that do not require data submission, some of which were previously implemented in the VM (80 FR 71273-71300), that we proposed to include in the annual list of MIPS quality measures. These measures can be calculated from administrative claims data and do not require data submission. We also proposed measures that were not previously finalized for implementation in the PQRS program. These measures and their draft specifications are listed in Table D of the Appendix in the proposed rule (81 FR 28450 through 28460). The proposed specialty-specific measure sets are listed in Table E of the Appendix in the proposed rule (81 FR 28460 through 28522). As we continue to develop measures and specialty-specific measure sets, we recognize that there are many MIPS eligible clinicians who see both Medicaid and Medicare patients, and we seek to align our measures with Medicaid measures for use in the MIPS quality performance category. We believe that aligning Medicaid and Medicare measures is in the interest of all clinicians and will help drive quality improvement for our beneficiaries. For future years, we solicited comment about the addition of a “Medicaid measure set” based on the Medicaid Adult Core Set (https://www.medicaid.gov/medicaid-chip-program-information/by-topics/quality-of-care/adult-health-care-quality-measures.html). We also sought to include measures that were part of the seven core measure sets that were developed by the Core Quality Measures Collaborative (CQMC). The CQMC is a collaborative of multiple stakeholders that is convened by America's Health Insurance Plans (AHIP) and co-led with CMS. The purpose of the collaborative is to align measures and develop consensus on core measure sets across public and private payers. Measures we proposed for removal can be found in Table F of the Appendix in the proposed rule (81 FR 28522 through 28531) and measures that will have substantive changes for the 2017 performance period can be found in Table G of the Appendix in the proposed rule (81 FR 28531 through 28569). In future years, updates to the annual list of quality measures available for MIPS assessment will occur through rulemaking. We requested comment on these proposals. In particular, we solicited comment on whether there are any measures that commenters believe should be classified in a different NQS domain than what was proposed or that should be classified as a different measure type (for example, process vs. outcome) than what was proposed.

    The following is a summary of the comments we received on our proposals regarding the Annual List of Quality Measures Available for MIPS Assessment.

    Comment: One commenter wanted to know via what mechanism stakeholders will be made aware of the public comment period and final measure publications associated with quality measure changes under MIPS (for example, the PFS rule) in advance of the proposed annual update, and if CMS plans to do measure updates specific to MIPS. Another commenter requested clarity on when the measures and measure sets will be released.

    Response: The final measure sets can be found in the Appendix of this final rule with comment period. We intend to make updates to the list of quality measures annually through future notice and comment rulemaking as necessary. At this time, we cannot provide more specificity on our rulemaking schedule, but intend to announce availability of the proposed and final measure sets through stakeholder outreach, listservs, online postings on qualitypaymentprogram.cms.gov, and other communication channels that we use to disseminate information to our stakeholders.

    Comment: One commenter asked that all measures be published in a sortable electronic format, such as MS Excel or a comma-delimited format compatible with Excel.

    Response: We intend to post the measures and their specifications on the Quality Payment Program Web site (qualitypaymentprogram.cms.gov). We are striving to design the Web site with user needs in mind so that users will have easy access to the information that they need.

    Comment: One commenter requested clarification on the methodology for publishing, reviewing, benchmarking, and giving feedback on measures.

    Response: As discussed in section II.E.5.c. of this final rule with comment period, we select measures through a pre-rulemaking process, which includes soliciting public comments, and adopt those measures through notice-and-comment rulemaking. We then collect measure data, establish performance benchmarks based on a prior period or the performance period, score MIPS eligible clinicians based on their performance relative to the benchmarks, and provide feedback to MIPS eligible clinicians on their performance. Also, as discussed further in section II.E.10. of this final rule with comment period, we intend to publicly post performance information on the Physician Compare Web site.

    Comment: One commenter requested that any proposed introduction of additional inpatient or hospital measures be published in the same place that other MIPS quality measure proposed changes are published.

    Response: We agree with the commenter and will strive to ensure that all MIPS policy changes occur together. However, other rulemaking vehicles may be necessary for the Program's implementation in the future.

    Comment: One commenter did not support the Quality Payment Program, believing quality measures should be developed on a state level by the physicians in the state.

    Response: The Quality Payment Program is required by statute. In addition, we note that the vast majority of the measures that are being finalized were developed by the physician community.

    Comment: A few commenters cautiously supported the proposal that CMS release measures by November 1 of the year in advance of the performance period, noting that ideally physicians would have more time. However, numerous commenters stated that November 1 is too late in the year for quality measures to be published in the Federal Register and implemented by January 1 of the following year, and they encouraged CMS to publish the final list of approved measures earlier to allow clinicians and vendors sufficient time to prepare for the performance period. A few commenters specifically noted the need to give EHR software vendors adequate time to update their software and establish workflows to match measures; this process takes several months, and many vendors do not update their systems with new measures until June.

    Response: We understand the commenters' concern. As described above, the process for selecting MIPS quality measures entails multiple steps that begin with an annual call for measures and culminate with the publication of the annual list of quality measures in a final rule. While we strive to release the final list of quality measures as soon as feasible, we cannot do so until we have completed all of the requisite steps. With respect to commenters' statement that software developers need adequate time to update their software to capture measures, we will work to assure that measures have been appropriately reviewed and will release measures as early as possible. In future years, CMS will release specifications for eCQMs well in advance of November 1 of the year preceding a given performance period. For example, for the 2017 performance period, we released specifications in April 2016 for all eCQMs that may be considered for implementation in MIPS. We are open to commenters' suggestions for other ways that we can streamline the measure selection process to enable us to release the annual list of quality measures and/or measure specifications sooner than November 1.

    Comment: A few commenters were concerned with CMS's plan to update quality measures on a yearly basis. The commenters recommended that measures be considered in “test/pilot” mode before they are included in CMS's quality programs and rigorously evaluated for validity and accuracy during the pilot period. Further, the commenters suggested that measures should be maintained for more than 1 year, to ensure the agency has a reasonable understanding of how clinicians have performed and improved over time, as well as to determine whether CMS's priorities have been reasonably met, with respect to included quality measures.

    Response: Measures that are NQF-endorsed must be tested for reliability and validity. For measures that are not NQF-endorsed, we consider whether and to what extent the measures have been tested for reliability and validity. We do not take the decision to remove a measure lightly, and we agree with the commenters that we should take into consideration how clinicians have performed and improved over time, among other factors, when deciding whether to remove a quality measure from the program.

    Comment: Several commenters recommended separate timelines for new measures as opposed to updated specifications and suggested that when changes to the list of MIPS quality measures are made, those changes should not be implemented until at least 18 months after they are announced and finalized. One commenter suggested that 12 months are needed for vendor implementation, and another 6 months allocated for real-world beta testing of measures to identify and resolve defects and inconsistencies in a measure update for implementation the following year. The commenter further requested a minimum of 6 months' notice prior to any reporting period for implementation of revised measures. Some commenters recommended more time, at least 6 months, to implement a new metric before being scored to allow time to work out reporting issues with vendors. Other commenters requested that specific measure definitions be published at least 120 days prior to the start of the reporting period.

    Response: We do not believe it is necessary to develop unique timelines for measures that we will consider for the program. Although we understand the commenters' point that new measures require additional consideration beyond simple changes to measure specifications, we believe we account for those considerations when developing our proposals and in consulting with the stakeholder community during the measure development process. We describe our process in detail in our Quality Measure Development Plan (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf).

    Comment: One commenter expressed discontent with measures specifications that change in mid-season. The commenter requested that the measures be accepted based on the new or the old specifications and that neither submission be scored.

    Response: We would like to note that measure specifications do not change during the performance period. Prior to the beginning of the performance period, measure specifications are shared, and only change for the next performance period or at another time indicated in rulemaking. We cannot accept multiple versions of quality measure data, so we can only accept one version of a measure's specifications during a performance period.

    Comment: One commenter requested that CMS quickly notify clinicians when measures are introduced and retired. Further, other commenters were concerned about the proposed changes in quality measures. The commenters stated that this will require more resources and time to sort through all the changes.

    Response: We agree and will make every possible effort to notify clinicians when we propose and adopt measures for MIPS, and will similarly notify clinicians as quickly as possible if and when we retire measures from the program, which is also done through rulemaking. Our intention is to keep clinicians as informed as possible about the quality criteria on which they will be measured, something we have done within the PQRS and other quality reporting programs.

    Comment: One commenter recommended that to avoid concerns regarding uneven opportunities for clinicians, registries, and health IT vendors, CMS should require all measures planned for inclusion in its quality reporting programs to include specifications such that any organization that would want to use those measures may do so.

    Response: Measure specifications will be available on the Quality Payment Program Web site (qualitypaymentprogram.cms.gov). Additionally, to provide clarity to MIPS eligible clinicians when they select their quality measures, we will also publish the numerical baseline period benchmarks prior to the performance period (or as close to the start of the performance period as possible) in the same location as the detailed measure specifications. These measure benchmarks will be published for those quality measures for which baseline period data are available. For more details on our quality performance category benchmarks, please refer to section II.E.6. of this final rule with comment period.

    Comment: One commenter recommended that CMS implement a review process when it considers measures for use at a different level than the measure's intended use (for example, the clinician level). The commenter recommended this process include, but not be limited to: Convening a technical expert panel and a public comment period, and a review of measure specifications to ensure measures are feasible and scientifically acceptable in all environments and at all intended levels of measurement.

    Response: As part of our measure selection process, stakeholders have multiple opportunities to review measure specifications and to provide input on whether or not they believe the measures are applicable to clinicians as well as feasible, scientifically acceptable, and reliable and valid at the clinician level. As we discussed in section II.E.5.c. of this final rule with comment period, the annual Call for Measures process allows eligible clinician organizations and other relevant stakeholder organizations to identify and submit quality measures for consideration. Presumably, stakeholders would not submit measures for consideration unless they believe that the measures are applicable to clinicians and can be reliably and validly measured at the individual clinician level. The NQF-convened Measure Applications Partnership (MAP) provides an additional opportunity for stakeholders to provide this input, as does the notice and comment rulemaking through which we must establish the annual list of quality measures; both give stakeholders further opportunities to review the measure specifications and comment on whether the measures are applicable to clinicians and feasible, scientifically acceptable, and reliable and valid at the clinician level. Additionally, we are required by statute to submit new measures to an applicable, specialty-appropriate peer-reviewed journal.

    Comment: Several commenters suggested providing a 3-year phase-out period for measures being proposed for removal. The commenters also suggested that CMS provide measure owners with more detailed analysis of the use of their measures so that they can work to develop the next generation of measures and/or improve performance on the measures.

    Response: We allow the public to comment on any proposals for measure removals, but we do not intend to adopt a general 3-year phase-out policy at this time. We believe the MIPS program must be flexible enough to accommodate changes in clinical practice and evidence as they occur.

    Comment: A few commenters commended and supported CMS for its proposal to remove unneeded measures and reduce administrative burden while still providing meaningful rewards for high quality care provided by MIPS eligible clinicians in small practices. Commenters recommended that CMS remove topped out measures, duplicative measures, and measures of basic standards of care. Another commenter suggested that CMS establish a mechanism for expeditiously changing quality measures that are no longer consistent with published best practices. Further, another commenter noted that patients are better served when eligible clinicians are able to dedicate their time and effort to recording data that is pertinent and specific to patient issues and care, and thus, the commenter recommended that CMS remove irrelevant quality measures and redundant quality measures in order to align MIPS eligible clinicians with CMS' goal to improve reporting efficiency.

    Response: We intend to ensure that measures are not duplicative, and we believe that the need for some measures of basic care standards remains, given the clinical gaps evidenced by performance rates. Measures must be removed through notice-and-comment rulemaking and thus cannot be removed expeditiously. Measures are reviewed in accordance with the removal criteria discussed in the proposed rule (81 FR 28193), and a determination is made to retain each measure or to propose it for removal.

    Comment: A few commenters opposed removing measures as topped out, stating that high performance on a measure should be rewarded and incentivized. Other commenters recommended that CMS consider adopting new measures addressing similar concepts to ensure that there are no gaps in measurement in distinct disease areas before removing topped out measures.

    Response: We agree that we should not automatically remove measures that are topped out without considering other factors, such as whether or not removing the measure could lead to a worsening performance gap. We consider additional factors when removing measures on the basis of being “topped out.” For instance, if the variance of performance on the measure indicates that there is no identified clinical performance gap, this also impacts the decision to remove measures on the basis of being “topped out.” We will continue to look at topped out criteria in addition to performance gaps when selecting measures to remove. We recognize that topped out measures no longer provide information that permits the meaningful comparison of clinicians.

    Comment: One commenter did not support the selection of quality measures, as the commenter believed the quality measures are surrogates for measuring true value as a clinician and lack validity.

    Response: We believe quality measurement is critical to ensuring that Medicare beneficiaries and all patients receive the best care at the right time. We note further that we are required by statute to collect quality measures information, and we believe quality measurement is an opportunity for MIPS eligible clinicians to demonstrate the quality of care that they provide to their patients.

    Comment: One commenter proposed that instead of the list of self-selected quality measures, CMS could establish a measure set that the agency could calculate on behalf of clinicians using administrative claims, QCDR data, and potentially other clinical data that clinicians report with their claims or through EHRs. These administrative claims-based measures should include some measures that apply to a broad scope of clinicians, and also some overuse measures (for example, imaging for non-specific low back pain). Further, the commenter suggested that CMS also could include measures from other settings, such as inpatient hospitals, because some clinicians, such as hospitalists, may be best measured through hospital quality measures (for example, hospital readmissions). The commenter also suggested that through this approach CMS also would have more complete information to remove topped-out measures, and to prioritize measures based on performance gaps.

    Response: We note that we proposed three administrative claims-based measures, and that we do accept information electronically and through QCDRs. We are researching the best way to attribute care to clinicians within facilities. We are also looking into the best method to identify topped-out measures and to quantify a decision to remove measures from the program. Finally, measures have been identified based on specialty.

    Comment: Numerous commenters disagreed with the elimination of measures group reporting and asked that CMS reconsider the removal of measures groups in order to reduce reporting burden. Further, commenters noted that measures groups are designed to provide an overall picture of patient care for a particular condition or set of services and provide a valuable means of reporting on quality. The commenters stated that measures groups ensure that specialties, individual physicians, and small practices have access to meaningful measures that allow physicians to focus on the procedures and conditions that represent a majority of their practices. Another commenter expressed the belief that the removal of measures groups will skew quality reporting further in favor of large group practices because the CMS Web Interface allows for reporting on a sampling of patients.

    Response: We agree that there are measures to which specialists should have access that are meaningful for their specialties, which is why we proposed replacing measures groups with specialty measure sets to ensure simplicity in reporting for specialists. We believe that the specialty measure sets are a more appropriate way for MIPS to incorporate measures relevant to specialists than measures groups. Further, we proposed the specialty measure sets in an effort to align with the CQMC.

    Comment: One commenter agreed with efforts to streamline the process of reviewing and identifying applicable quality measures, and supported the inclusion of specialty measure sets in Table E of the Appendix in this final rule with comment period.

    Response: We appreciate the support.

    Comment: One commenter encouraged CMS to move rapidly to a core set of measures by specialty or subspecialty because the commenter believes an approach using high-value measures would enable direct comparison between similar clinicians, and would provide assurance that the comparison is based on a consistent and sufficiently comprehensive set of quality indicators. The commenter believed a core measure set should include measures of outcomes, appropriate use, patient safety, efficiency, patient experience, and care coordination.

    Response: We agree that a core set of measures by specialty would be optimal when comparing similar eligible clinicians and we did incorporate the measures that were included in the core sets developed by the CQMC. CMS will continue to evaluate a core set of measures by specialty to ensure each set is diverse and indicative of CMS priorities of quality care.

    Comment: One commenter recommended the use of specialty- and subspecialty-specific core measure sets that would provide more reliable comparative information about clinician performance than the six-measure approach. The commenter believed that advancing the current state of performance measurement should be a top priority in MACRA implementation, and toward that end, the commenter supported using the improvement activities performance category to reward the development of high-value measures, in particular patient-reported outcomes.

    Response: We will consider any new measure sets in the future, and welcome commenters' and other stakeholders' feedback on what measure sets we should consider in the future for MIPS. We agree that advancing performance measurement should be a top priority for MIPS, and we thank the commenter for their support of improvement activities.

    Comment: One commenter recommended identifying quality measures that are specialty specific and germane to what is practiced. Another commenter recommended that CMS apply a standardized approach to ensure that measures included in the specialty measure sets are clinically relevant and aligned with updates occurring in the measure landscape.

    Response: We appreciate the comment and note that identification of quality measures that are germane to clinical practice is our intent. We are adopting quality measure sets that are specialty-specific and clinically relevant to that particular specialty.

    Comment: Several commenters supported the concept of measure sets but had some concerns with the construction of the proposed measure sets, stating that some of the measures included in the specialty sets are not appropriate for certain specialties or subspecialties. The commenters believed the proposed rule reflects more of a primary care practice focus. Further, the commenters were concerned that reporting requirements may not always reflect real differences in specialized practices. The commenters suggested that these issues reflect a need for all of the measure sets to be more closely vetted by clinicians from the specialty providing the service.

    Response: We worked with specialty societies to develop measure sets and will continue to work with specialty societies to further improve the existing specialty measure sets and also develop new specialty measure sets for more specialty types.

    Comment: Some commenters believed the quality measures are not relevant to certain specialties. Further, one commenter expressed concern about the proposed MIPS quality measures because the commenter believed the quality measures do not reflect the unique care provided by geriatricians for their elderly patients, but rather were developed for non-elderly patient care. The commenter believed this would unfairly disadvantage geriatricians, who care for sicker, older patients, lack the resources and technology incentives to develop new, more relevant measures, and frequently practice in settings that do not have health IT infrastructure.

    Response: We believe that the quality measures adopted under the Quality Payment Program are relevant to clinicians that offer services to Medicare beneficiaries, including elderly patients. We tried to align certain measures to specialty-specific services, and we welcome commenters' feedback on additional measures or specialties that we should consider in the future.

    Comment: A few commenters stated that not every physician and specialty fits CMS's measure molds and that there is a lack of specialty measure sets. Further, commenters suggested that CMS identify an external stakeholder entity to maintain the proposed specialty-specific measure sets.

    Response: We have identified specialty sets based on the ABMS (American Board of Medical Specialties) list. Although we realize that not all specialties or subspecialties are covered under these categories, we encourage clinicians to report the measures that are most relevant to their practices, including measures that are not within a specialty set.

    Comment: A few commenters stated that specialists with fewer options will be required to report on topped out measures which do not award full credit, resulting in a disadvantage. Another commenter was concerned that as groups choose the six quality measures on which they perform best, those popular measures will become inflated and quickly become “topped out.” Further, commenters stated that there is little value in reporting on measures already close to being “topped out,” just for the sake of reporting. One commenter suggested that CMS continue to develop more clinically relevant measures and remove those that have been topped out.

    Response: As measures become topped out, we will review each measure and make a determination to retain or remove the measure based on several factors including whether the measure is a policy priority and whether its removal could have unintended impact on quality performance. We refer the commenters to section II.E.6.a. of this final rule with comment period for additional details on our approach for identifying and scoring topped out measures.

    Comment: One commenter suggested that CMS carefully consider all of the specialties that will be engaged in the MIPS program in future years as measure requirements are expanded and develop policies that provide flexibility for those physician types who may have limited outcome measures to report. Another commenter recommended that CMS ensure the availability of high priority MIPS quality measures for specialists. The commenter requested that CMS closely track whether the number of high priority MIPS measures available to specialists approximates the number available to primary care physicians. Should the number of measures available to specialists be considerably lower, the commenter recommended that CMS expedite the creation of specialty-specific high priority measures within its measure development process to assure parity in reporting opportunity across specialties.

    Response: We are aware of the limitations in the pool of measures, and we will continue to work with stakeholders to include more measures for specialties without adequate metrics.

    Comment: One commenter stated that it is difficult to evaluate the long-term negative impact the proposed rule may have because there was no information on how CMS intends to incorporate new measures into the quality performance category. The commenter encouraged information sharing on the intended process for evaluating newly proposed measures.

    Response: As part of the PQRS Call for Measures process, we have historically outlined the criteria that we will use to evaluate measure submissions. We anticipate continuing to do so for the annual MIPS Call for Measures process as well. To the extent measures that are submitted under the annual Call for Measures process meet these criteria, we would then propose to include them in the MIPS quality measure set through notice and comment rulemaking.

    Comment: A few commenters supported continued use of PQRS measures. In addition, one commenter acknowledged and expressed appreciation for CMS's addition of a comprehensive list of measures.

    Response: We thank the commenters for their support and believe that the continued use of PQRS measures will help ease the transition into MIPS for many MIPS eligible clinicians. Further, the statute provides that PQRS measures shall be included in the final measure list unless removed.

    Comment: Some commenters requested evidence based measures that are proven to improve quality of care, improve outcomes, and/or lower the cost of care. Further, they stressed that CMS must continue to improve measures for greater clinical relevance, clinical and patient centered measures, and avoid unintended consequences. A few commenters stated that the PQRS measures have no relevance or benefit to their practice. In addition, one commenter stated that the majority of PQRS measures do not show an evidence-based rationale or justify implementation.

    Response: We believe that the measures that we have adopted fulfill the goals the commenters suggest. We further believe that any metrics that capture activities beyond an individual clinician's control reflect systemic quality improvements to which MIPS eligible clinicians contribute. We note further that most measures being implemented have gone through consensus endorsement by a third-party reviewing organization (NQF) prior to their adoption. As part of this endorsement process, the measures are evaluated for validity, reliability, feasibility, unintended consequences, and expected impact on clinician quality performance. Furthermore, MIPS eligible clinicians also have the option of working with QCDRs to submit measures that are not included in the MIPS measure set but that may be more appropriate for their practices.

    Comment: A few commenters expressed concern about the robustness of the proposed quality measures. The commenters thought that many of the measures lack demonstrated improvement in patient care, create administrative burden for the eligible clinician to track, and will not capture quality of care provided.

    Response: Most of the CMS measures are submitted by measure stewards and owners from the medical community. We continue to encourage stakeholders to submit measures for consideration during our annual call for measures. Further, we realize that measures are not the only indication of quality care. However, they are one objective way to assess quality of care patients receive. We believe this indicator will become more effective and reliable as the measure set is expanded and refined over the years.

    Comment: One commenter stated that none of the 465 options for reporting measures in the proposed rule are based on scientific method. They recommended that each of the 465 options should meet three criteria. First, it should be based on scientific method. Second, there should be a plan to review and act on the data that is reported to CMS on the measure. Third, the reporting of such quality measures should be an automated function of the electronic medical record system and not impair, slow down or distract physicians participating directly in patient care.

    Response: As stated previously, most of the proposed measures have been endorsed by the NQF. The endorsement process evaluates measures on scientific acceptability, among other criteria. Depending on the policy priority of a measure, CMS may include measures without NQF endorsement. All of our measures, regardless of endorsement status, are thoroughly reviewed, undergo rigorous analysis, are presented for public comment, and have a strong scientific and clinical basis for inclusion.

    Comment: One commenter indicated that many proposed measures have not been tested, the proposed thresholds for reliability and validity are very low, and the proposed rule does not provide specific benchmark for measures. The commenter recommended extra time to test and implement measures across programs, with an emphasis on simplicity, transparency and appropriate risk-adjustment.

    Response: Most MIPS measures are NQF-endorsed, which means they have been evaluated for feasibility, reliability, and validity; in the absence of NQF endorsement, the measures are required to have an evidence-based focus. All of our measures, regardless of endorsement status, are thoroughly reviewed, undergo rigorous analysis, are presented for public comment, and have a strong scientific and clinical basis for inclusion. In addition, as discussed in section II.E.6. of this final rule with comment period, we intend to publish measure-specific benchmarks prior to the start of the performance period for all measures for which prior year data are available.

    Comment: One commenter recommended rigorous review and updating of quality measures, including addressing how measures are related to outcomes.

    Response: CMS conducts annual reviews of all measures to ensure they continue to be clinically relevant, appropriate, and evidence based. In the event that we determine that a measure no longer meets these criteria, we may consider removing it from the MIPS quality measure set for future years through notice and comment rulemaking.

    Comment: One commenter asked CMS to offer time-limited adoption for any MIPS measures that are not fully tested and have not been through a rigorous vetting process, as this offers four benefits: MIPS eligible clinicians will have expedited access to a greater selection of measures; measure developers could have access to a larger data set for measure testing; we will gain earlier insight into appropriateness and relevance of such measures; and MIPS eligible clinicians will gain valuable experience with the measures before performance benchmarks are established.

    Response: We believe that we must ensure that all MIPS measures are clinically valid and tested prior to their use in a value-based purchasing program. All of our measures are thoroughly reviewed, undergo rigorous analysis, are presented for public comment, and have a strong scientific and clinical basis for inclusion, including testing for validity, reliability, feasibility, unintended consequences, and the expected impact on clinician quality performance.

    Comment: One commenter supported the Quality Payment Program rewarding MIPS eligible clinician performance as measured by quality metrics, but expressed concern that there are few outcomes measures, particularly regarding assessment of quality of care provided across settings and providers, linking clinical quality and efficiency to a team. The commenter recommended the Quality Payment Program develop and include quality measures that reflect performance of eligible clinicians as part of a team, perhaps through composite measure groups, which would take into account various components of quality that move toward the desired outcome. Alternatively, or in addition to such a measure, the commenter recommended that CMS work toward establishing clear associations between the clinician level measures in MIPS, facility level measures in the Hospital OQR and other provider level measures such as home health agency measures, so that all clinicians could see how one set of quality activities feeds into another, thus driving improvement across settings and providers for a given population.

    Response: We encourage the commenter to submit measures for possible inclusion under MIPS through the Call for Measures process. Further, it may be advantageous for the commenter to report through a QCDR or to report as a group. We are committed to developing outcome measures and intend to work with interested stakeholders through our Quality Measure Development Plan, which describes our approach.

    Comment: One commenter requested that the measure requirements be reduced to encourage meaningful engagement and improvement in patient care. The commenter stated that the current set of measures is not relevant to all clinicians, especially given the diversity of procedures, patient populations, and geographic locations of clinicians. The commenter also believed that the quality measures do not align with the advancing care information, cost, or improvement activities performance categories, and recommended alignment of quality and cost measures to provide the information needed to increase value.

    Response: We have worked to adopt numerous measures that apply to as many clinicians as possible, and we have specified in other sections of this final rule with comment period how clinicians with few or no measures applicable to their practice will be scored under the program. We believe that the measures we are adopting will encourage meaningful engagement and quality improvement, and we do not agree that reducing the number of required measures will make those goals easier for physicians to pursue. However, following the principle that the MIPS performance categories should be aligned to enhance the program's ability to improve care and reduce participation burden, we will consider additional ways to align the quality and cost performance category measures in the future as well as ways to further quality improvement through the advancing care information and improvement activities performance categories.

    Comment: One commenter suggested limiting the available measures to three detailed measures per medical discipline. The commenter suggested that the criteria for choosing measures should be that they are related to a public health goal and will ensure that patients with a chronic or life-threatening condition are given a high level of care.

    Response: We believe that performance should be measured on measures that are most relevant and meaningful to clinicians. To that end, we need to balance parsimony with ensuring that there are relevant and meaningful measures available to the diverse array of MIPS eligible clinicians.

    Comment: One commenter expressed concern that there is a 30-month gap between the selection of quality measures and when they are used; the commenter believed the Core Quality Measures Collaborative (CQMC) core measure sets need immediate integration into the final rule with comment period.

    Response: Measures that are to be implemented in the program must undergo notice-and-comment rulemaking, as required by statute. Nearly all of the measures that are a part of the CQMC core measure sets are being finalized for implementation.

    Comment: Several commenters stated that all measures used must be clinically relevant, harmonized, and aligned among all public and private payers and minimally burdensome to report. The commenters stated the goal of such alignment would be to reduce measure duplication and improve harmonization and, ultimately, build a national quality strategy. Commenters recommended that CMS use measure sets developed by the multi-stakeholder Core Quality Measures Collaborative, as well as ensure that specialists are well represented in the effort to align quality measures.

    Response: Specialty societies are among the stakeholders that participate in the Core Measures Collaborative, and we will continue to work with specialists to align quality measures in the future. Further, nearly all of the measures that are a part of the CQMC core measure sets are being finalized for implementation.

    Comment: One commenter supported the consideration of Pioneer ACO required quality measures for use in MIPS. Another commenter requested we allow quality reporting measures to be differentiated between primary care and specialty physicians. For instance, we could use the same quality reporting structure as the Pioneer ACO Model for MIPS, and allow flexibility in measures when considering reporting by an APM.

    Response: MIPS eligible clinicians have the opportunity to report by the CMS Web Interface if they are part of a group of at least 25 MIPS eligible clinicians. Pioneer ACOs were also required to use the CMS Web Interface to submit their quality measures. In addition, many of the quality measures that are included in the CMS Web Interface are available for other data submission methods as well. Therefore, MIPS eligible clinicians could report these same measures through other data submission methods if they so choose or report measures from one of the specialty-specific measure sets. If a MIPS eligible clinician participates in an APM, then the APM Scoring Standard for MIPS Eligible Clinicians Participating in MIPS APMs applies. As discussed further in section II.E.5.h of this final rule with comment period, the APM Scoring Standard outlines how the MIPS quality performance category will be scored for MIPS eligible clinicians who are APM participants.

    Comment: A few commenters disagreed with being rated on factors over which they have no control (for example, A1c or blood pressure). Further, other commenters asked CMS to use quality metrics that capture activities under the physician's control and that have been shown to improve quality of care, enhance access to care, and/or reduce the cost of care.

    Response: Clinicians have the option to report the measures that are most relevant to their practice and for which they have the greatest control over the outcome being reported. We further believe that clinicians have the opportunity to influence patients' actions and outcomes on their selected metrics, which reflect systemic quality improvements of which MIPS eligible clinicians are a part.

    Comment: One commenter requested that measures be adjusted for patient acuity, which also affects what clinicians are able to accomplish.

    Response: We believe that the commenter is referring to the need to risk adjust measures for patient acuity. We note that we allow for risk adjustment if the measures have risk adjusted variables and methodology included in their specifications.

    Comment: One commenter requested clear instructions from CMS as to how to choose quality measures since the concepts are extremely confusing. Another commenter sought clarification regarding the quality measures and submission of quality measures so that clinicians can submit the measures with highest performance. The commenter requested that CMS clearly define which measures are cross-cutting measures and which are outcomes measures.

    Response: We created the specialty sets to assist MIPS eligible clinicians with choosing quality measures that are most relevant to them. Other resources to help MIPS eligible clinicians choose their quality measures will also be available on the CMS Web site. In addition, we encourage MIPS eligible clinicians to reach out to their specialty societies for further assistance. We also note that the measure tables indicate, by use of a symbol, which measures are outcome measures. We are not finalizing the cross-cutting measure requirement.

    Comment: One commenter recommended adequately testing new eCQMs to confirm that they are accurate, valid, efficiently gathered, reflect the care given, and successfully transport using the Quality Reporting Document Architecture format. Additionally, the commenter stated that eCQMs should be endorsed by NQF and undergo an electronic specification testing process.

    Response: Thank you for your comments. We ensure that validity and feasibility testing are part of the eCQM development process prior to implementation. Although we strive to implement NQF-endorsed measures when available, we note that lack of NQF endorsement does not preclude us from implementing a measure that fulfills a gap in the measure set.

    Comment: A few commenters requested that only non-substantive changes to eCQM measure sets and specifications, which do not require corresponding changes in clinician workflow, be made through annual IPPS rulemaking, while substantive changes (for example, a new eCQM or a change to a current eCQM that requires a workflow change) should be published in MIPS rulemaking and not go live until 18 months after publication.

    Response: We note that section 1848(q)(2)(D)(i)(II)(cc) of the Act requires the Secretary to update the final list of quality measures from the previous year (and publish such updated list in the Federal Register) annually by adding new quality measures and determining whether or not quality measures on the final list that have gone through substantive changes should be included in the updated list. It is unclear why the commenters are suggesting that non-substantive changes to MIPS eCQM measure sets and specifications should be made through the annual IPPS rulemaking vehicle, since the IPPS proposed and final rules typically address policy changes for hospitals rather than clinicians. We would use MIPS rulemaking to address substantive changes to measures in the future.

    Comment: A few commenters supported the development of a robust de novo set of eCQMs for use by specialty MIPS eligible clinicians, designed specifically to capture eCQM data as part of EHR-enabled care delivery for use in future iterations of the CMS Quality Payment Program. One commenter believed eCQMs should be developed for specialties to measure process improvement and improved outcomes where data are not available in a standardized format and no national standard has been codified.

    Response: We encourage stakeholders to submit new electronically-specified specialty measures for consideration during the annual call for measures.

    Comment: Some commenters encouraged closer alignment between MACRA and EHR Incentive Program eCQM specifications and recommended using the same version specifications for the same performance year for MIPS and the EHR Incentive Program.

    Response: We appreciate the comments; however, we note that there is no overlap between the MIPS performance periods and the reporting period for the Medicare EHR Incentive Program for EPs. We note that a subset of the eCQMs previously finalized for use in the Medicare EHR Incentive Program for EPs are being finalized as quality measures for MIPS for the 2017 performance period.

    Comment: One commenter disagreed with the overall complexity of the quality performance category measures because the current available EHR software offerings do not easily automate the work of capturing measures.

    Response: We understand that not all quality measurement may yet be automated, and we share the concerns expressed. CMS and ONC have also received similar feedback in response to the CQM certification criteria within the ONC Health IT Certification Program.

    Based on this feedback, ONC has added a requirement to the 2015 Edition “CQM—record and export” and “CQM—import and calculate” criteria that the export and import functions must be executable by a user at any time the user chooses and without subsequent developer assistance to operate. This is one example of how ONC is incentivizing more automated quality measurement through regulatory requirements. In addition, CMS and ONC will continue to work with health IT product and service vendors, as well as the stakeholders involved in measure development, to support the identification and capture of data elements and to test and improve calculations and functionality that support clinicians and other health care providers engaged in quality reporting and quality improvement.

    Comment: One commenter wanted to know if CMS plans to continue adding and removing measures from the group of 64 e-measures, as these measures have not been modified for several years. They noted that adding new measures to this set will require much more than 2 months' notice in order for developers to implement them, especially given the 90 percent data completeness criteria placed on EHRs.

    Response: We may propose to remove measures from the e-measures group if they meet our criteria for removal from MIPS. We are lowering the data completeness criteria to 50 percent for the first MIPS performance period. As new eCQMs are developed and are ready for implementation, we will evaluate when they can be implemented in MIPS and will consider developer implementation timeframes as well.

    Comment: One commenter requested that CMS not significantly reduce the number of available eCQMs as many small practices adopted EHRs for their ability to capture and report quality data and lack sufficient resources to invest in another reporting tool.

    Response: We are revising the list of eCQMs for 2017 to reflect updated clinical standards and guidelines. A number of eCQMs have not been updated due to alignment with the EHR Incentive Program in the past. This has resulted in a number of measures no longer being clinically relevant. We believe the updated list, although smaller, is more reflective of current clinical guidelines.

    Comment: One commenter noted that CMS is proposing removal of 9 EHR measures, and that while removal may be warranted, in some cases the act of removal means that there are potential gaps for those who plan to report quality using eCQMs. The commenter therefore recommended CMS encourage measure developers to help fill these gaps.

    Response: We would encourage measure developers to continue to submit new electronically-specified measures for potential inclusion in MIPS through the Call for Measure process.

    Comment: One commenter wanted to know whether the number of measures will be expanded for electronic reporting or whether the additional measures will only be offered through the registry/QCDR reporting option.

    Response: In subsequent years, we expect more measures to be available for electronic reporting, but that will depend in part on whether electronically specified measures are submitted through the annual Call for Measures process.

    Comment: One commenter supported the creation of a computer adaptive quality measure portfolio and believed measures should be an area of significant focus in the final rule with comment period, including portability.

    Response: We thank the commenter and agree that measures are an area of significant focus in this final rule with comment period. We look forward to learning more about private sector innovations in quality measurement in the future.

    Comment: A few commenters supported the option, but not the requirement, that physicians select facility-based measures that are aligned with the physician's goals and have a direct bearing on the physician's practice. A commenter noted the challenge for clinicians and groups that function across multiple facilities and recommended hospital-level risk-adjusted outcome measurement attributable to the principal physician or group responsible for the primary diagnosis.

    Response: We thank the commenters for their support and the suggestion. We will consider proposing policies on this topic in the future.

    Comment: Some commenters supported distinguishing hospitalists and other hospital-based clinicians from community clinicians and recommended that CMS develop a methodology for the second year of MIPS that would give facility-based clinicians the choice to use their institution's performance rates as the MIPS quality score. Another commenter recommended evaluation of 20 existing measures that represent clinical areas of relevance to hospitalists and could be adapted for MIPS, and indicated that the commenter's organization is ready to work with CMS to develop facility-alignment options.

    Response: We will take this feedback into account in the future.

    Comment: One commenter stated that quality measures that apply to primary care physicians should not be the same measures applied to consulted physicians.

    Response: We note that there is a wide variety of measures, which differ between those applicable to primary care physicians and those applicable to other physicians, and that all participants may select and report the measures that are most relevant to them.

    Comment: Several commenters requested that CMS accept Government Performance and Results Act (GPRA) measures that Tribes and Urban Indian health organizations are already required to report as quality measures to cut down on the reporting burden.

    Response: There are many GPRA measures that are similar to measures that already exist within the program. In addition, some GPRA measures are similar to measures that are part of a CQMC core measure set. We strive to lessen duplication of measures and to align with measures used by private payers to the extent practicable. If there are measures reportable within GPRA that are not duplicative of measures within MIPS, we recommend the commenters work with measure owners to submit these measures during our annual Call for Measures.

    Comment: One commenter recommended CMS provide options for specialties without a sufficient number of applicable measures such as: determining which quality measures are applicable to each MIPS eligible clinician and only holding them accountable for those measures; addressing measure validity concerns with non-MAP, non-NQF endorsed measures; establishing “safe harbors” for innovative approaches to quality measurement and improvement by allowing entities to register “test measures” which clinicians would not be scored on but would count as a subset of the 6 quality measures with a participation credit; and allowing QCDRs flexibility to develop and maintain measures outside the CMS selection process.

    Response: We have intentionally not mandated that MIPS eligible clinicians report on a specific set of measures because clinicians have varying needs and specific areas of care. MIPS eligible clinicians should report the measures applicable to the services they provide. All measures, including those that are NQF endorsed, go through notice-and-comment rulemaking. With regard to non-MAP and non-NQF endorsed measures, we note that these measures were reviewed by the CQMC, an independent workgroup that includes subject matter experts in the field. Further, we note that over 90 percent of the measures have gone through the MAP.

    Comment: Another commenter suggested that CMS require that outcomes-based measures constitute at least 50 percent of all quality measures and that CMS accelerate the development and adoption of such clinical outcomes-based measures, including patient survival. Some commenters also suggested that CMS utilize measures that have already achieved the endorsement of multiple stakeholders and have been evaluated to ensure their rigor (for instance, through processes like the National Quality Forum (NQF) endorsement).

    Response: We encourage stakeholders to submit new specialty measures for consideration during the annual call for measures. We welcome specialty groups to submit measures for review to CMS that have received previous endorsement. Furthermore, we are committed to developing outcome measures and intend to work with interested stakeholders through our Quality Measurement Development Plan which describes our approach.

    Comment: One commenter stated that it is concerning that the proposed quality performance categories fail to explicitly mention health equity as a priority. A few commenters recommended stratified reporting of quality measures by race and ethnicity, especially for quality measures related to known health disparities. One commenter specifically supported stratification by the demographic data categories required for electronic health records (EHRs) certified by the Office of the National Coordinator for Health Information Technology (ONC). Stratification allows for the examination of any unintended consequences and the impact of specific quality performance measures on safety net eligible clinicians and essential community clinicians for potential beneficiary/patient-based risk adjustment. Further, commenters stated that stand-alone health equity quality measures should be developed and incentivized with bonus points as high priority measures. One commenter recommended that patient experience be kept as a priority measure eligible for a bonus point in the final rule with comment period.

    Response: We thank the commenter for this feedback on high-priority measures and the bonus points awarded for them. It is our intent that measures examine quality for all patients, and some of our measures have been risk-adjusted and stratified. We look forward to continuing to work with stakeholders to identify appropriate measures of health equity.

    Comment: Several commenters supported adding the Medicaid Adult Core Set, which is particularly important for people dually enrolled in Medicare and Medicaid who have greater needs and higher costs.

    Response: We thank the commenters for their support, and would like to note that we are working to align the Medicaid core set with MIPS in future years.

    Comment: One commenter requested that CMS engage state Medicaid leaders to maximize measure alignment across Medicare and Medicaid, and articulate the functional intersection of various measure sets and measure set development work (§§ 414.1330(a)(1) and 414.1420(c)(2) and the Appendix in this final rule with comment period). The commenter specifically encouraged alignment efforts to focus on measures where there is a clear nexus between Medicare and Medicaid populations (§§ 414.1330(a)(1) and 414.1420(c)(2) and Appendix in this final rule with comment period). With respect to specific measures, the commenter had a particular interest in MIPS measures that relate to the avoidance of long-term skilled care in the elderly and disabled. The commenter believed that this is an area of nexus between the two programs, as the majority of newly eligible elderly in nursing facilities were unknown to the Medicaid program in the timeframe immediately leading up to the long-term care stay. The commenter believed this is a high priority for state Medicaid leaders and federal partners to engage around quality measure alignment.

    Response: We intend to align quality measures among all CMS quality programs where possible, including Medicaid, and will take this comment into account in the future.

    Comment: One commenter suggested that CMS engage states to maximize measure alignment across Medicare and existing State common measure sets.

    Response: We work with regional health collaboratives and other stakeholders where possible, and we will consider how best to align with other measure sets in the future.

    Comment: A few commenters proposed that CMS align a set of quality measures with Medicare Advantage measures to enable comparison of performance among APMs, FFS, and MAOs. Other commenters supported ensuring that quality measures are aligned across reporting programs and building from the HVBP measure set when incorporating home health into quality reporting programs.

    Response: We will take these suggestions into account for future consideration.

    Comment: One commenter encouraged CMS to adopt measures in the quality performance category that align with existing initiatives focused on delivering care in a patient-centric manner. In particular, the commenter suggested that CMS make sure the quality measures align with the clinical quality improvement measures used in the Transforming Clinical Practice Initiative by the Practice Transformation Networks.

    Response: We purposely aligned the measures in the Transforming Clinical Practice Initiative with those used in CMS' quality reporting programs and value-based purchasing programs for clinicians and practices. We will continue to work on alignment across such programs as they evolve in the future.

    Comment: One commenter noted that CMS might also look to align with other measure sets that may be outside the health care sector such as with other local health assessment and community or state health improvement activities.

    Response: We work with regional health collaboratives and other stakeholders where possible, and we will consider how best to align with other measure sets in the future.

    Comment: One commenter believed that the Quality performance category should include a reasonable number of measures that truly capture variance in patient populations and that CMS should continue to review these measures on an annual basis to ensure that they are clinically relevant and address the needs of the general patient population.

    Response: Our process includes an annual review of the measures we adopt for clinical relevance, and we appreciate commenters' focus on ensuring that measures remain clinically relevant.

    Comment: One commenter did not believe current quality metrics reflect metrics that are meaningful to physicians or patients.

    Response: We respectfully disagree. Most of the current quality measures have been developed by clinician organizations that support the use of thoughtfully constructed quality metrics. We continue to welcome recommendations or submissions of new measures for consideration.

    Comment: One commenter noted that in order for small, private independent practices to demonstrate improved outcomes, the metrics system must be designed to account for their successes.

    Response: We are committed to developing outcome measures and intend to work with interested stakeholders following the approach outlined in our Quality Measurement Development Plan. While many existing outcome measures are focused on institution-level improvement (such as tracking hospital readmissions), we believe there is an opportunity to develop clinician practice outcome measures that are designed to reflect the quality of large group, small group, and individual practice types. We welcome submissions of new outcome measures for consideration.

    Comment: A few commenters suggested that CMS collect SES data for race, ethnicity, preferred language, sexual orientation, gender identity, disability status and social, psychological and behavioral health status, to stratify quality measures and aid in eliminating disparities. One commenter noted that use of 2014 and 2015 edition CEHRT would reduce burden on clinicians to collect this data.

    Response: The CMS Office of Minority Health (OMH) works to eliminate health disparities and improve the health of all minority populations, including racial and ethnic minorities, people with disabilities, members of the lesbian, gay, bisexual, and transgender (LGBT) community, and rural populations. In September 2015, CMS OMH released the Equity Plan for Improving Quality in Medicare (CMS Equity Plan), which provides an action-oriented, results-driven approach for advancing health equity by improving the quality of care provided to minority and other underserved Medicare beneficiaries.

    The CMS Equity Plan is based on a core set of quality improvement priorities that target the individual, interpersonal, organizational, community, and policy levels of the United States health system in order to achieve equity in Medicare quality. It includes six priorities that were developed with significant input and feedback from national and regional stakeholders and reflect our guiding framework of understanding and awareness, solutions, and actions. They provide an integrated approach to build health equity into existing and new efforts by CMS and stakeholders.

    Priority 1 of the CMS Equity Plan focuses on expanding the collection, reporting, and analysis of standardized demographic and language data across health care systems. Though research has identified evidence-based guidelines and practices for improving the collection of data on race, ethnicity, language, and disability status in health care settings, these guidelines are often not readily available to health care providers and staff. Preliminary research has been conducted to determine best practices for collecting sexual orientation and gender identity information in some populations, but currently there are no evidence-based guidelines to standardize this collection.

    We will facilitate quality improvement efforts by disseminating best practices for the collection, reporting, and analysis of standardized data on race, ethnicity, language, sexual orientation, gender identity, and disability status so that stakeholders are able to identify and address the specific needs of their target audience(s) and monitor health disparities.

    Comment: One commenter stated that quality measure performance varies among populations depending on practice location because outcomes differ. Those differences in outcomes are driven by nutrition, access to reliable transportation, drug addiction, safe living space, and other factors, which makes comparison between practices difficult.

    Response: We understand the commenter's concern that any single measure cannot capture the unique circumstances of a clinician's community including some of the sociodemographic factors mentioned. Our aim, however, is to drive quality improvement in all communities and we believe thoughtfully constructed measures can help all clinician practice types improve. Further, we will continue to investigate methods to ensure all clinicians are treated as fairly as possible within the program and monitor for potential unintended consequences such as penalties for factors outside the control of clinicians.

    Comment: A few commenters suggested that CMS commit to measures for a set amount of time (for instance, 2-3 years) before making substantial changes. One commenter suggested that CMS adopt a broader policy of maintaining measures in MIPS for a minimum number of years (for example, at least 5 years) to limit scenarios where CMS does not have historical data on the same exact measure to set a benchmark or otherwise evaluate performance.

    Response: We understand the commenter's concern. However, we do not believe it appropriate to commit to maintaining the same measures in MIPS for a substantial period of time, because we are concerned about the possibility that the measures themselves or the underlying medical science may change. We believe MIPS must remain agile enough to ensure that the measures selected for the program reflect the best available science, and that may require dropping or changing measures so that they reflect the latest best practices. For example, when a gap in clinical care no longer exists, reporting the measure offers no benefit to the patient or clinician.

    Comment: One commenter encouraged CMS to indicate which measures would be on the quality measure list for more than 1 year to allow concentration of improvement efforts over a two to three-year period. The commenter indicated that uncertainty on which measures may be included on the list each year could negatively impact improvement programs in rural areas that have fewer patients and would require a longer time to determine if interventions are successful. Another commenter requested that CMS limit additions and modifications to quality measures, especially as MIPS eligible clinicians become accustomed to reporting, to allow eligible clinicians sufficient time to meet quality metrics.

    Response: We note that CMS conducts annual reviews of all measures to ensure they are relevant, appropriate, and evidence based; therefore, the list of measures may be updated each year. We will make every effort to ensure that the measures we adopt for the MIPS program reflect the latest medical science, and we will also work to ensure that all physicians and MIPS eligible clinicians are fully aware of the measures that we have adopted.

    Comment: A few other commenters recommended testing and comment periods before new measures are added, to assess for potential unintended effects associated with healthcare disparities, including a one-year transparency (report-only) period before measures are phased into incentives and a requirement for NQF endorsement.

    Response: All of the measures selected for MIPS include routine maintenance and evaluation to assess performance and identify any unintended consequences. We have extensive measurement experience (such as in the PQRS) and do not believe we need to delay measure implementation to assess for unintended consequences. We further note that the NQF endorsement process is separate and apart from the MIPS measure selection process. We refer the commenter to NQF for their recommendations on enhancements to the endorsement process.

    Comment: One commenter was concerned about annual changes in the performance measurement categories and the ability to respond to the changes in an appropriate timeframe. The commenter proposed that a minimum of 9 months, and ideally 12 months, be given to review changes to the performance categories each year.

    Response: We understand the commenter's concern, but we do not believe this timeline to be operationally feasible given the program's statutory deadlines. We note that stakeholders have the ability to begin reviewing potential changes to the quality performance category, and to provide comment on those changes, with the publication of the proposed rule each year.

    Comment: One commenter discussed how quality measures encourage shared decision making and patient-centered care. The commenter requested that CMS address both overtreatment and undertreatment of patients through specific quality measures in specific instances, such as blood sugar and blood pressure control.

    Response: We are looking at measures for appropriate use and are working with numerous stakeholders to identify more appropriate use measures.

    Comment: One commenter encouraged CMS to align the quality measures of MIPS with the Uniform Data System so that FQHCs will be able to submit one set of quality data one time to both the Uniform Data System and CMS.

    Response: We thank the commenter for this suggestion.

    Comment: One commenter was concerned that clinicians could select “low-bar” quality measures, or measures that are not the best representation of clinicians' patient populations or the diseases they treat. The commenter requested that CMS monitor the selection of quality measures by clinicians.

    Response: We believe that MIPS eligible clinicians should have the ability to select measures that they believe are most relevant to their practice. Further, we would like to note that we conduct annual reviews of all measures to ensure they are relevant, appropriate, and evidence based.

    After consideration of the comments and after correcting and revising specific information, we are finalizing at § 414.1330(a)(1) that, for purposes of assessing performance of MIPS eligible clinicians on the quality performance category, CMS will use quality measures included in the MIPS final list of quality measures. Specifically, we are finalizing the Final Individual Quality Measures Available for MIPS Reporting in 2017 in Table A of the Appendix in this final rule with comment period. Included in Table B of the Appendix in this final rule with comment period is a final list of quality measures that do not require data submission. Newly proposed measures that we are finalizing are listed in Table D of the Appendix in this final rule with comment period. The final specialty-specific measure sets are listed in Table E of the Appendix in this final rule with comment period. Measures that we are finalizing for removal can be found in Table F of the Appendix, and measures that will have substantive changes for the 2017 performance period can be found in Table G of the Appendix in this final rule with comment period.

    (2) Call for Quality Measures

    Each year, we have historically solicited a “Call for Quality Measures” from the public to identify possible quality measures for consideration under the PQRS. Under MIPS, we proposed to continue the annual “Call for Quality Measures” as a way to engage eligible clinician organizations and other relevant stakeholders in the identification and submission of quality measures for consideration. Under section 1848(q)(2)(D)(ii) of the Act, eligible clinician organizations are professional organizations as defined by nationally recognized specialty boards of certification or equivalent certification boards. However, we do not believe there needs to be any special restriction on the type or make-up of the organizations carrying out the process of developing quality measures. Any such restriction would limit the development of quality measures and the scope and utility of the quality measures that may be considered for endorsement. We encourage the submission of potential quality measures regardless of whether they were previously published in a proposed rule or endorsed by an entity with a contract under section 1890(a) of the Act, which is currently the National Quality Forum.

    As previously noted, we encourage the submission of potential quality measures regardless of whether such measures were previously published in a proposed rule or endorsed by an entity with a contract under section 1890(a) of the Act. However, consistent with the expectations established under PQRS, we proposed to request that stakeholders apply the following considerations when submitting quality measures for possible inclusion in MIPS:

    • Measures that are not duplicative of an existing or proposed measure.

    • Measures that are beyond the measure concept phase of development and have started testing, at a minimum.

    • Measures that include a data submission method beyond claims-based data submission.

    • Measures that are outcome-based rather than clinical process measures.

    • Measures that address patient safety and adverse events.

    • Measures that identify appropriate use of diagnostics and therapeutics.

    • Measures that address the domain for care coordination.

    • Measures that address the domain for patient and caregiver experience.

    • Measures that address efficiency, cost and utilization of healthcare resources.

    • Measures that address a performance gap or measurement gap.

    We requested comment on these proposals.

    The following is a summary of the comments we received regarding our proposal for the Call for Quality Measures.

    Comment: A few commenters supported the Call for Quality Measures approach to encouraging the development of quality measures and the list of considerations for submitting quality measures to MIPS. One commenter believed the criteria should also include: measures that span the various phases of surgical care and align with the patient's clinical flow; measures based on validated clinical data; measures that can be risk-adjusted and include SDS factors, if applicable; and process measures used in conjunction with outcome measures to provide a more comprehensive picture of clinical workflow and help link to improvement activities.

    Response: We thank the commenter for their support and will consider including these additional factors when evaluating quality measures for potential inclusion in MIPS in the future. Further, we will consider additional measures covering the five phases of surgical care that the commenter specified. We have a rolling period for new measure suggestions, and we welcome commenters' nominations.

    Comment: One commenter recommended that the proposed rule quality measures emphasize patient experience, outcomes, shared decision making, care coordination, and other measures important to patients. One commenter believed the selection and development of measures should include patients, stakeholders, consumers, and advocates. The commenter believed measures should be used to give feedback to clinicians and recommended that the CAHPS for MIPS survey and clinical data registries be used to collect patient-reported data, and that performance data be collected at the individual clinician level.

    Response: We agree that the selection and development of measures should include patients, consumers, and advocates. We have included patients, consumers, and advocates in the selection and development of measures to promote an objective and balanced approach to this process.

    Comment: One commenter recommended that CMS focus on developing measures assessing physicians' communication with patients, care coordination, and efforts to fill practice gaps, because commenter believed these skills are more indicative of the care physicians provide than outcome measures.

    Response: We thank the commenter for this feedback. We have a process in place for nominating measures for inclusion in the MIPS program, including an annual call for measures and the Measures Under Consideration (MUC) list, and we welcome stakeholders' feedback into that process.

    Comment: One commenter supported the inclusion of robust quality measures. The commenter encouraged CMS to focus on including quality measures under MIPS that target shared decision making and health outcomes, including survival and quality of life. The commenter supported outcome measures but noted that, in certain circumstances where there is a well-defined link to outcomes, process measures or intermediate outcome measures may be most appropriate.

    Response: Thank you for your comment. We agree that measures that target shared decision making and health outcomes should be included in MIPS.

    Comment: One commenter stated that CMS should promote the adoption of new quality measures that fill in measure gaps, accentuate the benefits of innovation, and keep pace with evolving standards of clinical care.

    Response: Thank you for your comment. We agree, and we plan to work with stakeholders on new measure development.

    Comment: Some commenters suggested that CMS carefully consider the selection of quality measures to ensure that they meaningfully assess quality of care for patients with diverse needs, particularly those patients with one or more chronic conditions.

    Response: CMS is aware of the need for measures that address diverse needs and encourages the development of these types of measures.

    Comment: One commenter believed that more patient safety measures should be included. The commenter recommended that a culture of patient safety be encouraged across healthcare organizations; that indicators of physical and emotional harms be used to measure workforce safety; that patient engagement be included as a measure of safety, beyond patient satisfaction; and that measures to track and monitor transparency, communication and resolution programs be added to the MIPS portion of the proposed rule.

    Response: We thank the commenter and agree that patient safety should be encouraged across healthcare organizations. We note that we consider patient safety measures to be high-priority measures.

    Comment: One commenter recommended that the quality measures be redefined. The commenter believed many of the measures are reporting burdens, are pedestrian from a quality standpoint, and have little to do with physician work.

    Response: Our quality measures define a reference point for the care that is expected to be delivered. CQMs are tools that help measure and track the quality of health care services provided by MIPS eligible clinicians within our health care system. Measuring and reporting these measures helps to ensure that our health care system is delivering effective, safe, efficient, patient-centered, equitable, and timely care. MIPS eligible clinicians are accountable for the care they provide to our beneficiaries.

    Comment: One commenter requested that, when a MAV process is invoked, the number of measures that could have been reported be greater than the number of additional measures needed to satisfy the reporting requirement.

    Response: We did not propose a MAV process for the MIPS program, but we did propose, and are finalizing, a data validation process. This process will apply to claims and registry submissions to validate whether MIPS eligible clinicians have submitted all applicable measures when they submit fewer than six measures, do not submit the required outcome measure (or other high priority measure if an outcome measure is not available), or submit fewer than the full set of measures in their applicable specialty measure set.

    Comment: One commenter suggested that CMS employ a more transparent approach to measure selection for the MIPS program, including a detailed rationale for why certain measures are not selected and feedback to MIPS eligible clinicians and provider organizations that have committed resources to improving measures.

    Response: While we understand the commenter's concern, we believe we have been substantially transparent about the considerations we have taken into account when developing the proposed measure list for MIPS and have provided detailed rationale explaining the choices we have made. In the Appendix of this final rule with comment period, we have provided a list of measures proposed for removal along with the rationale. We also note that measures that appear on the MUC list are reviewed by the MAP and undergo detailed analyses, and we refer stakeholders to the MAP's report for feedback on those measures. We will continue working with stakeholders and measure developers to improve their measures.

    Comment: In an effort to increase transparency in the process, the commenter suggested that prior to the publication of the recommendations, CMS contact the measure developer to make sure CMS's conclusions are accurate and to ensure the developer does not have data to suggest otherwise.

    Response: We review measures annually with measure owners and stewards. Further, we provide feedback to measure developers on measures submitted through the Call for Measures process. Stakeholders also have the opportunity to comment on new measures that are proposed through the annual notice-and-comment process.

    Comment: A few commenters suggested that CMS develop a plan to transition from the use of process measures to outcome measures, to allow MIPS eligible clinicians to adopt the most up-to-date evidence-based standards of care and to ensure that MIPS eligible clinicians are truly achieving the goals of value-based health care. One commenter acknowledged that there is a large body of evidence showing that process measures do not improve outcomes.

    Response: We aim to have the most current measure specifications updated annually. We also agree that outcome measures are more appropriate for assessing health outcomes and for accountability. We describe our measure development process in detail in our Quality Measure Development Plan (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). We look forward to working with stakeholders to develop a wide range of outcome measures.

    Comment: One commenter expressed concern that CMS' proposal is too focused on outcome measures while commenter believes the agency should also focus on establishing meaningful process measures tied to evidence-based outcomes. Another commenter noted that both outcome measures and high quality, evidence-based process measures that address gaps and variations in care have a role in improving care, and cautioned CMS against too much emphasis on outcomes without regard to evidence-based processes that underlie care.

    Response: Although process measures will continue to play an important role in quality measurement, we believe that they should be tied to evidence-based outcomes. As noted, we have a measure development strategy that seeks to develop a wide range of outcome measures, but our plan will also provide for the development of both process and structural measures that may be needed to fill existing gaps in measurement. We encourage the submission of measures that address gaps in measurement or significant variations in care, as well as outcome measures, including patient-reported outcome measures.

    Comment: Several commenters agreed that focusing more on the outcome of a clinical intervention than on the process of care is better for patients and requested that we adopt more outcome measures. Further, the commenters stated that outcome measures would yield the most meaningful data for consumers and are true indicators of the quality of healthcare services.

    Response: We agree that outcome measures are important and will continue to emphasize the importance of outcomes measures in the future. We also agree that outcome measures are more appropriate for assessing health outcomes and for accountability. We describe our measure development process in detail in our Quality Measure Development Plan (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). We look forward to working with stakeholders to develop a wide range of outcome measures.

    Comment: One commenter requested that outcome measures represent clear care goals rather than intermediate process measures, thereby allowing clinicians the freedom to determine the best allocation of resources to improve clinical outcomes.

    Response: We have made available numerous measures, including those with intermediate outcomes. Although there are far fewer measures with intermediate outcomes, we agree that we should consider both intermediate and long-term outcome measures for assessing overall health outcomes and for accountability. We describe our measure development process in detail in our Quality Measure Development Plan (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). We look forward to working with stakeholders to develop a wide range of outcome measures, including intermediate outcome measures.

    Comment: Another commenter noted that, within the set of quality measures that can be self-selected, 58 of the measures focus on outcomes and 192 focus on process, and that only 9 focus on efficiency. The commenter encouraged CMS to conduct additional research around efficiency measures that could be added to the overall menu of measures and, where available and clinically relevant to practice areas, MIPS eligible clinicians should be required to report on an efficiency measure. Some commenters believed that the relative imbalance of process measures over outcome measures can undermine CMS's efforts to encourage eligible clinicians to demonstrate actual improvements in a patient's health status.

    Response: We agree that there is a need for more outcome and efficiency measures and will strive to achieve a more balanced portfolio of measures in future years. As previously noted, we have a measure development strategy that seeks to develop a wide range of outcome measures, but our plan will also provide for the development of both process and structural measures that may still be needed to fill existing gaps in measurement. CMS encourages the submission of measures that address gaps in measurement and significant variations in care. Outcome measures, including patient-reported outcome measures, are a recognized gap in measurement, and we look forward to working with stakeholders to develop a wide range of such measures.

    Comment: One commenter recommended that as CMS selects measures, it should include measures that capture variance across patient populations; should consider adopting more outcome measures; and should add measures related to coordination of care/exchange of information between specialists and PCPs in all specialty categories.

    Response: We agree with the commenter on the importance of these measures and have proposed these types of measures for the program. We would encourage the commenter to submit additional measures for possible inclusion in MIPS through the Call for Measure process. We are particularly interested in developing outcome measures for chronic conditions (such as diabetes care and hypertension management) which present a measurement challenge to capture the many factors that impact the care and outcomes of patients with chronic conditions.

    Comment: A few commenters agreed that outcome measures are very important, but cautioned CMS against simply increasing the number of such measures each year. Commenters also opposed the proposal to increase the required number of patient experience measures in future years because the physician lacks control over such measures. One commenter supported the inclusion of risk adjustment and stratification in measures and suggested that CMS examine ASPE's future recommendations.

    Response: We are aware of the need for measures that are adjusted for case-mix variation through risk adjustment and stratification techniques. As noted in this final rule with comment period, the Secretary is required to take into account the relevant studies conducted and recommendations made in reports under section 2(d) of the Improving Medicare Post-Acute Transformation (IMPACT) Act of 2014. Under the IMPACT Act, ASPE has been conducting studies on the issue of risk adjustment for sociodemographic factors on quality measures and cost, as well as other strategies for including SDS evaluation in CMS programs. We will review the report when issued by ASPE and will incorporate findings as appropriate and feasible through future rulemaking. With respect to patient experience measures, we believe that measures that assess issues that are important to patients are an integral feature of patient-centered care.

    Comment: One commenter requested that CMS continue to use both process and outcome measures moving forward as a ramp-up tactic for MIPS eligible clinicians new to reporting on quality measures. Additionally, some commenters expressed particular support for measures that track appropriate use. The commenters strongly believed that, especially in advanced illness, individuals should only receive treatment that is aligned with their values and wishes, but that many times, because of a lack of advance care planning, there is overuse and overtreatment at this stage. Other commenters encouraged CMS to focus efforts on the development of underuse measures that can serve as a consumer protection by ensuring that eligible clinicians are not limiting access to needed care in order to reduce costs.

    Response: We agree with the importance of developing more measures of appropriate use and seek to have more of these measure types for a wider range of specialties, including geriatrics and palliative care.

    Comment: A few commenters suggested that CMS should focus on identifying and emphasizing measures that drive more robust outcomes. The commenters stated there are too many measures from which to choose.

    Response: We appreciate the commenter's focus on the importance of patient outcome measurement. However, we believe there remains a role for process measures that are linked to specific health outcomes. We would encourage the commenter to submit potential new measures for inclusion in MIPS through the Call for Measures process.

    Comment: A few commenters suggested that CMS use the recommendations of the National Academy of Medicine's (NAM) 2015 Vital Signs report to identify the highest priority measures for development and implementation in the MIPS.

    Response: We have reviewed the recommendations of the National Academy of Medicine report and it informed our Quality Measure Development Plan (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf) which emphasizes the need for outcome measures over process measures. We will continue to use the report as a resource to inform future measurement policy development.

    Comment: Several commenters supported the development and strengthening of patient-reported outcomes, PRO-based measures, and patient experience quality measures as a component of the proposed MACRA payment models. Further, commenters stated that patient-generated data assess issues that are important to patients and are a key element of patient-centered care, enabling shared decision-making and care planning and helping to ensure that patients receive high-quality health care services.

    Response: We agree that PROs are important. We currently have a number of PRO measures and intend to expand this portfolio. We also believe the other measure domains are important in measuring other aspects of care.

    Comment: One commenter recommended that patient-reported outcomes be given greater weight, along with continued solicitation of multi-stakeholder input on the available required measures through the NQF-convened MAP and updated patient sampling requirements over time. The commenter also recommended that all clinicians in groups of two or more report a standard patient experience measure.

    Response: We agree that patient-reported outcomes are important quality measures. We note also that patient experience measures, while not required, are considered high-priority and are incentivized through the use of bonus points. However, patient-reported measurement generally imposes a cost on clinician practices to conduct the survey, and mandatory reporting of such measures may present a burden to many clinicians, especially those in small and solo practices. In future years, we will continue to seek methods of expanding reporting of these measures without unduly penalizing practices that cannot afford the measurement costs.

    Comment: One commenter believed that it is necessary to specifically call out and prioritize patient-reported outcomes (PROs) and PRO-based measures (PROMs).

    Response: We agree. We highlighted person and caregiver-centered experience and outcome measures in the proposed rule (81 FR 28194) and continue to believe that they appropriately emphasize the importance of collecting patient-reported data.

    Comment: One commenter recommended that CMS should encourage EHR developers to incorporate PROMs, as well as development and use of PROMs.

    Response: We agree that the inclusion of PROMs in health IT systems can help support quality improvement efforts at the provider level. As PROMs begin to be electronically specified and approved for IT development, testing, and clinician use, we will work with ONC, health IT vendors, and stakeholders engaged in measure development to support offering PROMs within certified health IT systems.

    Comment: One commenter recommended expediting the adoption of patient-reported outcome measures (PROMs) for all public reporting programs as well as condition-specific outcome sets that focus on the longitudinal outcomes and quality-of-life measures that are most important to patients.

    Response: We agree with the commenter that PROMs are an important aspect of assessing care quality, and we intend to continue working with stakeholders to encourage their use. We refer readers to section II.E.10. of this final rule with comment period for final policies regarding public reporting on Physician Compare.

    Comment: One commenter stated the quality metrics have nothing to do with patient outcomes and measure process instead of results. The commenter requested the metrics be shifted to clinical outcome measures, including patient reported outcomes.

    Response: We believe patient-reported outcomes are important as well, but we respectfully disagree with the commenter's characterization of our measures.

    Comment: One commenter recommended that CMS consider measures that are validated and scientifically sound and ensure that measures are clinically relevant, given that the existing vehicles for measure inclusion have expanded to include qualified clinical data registries and specialty measure sets. The commenter also recommended that CMS consider working toward a set of core measures (similar to what was implemented through the Core Quality Measures Collaborative) that are most impactful to patient care. Further, they recommended that CMS consider the adoption of more outcome measures, specifically those using patient-reported outcomes.

    Response: We thank the commenter for this feedback and agree. Our intent is to include more outcome measures in the MIPS program as more become available over time, and we are working with measure collaboratives to include more measures and align them with other health care payers. We believe the specialty measure sets ensure that we have adopted measures of clinical relevance for specialists. We did propose adoption of the majority of measures that were part of the CQMC core measure sets into the MIPS program.

    Comment: One commenter recommended that CMS consider paring down from the list of over 250 quality measures from which a clinician may self-select for quality reporting, and instead focus on the creation of a smaller number of clinically relevant measures, particularly including additional patient outcome measures where available, and where there are separate and distinct outcomes measures. Additionally, as CMS embarks on future iterative changes to the Quality Payment Program, the commenter encouraged CMS to continue to rely on multi-stakeholder and consensus driven feedback loops, such as Core Quality Measures Collaborative, to inform additional core measure sets, where such measure sets are useful and promote the appropriate comparisons.

    Response: We appreciate the commenter's concerns and note that we intend to continue our work with the Core Quality Measures Collaborative. We did propose adoption of the majority of measures that were part of the CQMC core measure sets into the MIPS program. Further, to help clinicians successfully report, it is important that we provide as wide a range of measure options as possible that are germane to the clinical practice of as many MIPS eligible clinicians as possible.

    Comment: One commenter expressed concern related to the self-selection of quality measures. The commenter noted that they participated in the Core Quality Measures Collaborative (the “Collaborative”) to assist in the development of evidence-based measures and to help drive the health care system toward improved quality, decision making, and value-based payment and purchasing. The Collaborative recommended 58 MIPS quality measures. The commenter suggested that CMS consider making it mandatory for clinicians to report on those 58 measures when the measures are available within appropriate categories and when the measures are clinically relevant.

    Response: We have taken an approach that allows MIPS eligible clinicians to select their own measures for reporting based on the beneficiaries seen in their practices and the measures that are most relevant to their clinical practice. However, we have included the CQMC measures in the MIPS measure sets, including the specialty-specific measure sets, to encourage their adoption into clinical practice.

    Comment: A few commenters stated that CMS should ensure that ongoing quality measurement in the quality performance category encourages the appropriate use of imaging services so that Medicare patients receive accurate and timely diagnoses.

    Response: We are adopting a number of appropriate use measures that track both over- and under-use of medical services. We encourage stakeholders to submit additional measures on this topic, and will take those submissions into account in the future.

    Comment: One commenter expressed concern with the measures available to clinicians because many of the Core Quality Measures Collaborative measure sets were not included in the MIPS list and many of the MIPS measures are not NQF endorsed. Some commenters recommended that measures be approved by NQF before use in the program.

    Response: We believe including 17 Core Quality Measures Collaborative measures for the transition year is an excellent starting point to promote measurement alignment with private sector quality measurement leaders. While we encourage NQF-endorsement for measures, we do not require that all measures be endorsed by the NQF before use in the program, as requiring NQF endorsement would limit measures that currently fill performance gaps. We continue to encourage measure developers to submit their measures to NQF for endorsement.

    Comment: A few commenters supported CMS encouragement in the proposed rule of eliminating special restrictions as to the type and make-up of the organization developing quality measures. Commenters further supported the ability to submit measures regardless of whether such measures were previously published in a proposed rule or endorsed by NQF.

    Response: We would like to note that while we prefer NQF-endorsement of measures for MIPS, we do not require that new measures for inclusion in MIPS be NQF-endorsed; however, in order for a measure to be finalized for MIPS it must be published in the Federal Register.

    Comment: A few commenters supported the proposed “Call for Quality Measures.” Further, one commenter suggested that CMS use this process to focus on specialty measures.

    Response: We note that although we also conducted an annual Call for Measures under PQRS, section 1848(q)(2)(D)(ii) of the Act requires us to conduct a Call for Quality Measures for MIPS annually.

    Comment: One commenter supported allowing new quality measures to be submitted by specialty societies with supporting data from QCDRs.

    Response: We encourage specialty societies to continue to submit new measures for potential inclusion in the MIPS program.

    Comment: One commenter supported adoption of evidence-based measures through the “Call for Quality Measures” process. The commenter further suggested that CMS establish an interim process for adoption of subspecialty quality measure sets until quality measures can go through the “Call for Quality Measures” process so that CMS may be able to quickly assess the commenter's members on clinically meaningful measures.

    Response: We thank the commenter for the recommendation; however, we believe that the current process allows for careful review and scrutiny of the measures. We note that the Call for Quality Measures is open year-round, and that measures for inclusion in MIPS must go through notice-and-comment rulemaking.

    Comment: One commenter sought clarification regarding whether new process-based measures will continue to be accepted.

    Response: While we will consider new process-based measures, we request that they be closely tied to an outcome and that there be demonstrable variation in performance.

    Comment: One commenter supported the flexibility CMS provided in the proposed rule for health care providers to select measures that make sense within their practice, as well as opening up the process for the annual submission of new measures, which will allow MIPS to evolve with the nation's dynamic health care system.

    Response: Thank you for the support.

    After consideration of the comments we are finalizing our proposal to continue the annual “Call for Quality Measures” under MIPS. Specifically, eligible clinician organizations and other relevant stakeholders may submit potential quality measures regardless of whether such measures were previously published in a proposed rule or endorsed by an entity with a contract under section 1890(a) of the Act. We do encourage measure developers and stakeholders to submit measures for NQF-endorsement as this provides a scientifically rigorous review of measures by a multi-stakeholder group of experts. Furthermore, we are finalizing that stakeholders shall apply the following considerations when submitting quality measures for possible inclusion in MIPS:

    • Measures that are not duplicative of an existing or proposed measure.

    • Measures that are beyond the measure concept phase of development and have started testing, at a minimum.

    • Measures that include a data submission method beyond claims-based data submission.

    • Measures that are outcome-based rather than clinical process measures.

    • Measures that address patient safety and adverse events.

    • Measures that identify appropriate use of diagnosis and therapeutics.

    • Measures that address the domain for care coordination.

    • Measures that address the domain for patient and caregiver experience.

    • Measures that address efficiency, cost and utilization of healthcare resources.

    • Measures that address a performance gap.

    (3) Requirements

    Section 1848(q)(2)(D)(iii) of the Act provides that, in selecting quality measures for inclusion in the annual final list of quality measures, the Secretary must provide that, to the extent practicable, all quality domains (as defined in section 1848(s)(1)(B) of the Act) are addressed by such measures and must ensure that the measures are selected consistent with the process for selection of measures under section 1848(k), (m), and (p)(2) of the Act.

    Section 1848(s)(1)(B) of the Act defines “quality domains” as at least the following domains: clinical care, safety, care coordination, patient and caregiver experience, and population health and prevention. We believe the five domains applicable to the quality measures under MIPS are included in the NQS's six priorities as follows:

    Patient Safety. These are measures that reflect the safe delivery of clinical services in all health care settings. These measures may address a structure or process that is designed to reduce risk in the delivery of health care or measure the occurrence of an untoward outcome such as adverse events and complications of procedures or other interventions. We believe this NQS priority corresponds to the domain of safety.

    Person and Caregiver-Centered Experience and Outcomes. These are measures that reflect the potential to improve patient-centered care and the quality of care delivered to patients. They emphasize the importance of collecting patient-reported data and the ability to impact care at the individual patient level, as well as the population level. These are measures of organizational structures or processes that foster both the inclusion of persons and family members as active members of the health care team and collaborative partnerships with health care providers and provider organizations or can be measures of patient-reported experiences and outcomes that reflect greater involvement of patients and families in decision making, self-care, activation, and understanding of their health condition and its effective management. We believe this NQS priority corresponds to the domain of patient and caregiver experience.

    Communication and Care Coordination. These are measures that demonstrate appropriate and timely sharing of information and coordination of clinical and preventive services among health professionals in the care team and with patients, caregivers, and families to improve appropriate and timely patient and care team communication. They may also be measures that reflect outcomes of successful coordination of care. We believe this NQS priority corresponds to the domain of care coordination.

    Effective Clinical Care. These are measures that reflect clinical care processes closely linked to outcomes based on evidence and practice guidelines or measures of patient-centered outcomes of disease states. We believe this NQS priority corresponds to the domain of clinical care.

    Community/Population Health. These are measures that reflect the use of clinical and preventive services and achieve improvements in the health of the population served. They may be measures of processes focused on primary prevention of disease or general screening for early detection of disease unrelated to a current or prior condition. We believe this NQS priority corresponds to the domain of population health and prevention.

    Efficiency and Cost Reduction. These are measures that reflect efforts to lower costs and to significantly improve outcomes and reduce errors. These are measures of cost, utilization of healthcare resources and appropriate use of health care resources or inefficiencies in health care delivery.

    Section 1848(q)(2)(D)(viii) of the Act provides that the pre-rulemaking process under section 1890A of the Act is not required to apply to the selection of MIPS quality measures. Although not required to go through the pre-rulemaking process, we have found the NQF-convened Measure Application Partnership's (MAP) input valuable. We proposed that we may consider the MAP's recommendations as part of the comprehensive assessment of each measure considered for inclusion under MIPS. Elements we proposed to consider, in addition to those listed in the “Call for Quality Measures” section of this final rule with comment period, include a measure's fit within MIPS, whether a measure fills clinical gaps, changes or updates to performance guidelines, and other program needs. Further, we will continue to explore how global and population-based measures can be expanded and plan to add additional population-based measures through future rulemaking. We requested comment on these proposals.

    The following is a summary of the comments we received regarding our proposal on requirements for selecting quality measures.

    Comment: A few commenters recommended that CMS continue to use the Measure Application Partnership (MAP) pre-rulemaking process in determining the final list of quality measures each year. One commenter supported elimination of the requirement for recommendation by the MAP for inclusion of MIPS quality measures and believed this could potentially speed the process for implementing measures into MIPS.

    Response: Prior to proposing new quality measures for implementation into MIPS for the 2017 performance period, we did consult the MAP for feedback. To view the MAP's recommendations on these measures, please refer to the report entitled, “MAP 2016 Considerations for Implementing Measures in Federal Programs: Clinicians.” (http://www.qualityforum.org/Publications/2016/03/MAP_2016_Considerations_for_Implementing_Measures_in_Federal_Programs__Clinicians.aspx). We intend to continue to consult the MAP for feedback on proposed quality measures, but we retain the authority to propose measures that have not been supported by the MAP.

    Comment: Some commenters believed quality measures in MIPS should go through a multi-stakeholder evaluation process and that CMS should encourage the use of quality measures endorsed by the NQF.

    Response: Most measures are NQF-endorsed or have gone through the pre-rulemaking process, but we retain the authority to adopt measures that are not so endorsed. All measures have gone through the rulemaking and public comment process.

    Comment: One commenter had concerns with the performance measures currently used in PQRS, and therefore, recommended that any measures CMS proposes to use outside of the core set identified by the Core Quality Measures Collaborative be endorsed by the Measure Application Partnership (MAP).

    Response: We appreciate the comment to use measures identified by the CQMC, and while we intend to consult with MAP on measures for MIPS, we note that we have the authority to implement measures they have not reviewed.

    Comment: A few commenters recommended that quality measures should prioritize patient-reported outcomes and promote goal-concordant care, specifically that quality should be evaluated using a harmonized set of patient-reported outcomes and other appropriate measures that clinicians can reliably use to understand what matters to patients and families, achieve more goal-concordant care, and improve the patient and family experience and satisfaction. Another commenter suggested that CMS's proposed Quality Payment Program approach for considering value-based performance should expressly prioritize the patient and family voice and the constellation of what matters to them as key drivers of quality measures development and use.

    Response: We note that person and caregiver-centered experience measures are considered high priority under MIPS. For this reason and the reasons cited by the commenters, we encourage the development and submission of patient-reported outcome measures to the Call for Measures.

    Comment: One commenter recommended that CMS include in the MIPS quality requirements measures of outcomes that align with an individual's stated goals and values, commonly referred to as person-centered care, believing that performance measures that promote individuals articulating their goals and desired outcomes hold the system accountable for helping people achieve their goals and preferences. The commenter suggested that CMS reference the National Committee for Quality Assurance's work on long-term services and supports measures and person-centered outcomes using a standardized format to form a basis for building person-centered metrics into MIPS and APMs.

    Response: We will take this into consideration for use in the future.

    Comment: A few commenters suggested making global and population-based measures optional. Commenters stated that reclassifying these measures as “population health measures” under the quality category does not fix the inherent problems with these measures, and suggested that CMS not include the three population health measures in the quality category.

    Response: We believe the population health measures are intended to incentivize quality improvement throughout the health care system, and we therefore believe that we have appropriately placed them under the Quality performance category. However, as discussed in section II.E.5.b. of this final rule with comment period, CMS will only finalize the all-cause readmission measure because the other population measures have not been fully tested with the new risk-adjusted methodology.

    Comment: One commenter expressed support for measures that address all six of the NQS domains. For the Patient Safety domain, the commenter especially supported measures designed to reduce risk in the delivery of health care (for example, adverse events and complications from medication use). For the Communication and Care Coordination category, the commenter pointed out that for pharmacists, ensuring interoperability and bidirectional communication in this area is extremely critical.

    Response: We encourage MIPS eligible clinicians to select and report on measures that are applicable to their practices, regardless of their assigned domain, ultimately to improve the care of their beneficiaries.

    Comment: One commenter supported CMS aligning the MIPS quality measure domain of patient and caregiver experience with the National Quality Strategy's domain of person and caregiver-centered experience and outcomes among the six required domains, believing it will improve patient-centered care.

    Response: We appreciate the support. We support measures in all domains, including measures that embrace patient-centered care and involvement.

    After consideration of the comments, we are finalizing the requirements for the selection of the annual final list of MIPS quality measures. Specifically, we will categorize measures into the six NQS domains, and we intend to submit future MIPS quality measures to the NQF-convened Measure Application Partnership (MAP) for review, as appropriate. We intend to consider the MAP's recommendations as part of the comprehensive assessment of each measure considered for inclusion under MIPS.

    (4) Peer Review

    Section 1848(q)(2)(D)(iv) of the Act requires the Secretary to submit new measures for publication in applicable specialty-appropriate, peer-reviewed journals before including such measures in the final annual list of quality measures. The submission must include the method for developing and selecting such measures, including clinical and other data supporting such measures. We believe this opportunity for peer review helps ensure that new measures published in the final rule with comment period are meaningful and comprehensive. We proposed to use the Call for Quality Measures process as an opportunity to gather the information necessary to draft the journal articles for submission from measure developers, measure owners, and measure stewards, since we do not always develop measures for the quality programs. Information from measure developers, measure owners, and measure stewards will include, but is not limited to: background, clinical evidence and data that support the intent of the measure; recommendations for the measure that may come from a study or from United States Preventive Services Task Force (USPSTF) recommendations; and how the measure would align with the CMS Quality Strategy. The Call for Quality Measures is a yearlong process; however, to align with the regulatory timelines, establishing the proposed measure set for the year generally begins in April and concludes in July. We will submit new measures for publication in applicable specialty-appropriate, peer-reviewed journals before including such measures in the final annual list of quality measures. We requested comments on this proposal. Additionally, we solicited comment on mechanisms that could be used, such as the CMS Web site, to notify the public that the requirement to submit new measures for publication in applicable specialty-appropriate, peer-reviewed journals has been met, and on the type of information that should be included in such notification.

    The following is a summary of the comments we received regarding the submission of MIPS quality measures to peer-reviewed journals.

    Comment: One commenter supported the proposal that new measures must be submitted to peer reviewed journals.

    Response: We thank the commenter for their support.

    Comment: One commenter recommended that CMS use the Call for Quality Measures process as an opportunity to gather the information necessary to draft the journal articles required for quality measures implemented under MACRA. Commenter also recommended that any information required for journal article submission should align with the information required for the submission of the measure to CMS to reduce the workload of this new requirement on measure developers.

    Response: We appreciate the support and recommendation and intend to utilize the Call for Quality Measures process to gather information necessary to draft the journal articles.

    Comment: One commenter agreed that CMS should be responsible for submitting new measures for publication in applicable specialty-appropriate, peer-reviewed journals before including such measures in the final list of measures annually. The commenter agreed that the publication requirement will help ensure measures are both meaningful and comprehensive, but requested that CMS ensure a more collaborative approach to the submission of measures to peer-reviewed journals. A few commenters requested that CMS allow measure developers the right to first submit measure sets to specialty-specific, peer-reviewed journals of their choice. One commenter was concerned about the timing and sequencing of submitting new measures: with the requirement to submit new measures for publication in applicable specialty-appropriate, peer-reviewed journals before including such measures, many journals will be reluctant to publish measures that are already in the public domain, and the July 1 measure deadline provides a narrow window for publication. Another commenter noted that most peer-reviewed medical journals contain only groundbreaking research and therefore would not be a good source of information about quality measurement and improvement; the commenter was concerned that this criterion for approving new quality measures would be a significant barrier.

    Response: We thank the commenters; however, we are required by statute to submit measures for publication in a peer-reviewed journal before including them in the final list of measures. Although we may collaborate with the measure owner to accurately capture the measure specifications, we cannot fulfill our statutory obligation by allowing the measure owner to submit the article. The statute requires the Secretary to submit new measures for publication in applicable specialty-appropriate, peer-reviewed journals before including such measures in the final annual list of quality measures. We would like to note, however, that this does not preclude a measure owner from independently submitting their measure for publication in a peer-reviewed journal.

    Comment: One commenter recommended that CMS accept measures independently published in peer reviewed journals as well as measures submitted by CMS.

    Response: We appreciate the suggestion; however, we are required by statute to submit measures for publication in a peer-reviewed journal before including them in the final list of measures for MIPS.

    Comment: One commenter sought clarity on the process for submitting new measures for publication in specialty-appropriate, peer-reviewed journals prior to including measures in the final list, and suggested an abbreviated peer review process for publication to ensure there will not be slowdowns in the process of getting measures into the MIPS quality program.

    Response: It is our intent to describe this process via subregulatory guidance that will be posted on our Web site. Further, we would like to note that we only have an obligation to submit the measure for publication. If the submission is not accepted for publication, we will still have met the statutory requirement. If the submission is accepted, which is our preference, we are not obligated to delay our rulemaking process until the date the journal chooses to publish the submission.

    Comment: One commenter believed that the proposed process requiring that HHS submit measures for publication in applicable specialty-appropriate, peer-reviewed journals was highly duplicative of the work of measure developers; would infringe on measure ownership and copyright; and would ultimately limit the availability of, and significantly delay the use of, measures in MIPS. While the commenter appreciated the exceptions to the rule for measures in QCDRs and those included in existing CMS programs, the commenter recommended that this exclusion be extended to all measures published in a peer-reviewed journal prior to their submission to CMS. The commenter believed that extending the exclusion would allow measure developers to maintain their ownership and copyright, prevent duplication, and ensure measures were not stalled in the peer review and publication process.

    Response: The statute requires the Secretary to submit new measures for publication in applicable specialty-appropriate, peer-reviewed journals before including such measures in the final annual list of quality measures. Further, we would like to note that we only have an obligation to submit the measure; we do not have to wait for the measures to be published. Even if the article is not published, we will have met the requirements under section 1848(q)(2)(D)(iv) of the Act. We believe that the summary of proposed new quality measures will help increase awareness of quality measurement in the clinician community especially for clinicians or professional organizations that are not aware of the ability to provide public comment on proposed quality measures through the rulemaking process. We will only submit new measures in accordance with applicable ownership or copyright restrictions and cite the measure developer's contribution in the submission.

    Comment: One commenter recommended that new measures be submitted to journals associated with the American Board of Medical Specialties (ABMS), related subspecialty journals, journals associated with the relevant specialty's American College, and non-ABMS-recognized clinical specialty journals that are trusted resources for specialists, to ensure a wide range of readership and distribution.

    Response: We will take these recommendations into consideration for the future.

    Comment: Some commenters supported and appreciated the clarification that CMS will be submitting new measures for publication in applicable specialty appropriate, peer-reviewed journals before including such measures in the final list of measures annually. Commenters requested that CMS ensure a more collaborative approach to the submission of measures to peer-reviewed journals, possibly through societies that routinely publish guidelines in their peer-reviewed journals.

    Response: We appreciate the support. We will continue to seek input regarding our approach to the submission of measures from measure owners and specialty societies to improve the annual new measure submission process.

    Comment: One commenter recommended that CMS collaborate with a national, multi-stakeholder organization that can provide expertise on measurement science, quality improvement, and data submission mechanisms, such as clinical registries, to develop alternative approaches to the peer review process. The commenter expressed support for a process whereby new measures are subject to external expert review and recommended that such review occur in an expedient manner, and that results be made available and maintained as measures are updated.

    Response: Although we believe there is value in having external expert review of new measures, we note that we are required by statute to submit new measures to an applicable, specialty-appropriate peer-reviewed journal.

    Comment: One commenter stated that until the USPSTF recommendation process is substantially reformed so that specialist physicians are consulted as part of its recommendation process, CMS should proceed with great caution before incorporating any future USPSTF recommendations into MIPS quality measures.

    Response: We are committed to engaging all stakeholders in our measure development and selection process. We note that the annual Call for Measures and the annual measure update provide for the participation of patients, eligible clinicians, and other clinician stakeholders, including specialists, and allow for a transparent and robust review of our quality measure development and selection process.

    Comment: One commenter recommended a quicker timeline for including quality measures after they had been published in a peer-reviewed journal; specifically, if a measure is already published in a peer-reviewed journal, the commenter recommended that the timeline for approval for MIPS be 6-12 months.

    Response: We appreciate the comments; however, new measures, even if they have been previously published, can only be included in MIPS through notice and comment rulemaking. Further, there is a statutory requirement that we publish the new measures not later than November 1 prior to the first day of the applicable performance period for a given year.

    After consideration of the comments, we are finalizing our proposal to use the Call for Quality Measures process as a forum to gather the information necessary to draft the journal articles for submission from measure developers, measure owners and measure stewards since we do not always develop measures for the quality programs. Information from measure developers, measure owners and measure stewards shall include but is not limited to: Background, clinical evidence and data that supports the intent of the measure; recommendation for the measure that may come from a study or the United States Preventive Services Task Force (USPSTF) recommendations; and how this measure would align with the CMS Quality Strategy. The submission of this information will not preclude us from conducting our own research using Medicare claims data, Medicare survey results, and other data sources that we possess. We will submit new measures for publication in applicable specialty-appropriate, peer-reviewed journals before including such measures in the final annual list of quality measures.

    (5) Measures for Inclusion

    Under section 1848(q)(2)(D)(v) of the Act, the final annual list of quality measures must include, as applicable, measures from under section 1848(k), (m), and (p)(2) of the Act, including quality measures among: (1) Measures endorsed by a consensus-based entity; (2) measures developed under section 1848(s) of the Act; and (3) measures submitted in response to the “Call for Quality Measures” required under section 1848(q)(2)(D)(ii) of the Act. Any measure selected for inclusion that is not endorsed by a consensus-based entity must have an evidence-based focus. Further, under section 1848(q)(2)(D)(ix), the process under section 1890A of the Act is considered optional.

    Section 1848(s)(1) of the Act, as added by section 102 of the MACRA, also requires the Secretary of Health and Human Services to develop a draft plan for the development of quality measures by January 1, 2016. We solicited comments from the public on the “Draft CMS Measure Development Plan” through March 1, 2016. The final CMS Measure Development Plan was finalized and posted on the CMS Web site on May 2, 2016, available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf.

    (6) Exception for QCDR Measures

    Section 1848(q)(2)(D)(vi) of the Act provides that quality measures used by a QCDR under section 1848(m)(3)(E) of the Act are not required to be established through notice-and-comment rulemaking or published in the Federal Register; be submitted for publication in applicable specialty-appropriate, peer-reviewed journals, or meet the criteria described in section 1848(q)(2)(D)(v) of the Act. The Secretary must publish the list of quality measures used by such QCDRs on the CMS Web site. We proposed to post the quality measures for use by qualified clinical data registries in the spring of 2017 for the initial performance period and no later than January 1 for future performance periods.

    Quality measures that are owned or developed by the QCDR entity and proposed by the QCDR for inclusion in MIPS but are not a part of the MIPS quality measure set are considered non-MIPS measures. If a QCDR wants to use a non-MIPS measure for reporting in the MIPS program, we proposed that these measures go through a rigorous CMS approval process during the QCDR self-nomination period. Specific details on third party intermediaries' requirements can be found in section II.E.9 of the proposed rule. The measure specifications will be reviewed, and each measure will be analyzed for its scientific rigor, technical feasibility, duplication of current MIPS measures, clinical performance gaps (as evidenced by background and literature review), and relevance to specialty practice quality improvement. Once the measures are analyzed, the QCDR will be notified of which measures are approved for implementation. Each non-MIPS measure will be assigned a unique ID that can only be used by the QCDR that proposed it. Although non-MIPS measures are not required to be NQF-endorsed, we encourage the use of NQF-endorsed measures and measures that have been in use prior to implementation in MIPS. Lastly, we note that MIPS eligible clinicians reporting via QCDR have the option of reporting MIPS measures included in Table A in the Appendix in this final rule with comment period to the extent that such measures are appropriate for the specific QCDR and have been approved by CMS. We requested comment on these proposals.

    The following is a summary of the comments we received regarding our proposals on QCDR measures.

    Comment: One commenter supported CMS's proposed exception for QCDR measures.

    Response: We appreciate the support.

    Comment: Some commenters agreed that non-MIPS measures implemented in QCDRs should be analyzed for scientific rigor, technical feasibility, duplication of current MIPS measures, clinical performance gaps (as evidenced by background and literature review), and relevance to specialty practice quality improvement.

    Response: We appreciate the support.

    Comment: One commenter stated that quality measures developed by QCDRs should not be subject to an additional CMS verification process before they are used for MIPS reporting and that an additional process is problematic for specialty areas such as oncology where there are deficiencies in the quality measure set for these types of practices. The commenter further believed the additional verification and approval processes appear to micromanage the QCDR measure development process, which could undermine the goals of QCDR reporting and create additional burden, given that mature QCDRs such as the Quality Oncology Practice Initiative have already undergone an extremely robust and evidence-based process to ensure clinical validity and reliability. The commenter further stated that additional uncertainty, restraints, and regulatory burden should not be placed on these QCDRs. The commenter did support focusing on evaluating the QCDR measure development methodology during the self-nomination process instead.

    Response: While we do not wish to add burden to QCDRs, we do need to maintain an appropriate standard for measures used in our program, especially since MIPS payment adjustments are based on the quality metrics.

    Comment: One commenter recommended that CMS publish the specific criteria that they plan to use in evaluating QCDR measures moving forward. Some commenters requested that if CMS decides to deny the use of a measure in a QCDR, that CMS provide the measure developer/steward/owner with specific information on what criteria were not met that led to a measure not being accepted for use and provide a process for immediate reconsideration when the issues have been addressed.

    Response: Criteria were already adopted under PQRS and proposed under MIPS (see 81 FR 28284) for non-MIPS measures. In the future, we may publish supplemental guidance. In addition, measures should be fully developed prior to submission, and we intend to provide necessary feedback in a timely fashion.

    Comment: A few commenters supported CMS's proposal for non-MIPS measures in QCDRs to go through a rigorous CMS approval process during the QCDR self-nomination period, and encouraged CMS to engage in a multi-stakeholder process as part of this approval process. One commenter recommended adopting an approval process for QCDR measures that would require them to be endorsed by the NQF.

    Response: We intend to take the multi-stakeholder process's views into account when adopting policies on this topic in the future. We retain the authority to adopt measures that have not been endorsed by NQF, and we do not believe it appropriate to commit to requiring endorsement.

    Comment: One commenter did not agree that CMS should support new measures developed by QCDRs.

    Response: We respectfully disagree because we believe that QCDRs offer MIPS eligible clinicians the opportunity to report on measures associated with their beneficiaries that otherwise they may not be able to report.

    Comment: A few commenters recommended that CMS encourage QCDRs to submit their measures for review by a consensus-based standards organization, like the NQF. One commenter suggested that CMS publish data for these measures to promote greater understanding of the use of QCDR measures and performance trends.

    Response: The QCDRs develop new measures and propose them for consideration into our programs. We review all proposed measures and consider them for inclusion based on policy principles described in our Quality Measure Development Plan (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). Although we do not require NQF endorsement for measure approval and acceptance, we expect all submitted measures to have had a rigorous evaluation, including an assessment of feasibility, reliability, strong evidence basis, and validity. All of our measures, regardless of NQF endorsement status, are thoroughly reviewed, undergo rigorous analysis, are presented for public comment, and have a strong scientific and clinical basis for inclusion. QCDR measures must be approved by us before they can be made available for use by MIPS eligible clinicians.

    Comment: One commenter approved of the use of QCDRs but was concerned that if QCDR measures are not part of the MIPS quality measure set and must undergo a thorough approval process by CMS, this will delay adoption of MIPS eligible measures and limit opportunities for transparency and stakeholder input to ensure measures are evidence-based and clinically rigorous. The commenter suggested that subjecting these measures to a formal endorsement process, such as National Quality Forum (NQF) endorsement, could help ensure that QCDR measures enjoy broad, consensus-based support through a process of thorough review and public vetting.

    Response: We agree that ideally measures developed by QCDRs would be submitted to NQF for endorsement. However, we will not require NQF-endorsement and will continue to review measures submitted by QCDRs prior to their implementation in the MIPS program. We believe that QCDRs allow specialty societies and others to develop more relevant measures for specialists that can be implemented more rapidly and efficiently.

    Comment: A few commenters expressed concern with CMS's “stringent” approach to QCDR measures, as they believe it may be too burdensome. Commenters stated that QCDR measures should continue to be developed through multi-stakeholder processes by the relevant specialty societies and reviewed by CMS in the QCDR approval process, but they should not be required to undergo MAP and NQF processes that are too time consuming to allow such development to keep pace with constantly changing CMS requirements.

    Response: We would like to note that QCDR measures are not required to undergo MAP and NQF processes.

    Comment: One commenter supported flexibility with regard to the measures that are available for reporting by physicians and also supported the statutory provision that does not require that QCDR developed measures to be NQF-endorsed.

    Response: We appreciate the comment and support.

    Comment: One commenter expressed concern with the need for CMS to encourage reporting of NQF measures. The commenter noted that obtaining NQF endorsement can be costly and time consuming and is not the only way to ensure that measures are sound. The commenter expressed concern that the language will be interpreted as a requirement for NQF endorsement and encouraged CMS to reconsider the language. Another commenter opposed requiring all measures used in QCDRs to be endorsed by NQF because: requiring QCDR measures to go through NQF would go against CMS's goal of quickly iterating measures; the NQF process is cost- and resource-prohibitive for smaller specialties; such a revision would reduce the flexibility of QCDRs to offer specialty-specific reporting measures, which provide broader options that may be more meaningful to some practices than existing PQRS measures; and QCDRs provide a better picture of the overall quality of care provided, because QCDRs collect and report quality information on patients from all payers, not just Medicare patients.

    Response: We would like to note that NQF endorsement is not a requirement for QCDR or MIPS measures. However, we do encourage application for NQF endorsement because it provides a rigorous scientific and consensus based measures evaluation.

    Comment: One commenter expressed support for the use of quality measures that are used by QCDRs such as the Quality Oncology Practice Initiative (QOPI), which is designated as a QCDR and focuses specifically on measuring and assessing the quality of cancer care. However, the commenter expressed concern over the process for approval of QCDR measures, stating that CMS should not slow the continued use of existing, robust QCDR measures; decrease adoption of innovative, clinically relevant QCDR measures; or weaken the protections that exempt quality measures developed for use in a QCDR from many of the measure development processes required for other MIPS measures.

    Response: We understand the commenter's concern and will continue to review QCDR measures in a timely fashion. Further, we would like to note that the approval criteria are not changing.

    Comment: One commenter supported the CMS approach to non-MIPS measures used by QCDRs, including the caution about “check box” measures. The commenter expressed concern that the measurement of cancer care planning could become one such measure. Instead, the commenter suggested that care planning measures be developed as patient engagement/experience measures.

    Response: We thank the commenter for the recommendation and will take it under consideration for future years. We note that, consistent with clinicians submitting quality data through other reporting mechanisms, those submitting quality data through QCDRs must meet our requirements for one outcome measure, or, if one is not applicable, one high-priority measure.

    Comment: A few commenters recommended that CMS allow QCDRs to utilize measures from other QCDRs (with permission). One commenter further stated that CMS proposed that QCDR non-MIPS measures must go through a rigorous approval process and then be assigned a unique identifier that can only be used by the QCDR that proposed the measure. Commenters believe that prohibiting the sharing of non-MIPS quality measures between QCDRs would inhibit the efficient and cost-effective use and dissemination of such measures.

    Response: We allow a QCDR to use a measure with permission from the measure owner, which may be a QCDR in some instances. Further, if the QCDR would like the measure to be shared among other clinicians, they can submit the measure to be included in the Program, where it would not be limited to that specific QCDR. Any measure needs only a single submission for the measure approval process.

    Comment: One commenter recommended that CMS not require or restrict a QCDR from licensing its proprietary quality measures to other QCDRs after the QCDR-developed measures become available for MIPS reporting.

    Response: We do not restrict but in fact encourage the sharing of QCDR-developed quality measures with clinicians and also other QCDRs.

    Comment: One commenter requested that CMS clarify that the QCDR-developed measures available for 2016 PQRS reporting would automatically qualify for 2017 MIPS quality reporting.

    Response: QCDR guidelines evolve over time as we continue to learn from implementation. We generally expect that measures in a QCDR one year would be retained for the next; however, we will review measures each year to ensure they are still relevant and meet scientific standards. Further, we would like to note that QCDRs that were previously approved for PQRS will not be “grandfathered” as qualified under MIPS. Rather, the QCDR must meet the requirements described in section II.E.9.a. of this final rule with comment period.

    Comment: One commenter indicated that requiring data collection in 2017 for measures not already included in a QCDR presents a myriad of technical challenges. QCDR development and modifications require partnering with a number of developers that program code and develop software updates to facilitate reporting. Software developers often require 9-12 months to update data elements. In addition, time is required to train practice staff on how to enter new data and integrate measures into the practice workflow.

    Response: We thank the commenter for the support of the QCDR program and understand the concern about the time involved in doing this work. We believe that QCDRs that implement and support non-MIPS measures are aware of the measure specifications far enough in advance to reliably work with developers to make system changes. Since these measures are owned by the QCDR or their partners, we believe they already know the changes needed prior to the submission of the measure for inclusion in the program.

    Comment: One commenter asked CMS to modify the QCDR self-nomination process to allow measures that have been approved in prior years a period of stability by automatic measure approval for a period of at least 3 years, which would allow physicians and developers a period of assured measure inclusion.

    Response: The QCDR measures are reviewed annually to ensure they are still appropriate for use in the program. We thank the commenter for the recommendation and will consider for future years.

    Comment: One commenter suggested that CMS streamline the process for measure inclusion into MIPS beyond the accommodations that have been made for QCDRs and recommended that CMS consider the development of an “open source” QCDR that would allow small specialty organizations the opportunity to take advantage of the benefits of QCDRs for measure development, thereby shortening the process for inclusion in MIPS.

    Response: It is not our intent to expand QCDR types at this time, but we will take this suggestion into consideration for future rulemaking.

    Comment: One commenter supported the inclusion of outcome measures and other high priority measures for QCDRs, as well as the optional reporting of cross cutting measures by those clinicians who find those measures relevant to their practice. However, the commenter did not support mandating cross cutting measures requirements, especially for QCDRs since it contradicts the intent of this submission mechanism, which is to give clinicians broad flexibility over determining which measures are most meaningful for their specialized practice.

    Response: CMS believes that there are basic standards that each physician, regardless of specialty, can and should meet. Additionally, the MIPS program offers payment incentives and MIPS payment adjustments based on the value of care patients receive. Having a cross-cutting set of measures would allow for direct comparisons among participants. We would like to note, however, that as discussed in section II.E.5.b. of this final rule with comment period, we are not finalizing the cross-cutting measure requirement.

    Comment: One commenter requested that CMS compile the list of entities qualified to submit data as a QCDR, and that CMS accept the Indian Health Service (IHS) Resource and Patient Management System (RPMS) and other Tribal health information systems as a QCDR and work with IHS and Tribes to ensure health information systems are capable of meeting MIPS reporting requirements.

    Response: CMS posts a list of approved QCDRs on its Web site annually. Entities are required to self-nominate to participate in MIPS as a QCDR. Entities that meet the definition of a “QCDR” at § 414.1305 and meet the participation requirements outlined in section II.E.9 of this final rule with comment period will be approved as a QCDR.

    Comment: One commenter requested that CMS consider employing a MAV process for QCDRs or at minimum clarifying its intent for using such a process. The commenter stated that even in QCDRs certain clinicians do not have enough measures to report.

    Response: QCDRs are required to go through a rigorous approval process that requires both their MIPS and non-MIPS measures to be submitted at the time of self-nomination. Since QCDRs have the ability to have up to 30 non-MIPS measures approved for availability to MIPS eligible clinicians, we anticipate that very few MIPS eligible clinicians who utilize the QCDR mechanism would not have measures applicable to them.

    Comment: One commenter recommended that CMS not score non-MIPS QCDR measures in their first year because the commenter does not believe they will have good benchmarking data.

    Response: The non-MIPS measures approved for use within QCDRs are required to have benchmarks when possible and appropriate.

    Comment: One commenter requested that CMS consider allowing QCDRs to determine the appropriate reporting sample (number or percentage) on a measure by measure basis.

    Response: We will consider this recommendation in future rulemaking as we review the impact of such a change. However, we believe that the reporting sample must be of sufficient size to meet our reliability standards.

    Comment: One commenter supported that the proposed rule established a quality measure review process for those measures that are not NQF-endorsed or included on the final MIPS measure list to assess if the quality measures have an evidence-based focus, and are reliable and valid.

    Response: We appreciate the comment and support.

    Comment: One commenter did not support CMS's proposal to support new measures developed by QCDRs because the commenter believed quality measures should go through a rigorous evaluation and review process. The commenter believed CMS should focus on streamlining quality reporting by gradually eliminating excessive measures.

    Response: We would like to note that all QCDR measures undergo a rigorous approval process before receiving approval.

    Comment: One commenter indicated that allowing for the inclusion of non-MIPS quality measures via QCDRs will introduce more inconsistency and burden and result in data that cannot be compared across states/regions/providers, depending on their QCDR of origin.

    Response: Acceptance of non-MIPS QCDR measures is intended to support specialty groups' ability to report on measures most relevant to their practice. QCDRs operate on a large scale, many at a national level, and offer valid and reliable measure data.

    After consideration of the comments, we are finalizing at § 414.1330(a)(2) our proposal that for purposes of assessing performance of MIPS eligible clinicians on the quality performance category, CMS will use quality measures used by QCDRs. In the circumstances where a QCDR wants to use a non-MIPS measure for inclusion in the MIPS program for reporting, those measures will go through a CMS approval process during the QCDR self-nomination period. We also are finalizing our proposal to post the quality measures for use by qualified clinical data registries in the spring of 2017 for the initial performance period and no later than January 1 for future performance periods.

    (7) Exception for Existing Quality Measures

    Section 1848(q)(2)(D)(vii)(II) of the Act provides that any quality measure specified by the Secretary under section 1848(k) or (m) of the Act and any measure of quality of care established under section 1848(p)(2) of the Act for a performance or reporting period beginning before the first MIPS performance period (herein referred to collectively as “existing quality measures”) must be included in the annual list of MIPS quality measures unless removed by the Secretary. As discussed in section II.E.4 of the proposed rule, we proposed that the performance period for the 2019 MIPS adjustment would be CY 2017, that is, January 1, 2017 through December 31, 2017. Therefore, existing quality measures would consist of those that have been specified or established by the Secretary as part of the PQRS measure set or VM measure set for a performance or reporting period beginning before CY 2017.

    Section 1848(q)(2)(D)(vii)(I) of the Act provides that existing quality measures are not required to be established through notice-and-comment rulemaking or published in the Federal Register (although they remain subject to the applicable requirements for removing measures and including measures that have undergone substantive changes), nor are existing quality measures required to be submitted for publication in applicable specialty-appropriate, peer-reviewed journals.

    The following is a summary of the comments we received regarding our proposal on the Exception for Existing Quality Measures.

    Comment: Some commenters expressed preference for leveraging existing quality measures to ensure consistency of measurement.

    Response: The vast majority of measures that we are finalizing for the MIPS quality performance category are existing PQRS measures.

    Comment: One commenter suggested that CMS conduct robust assessment of previously developed quality measures to ensure that the measures improve patient care and outcomes before introducing or maintaining those measures in the MIPS Program.

    Response: We routinely review all of our existing measures through a maintenance and evaluation process that assesses their clinical impact on quality and any unintended consequences. We are committed to utilizing measures that improve patient care and outcomes.

    After consideration of comments received from stakeholders on our proposal for the exception for existing quality measures, we are finalizing our policies as proposed. While CMS has modified its performance period proposal as discussed in section II.E.4 of this final rule with comment period, this policy would not be affected since the minimum 90-day performance period would not begin any earlier than January 1, 2017.

    (8) Consultation With Relevant Eligible Clinician Organizations and Other Relevant Stakeholders

    Section 1890A of the Act, as added by section 3014(b) of the Affordable Care Act, requires that the Secretary establish a pre-rulemaking process under which certain steps occur for the selection of certain categories of quality and efficiency measures, one of which is that the entity with a contract with the Secretary under section 1890(a) of the Act (that is, the NQF) convenes multi-stakeholder groups to provide input to the Secretary on the selection of such measures. These categories are described in section 1890(b)(7)(B) of the Act and include the quality measures selected for the PQRS. In accordance with section 1890A(a)(1) of the Act, the NQF convened multi-stakeholder groups by creating the MAP. Section 1890A(a)(2) of the Act requires that the Secretary make publicly available by December 1 of each year a list of the quality and efficiency measures that the Secretary is considering under Medicare. The NQF must provide the Secretary with the MAP's input on the selection of measures by February 1 of each year. The lists of measures under consideration for selection are available at http://www.qualityforum.org/map/.

    Section 1848(q)(2)(D)(viii) of the Act provides that relevant eligible clinician organizations and other relevant stakeholders, including state and national medical societies, must be consulted in carrying out the annual list of quality measures available for MIPS assessment. Section 1848(q)(2)(D)(ii)(II) of the Act defines an eligible clinician organization as a professional organization as defined by nationally recognized specialty boards of certification or equivalent certification boards. Section 1848(q)(2)(D)(viii) of the Act further provides that the pre-rulemaking process under section 1890A of the Act is not required to apply to the selection of MIPS quality measures.

    Although MIPS quality measures are not required to go through the pre-rulemaking process under section 1890A of the Act, we have found the MAP's input valuable. The MAP process enables us to consult with relevant EP organizations and other stakeholders, including state and national medical societies, patient and consumer groups and purchasers, in finalizing the annual list of quality measures. In addition to the MAP's input this year, we also received input from the Core Quality Measure Collaborative on core quality measure sets. The Core Quality Measure Collaborative was organized by AHIP in coordination with CMS in 2014. This multi-stakeholder workgroup has developed seven condition or setting-specific core measure sets to help align reporting requirements for private and public health insurance providers. Sixteen of the newly proposed measures under MIPS were recommended by the Core Quality Measure Collaborative and many of the remaining measures in the core sets were already in the PQRS program and have been proposed for MIPS for CY 2017.

    The following is a summary of the comments we received regarding consultation with relevant eligible clinician organizations and other relevant stakeholders.

    Comment: A few commenters applauded the work that went into establishing the measures included in MIPS. The commenters suggested CMS continue to work with all stakeholders to align quality measures with those used in the private sector.

    Response: We intend to continue to work with stakeholders to further align the MIPS quality measures with those used in the private sector.

    Comment: Several commenters encouraged CMS to engage as broad an array of stakeholder organizations as possible in the measure review and selection process, noting that physicians, healthcare facility stakeholders, relevant task forces, provider groups (including nurses, physician assistants, and nurse practitioners), patients, and caregivers should be included. Further, the commenters requested CMS implement new opportunities for stakeholders to participate in the measure development process.

    Response: Part of the process for measure adoption is the public comment period, and we use the public comment period to enable all relevant stakeholders of all types, including the various stakeholders listed above, to provide feedback on measures that we have proposed for the Program.

    Comment: One commenter encouraged CMS to keep measure developers, clinicians, and stakeholders engaged in the quality measure development and selection process to ensure the implementation of clinically meaningful measures that are aligned across the MACRA Quality Payment Program performance pathways and other payer programs.

    Response: We will continue to keep measure developers, clinicians, and stakeholders engaged in the quality measure development and selection process as evidenced by the multiple opportunities to provide input to the measure development and selection process.

    Comment: A few commenters stated that CMS should work broadly with stakeholders, including patients and patient advocacy organizations, to identify and address measure gaps. Further, these stakeholders could provide insight on patient experience and satisfaction measures, as well as measures of care planning and coordination. Increasingly, patient advocacy organizations are working to develop such measures based on their own registry data. Commenters encouraged CMS to commit to acting as a resource for stakeholders that have less experience with the measure submission process, to encourage their participation in the process. The commenters also encouraged CMS to identify disease states for which commenters have articulated gaps in quality measures and to determine the feasibility of adopting measures based upon consensus-based clinical guidelines, upon which CMS could solicit comments.

    Response: We appreciate the recommendations and will engage with all stakeholders, including patient and consumer organizations. We provide a wide array of support and information about our measure development process. Our Measure Development Plan provides clear guidance for stakeholders on this process (available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). We will take these suggestions into consideration in the future.

    Comment: One commenter suggested that CMS look to and work with the International Consortium for Health Outcomes Measurement (ICHOM) to develop additional and needed outcome measures and referenced MedPAC's June 2014 report.

    Response: We will continue to collaborate with stakeholders that develop outcome measures for quality reporting.

    Comment: One commenter recommended that CMS collaborate with specialty societies, frontline clinicians, and EHR vendors in the development, testing, and implementation of measures with a focus on integrating the measurement of and reporting on performance with quality improvement and care delivery and on decreasing clinician burden.

    Response: We agree it is important to continuously enhance the integration of health IT support for quality measurement and improvement with safe, effective care delivery workflows that minimize burdens on the clinician, patient, and clinical relationship. We will take the commenter's recommendation into consideration as we develop, test, and implement new measures.

    Comment: One commenter recommended that CMS carefully review measure sets and defer to medical professional specialty society comments to ensure that measure sets are appropriately constructed. The commenter recommended that CMS obtain insight from clinicians who will be reporting these services to test the validity of the measure sets.

    Response: We will continue to work with specialty groups to improve the specialty measure sets in the future.

    Comment: Several commenters recommended that CMS use the core measure sets developed by the Core Quality Measures Collaborative because using these measure sets would ensure alignment, harmonization, and the avoidance of competing quality measures among payers.

    Response: Measures that are a part of the CQMC core measure sets have been proposed for implementation and CMS intends to continue its collaboration with the CQMC to ensure alignment and harmonization in quality measure reporting.

    Comment: One commenter recommended that CMS consider the recommendations made by the American College of Physicians (ACP) Performance Measurement Committee with regard to measure selection within MIPS.

    Response: The ACP, like all other professional societies, has the opportunity to comment and provide feedback on our measure selection, including their recommendations, through the notice and comment process.

    Comment: One commenter stated CMS has not adequately involved physicians in the measure development process.

    Response: All Technical Expert Panels (TEPs) for measures developed by CMS or a CMS contractor include a clinical expert. Additionally, the majority of measures in the program are not developed by CMS but by medical specialty societies.

    Comment: One commenter suggested that CMS account for the professional role of the Advanced Practice Registered Nurse (APRN) and all appropriate stakeholders who provide clinical services to beneficiaries when creating and evaluating quality measures. The commenter suggested that CMS ensure the committees and Technical Expert Panels tasked with developing quality measures include nurses.

    Response: We value the expertise of APRNs in providing patient care and we will consider their participation in the future.

    Comment: One commenter believed CMS should continue to work with stakeholders to make the process for selection of quality measures clear and well defined. The commenter encouraged CMS to focus on getting new, relevant measures into the program within a shorter timeframe. The commenter believed that a 2-year submission to implementation interval would hinder introduction of new measures into MIPS through the traditional approach. The commenter believed there will be growth in measures submitted to the program through QCDRs in the future.

    Response: We do not develop most of the measures, but rather measure stewards/owners submit their measures to CMS for consideration and implementation. We will work with measure developers and other stakeholders to continue to try and shorten the timeframe for measure development and implementation and to make the process as efficient as possible.

    Comment: One commenter requested that CMS promote and disseminate research on which process improvement measures have proven to be the most effective at improving clinical outcomes.

    Response: We will take this under consideration and will continue working with clinicians to promote best practices and the highest quality healthcare for clinicians and Medicare beneficiaries.

    Comment: One commenter believes we should consider how to work with measure developers to integrate patient preferences into measure design.

    Response: We agree with the commenter and believe the patient experience and incorporation of patient preferences are important components of healthcare quality.

    Comment: Commenters recommended that CMS consult with relevant eligible clinician organizations and other relevant stakeholders and reminded CMS that the MACRA statute does not require CMS to utilize the NQF MAP to provide guidance into the pre-rulemaking process on the selection of MIPS quality measures, but requires the Secretary to consult with relevant eligible clinician organizations, including state and national medical societies. To strengthen the pre-rulemaking process, commenters recommended that CMS address issues with the MAP around: voting options on individual measures; discussion and treatment of existing measures undergoing maintenance review; timelines for commenting on MAP recommendations; the make-up of the MAP coordinating committee and workgroups; and the sometimes inadequate notice for public comment (for example, agendas are often not available until close to the day of a MAP meeting). In addition, the commenters reminded CMS that requiring measure developers to propose measures to the MAP for use in CMS programs introduces another time-consuming step in the measure development cycle, and that MACRA provides CMS the flexibility in terms of how it uses the MAP.

    Response: We appreciate the commenters' feedback about the MAP, and the commenters correctly note that we retain the authority to adopt measures without the MAP's recommendations. We will continue to work with the NQF on optimizing the MAP process and will take the commenters' recommendations into consideration in future rulemaking.

    (9) Cross-Cutting Measures for 2017 and Beyond

    Under the PQRS we realized the value in requiring EPs to report a cross-cutting measure, and we proposed to continue the use of cross-cutting measures under MIPS. The cross-cutting measures help focus our efforts on population health improvement, and they also allow for meaningful comparisons between MIPS eligible clinicians. Under MIPS, we proposed fewer cross-cutting measures than those available under PQRS for 2016 reporting; however, we believe the list contains measures on which all patient-facing MIPS eligible clinicians should be able to report, as the proposed measures include commonplace health improvement activities such as checking blood pressure and medication management. We proposed to eliminate some measures for which the reporting MIPS eligible clinician may not actually be providing the care, but is instead reporting another MIPS eligible clinician's performance result. An example would be a MIPS eligible clinician who never manages a diabetic patient's glucose, yet previously could have reported a measure about hemoglobin A1c based on an encounter. This type of reporting is unlikely to help improve or confirm the quality of care the MIPS eligible clinician provides to his or her patients. We also proposed fewer cross-cutting measures because, in previous years, some measures were too specialized and could not be reported on by all MIPS eligible clinicians; the cross-cutting measures proposed under MIPS are more broadly applicable and can be reported on by most specialties. Non-patient facing MIPS eligible clinicians do not have a cross-cutting measure requirement. The cross-cutting measures that were available under PQRS for 2016 reporting that are not being proposed as cross-cutting measures for 2017 reporting are:

    • PQRS #001 (Diabetes: Hemoglobin A1c Poor Control).

    • PQRS #046 (Medication Reconciliation Post Discharge).

    • PQRS #110 (Preventive Care and Screening: Influenza Immunization).

    • PQRS #111 (Pneumonia Vaccination Status for Older Adults).

    • PQRS #112 (Breast Cancer Screening).

    • PQRS #131 (Pain Assessment and Follow-Up).

    • PQRS #134 (Preventive Care and Screening: Screening for Clinical Depression and Follow-Up Plan).

    • PQRS #154 (Falls: Risk Assessment).

    • PQRS #155 (Falls: Plan of Care).

    • PQRS #182 (Functional Outcome Assessment).

    • PQRS #240 (Childhood Immunization Status).

    • PQRS #318 (Falls: Screening for Fall Risk).

    • PQRS #400 (One-Time Screening for Hepatitis C Virus (HCV) for Patients at Risk).

    While we proposed to remove the above listed measures from the cross-cutting measure set, these measures were proposed to remain available as individual quality measures for MIPS reporting, some with proposed substantive changes.

    The following is a summary of the comments we received regarding our proposal on cross-cutting measures for 2017 and beyond.

    Comment: Some commenters supported the proposal to require reporting at least one cross-cutting measure, and suggested that CMS support the development of additional cross-cutting measures.

    Response: We appreciate the support; however, as discussed in section II.E.5.b. of this final rule with comment period, we are not finalizing the cross-cutting measure requirement in an effort to reduce program complexity as part of the transition year of CY 2017.

    Comment: Several commenters requested that CMS provide a broader selection of cross-cutting measures to choose from, stating that the list is not robust enough to allow all clinicians to meet this requirement.

    Response: We appreciate the suggestion; however, as discussed in section II.E.5.b. of this final rule with comment period, we are not finalizing the cross-cutting measure requirement as part of the transition year of CY 2017.

    Comment: One commenter requested that all eligible clinicians receive clear and timely notification of all cross-cutting and outcome measures before the start of the reporting period so that they can select and plan for a full year of quality improvement activities.

    Response: We appreciate the recommendation; however, as discussed in section II.E.5.b. of this final rule with comment period, we are not finalizing the cross-cutting measure requirement as part of the transition year of CY 2017.

    Comment: Numerous commenters did not agree with requiring all patient-facing clinicians to report one cross-cutting measure. The commenters did not believe the cross-cutting measures were important or informative for some procedural or technical sub-specialties and found them difficult to understand and implement. Further, one commenter believed that the cross-cutting measures appear to be measures that will be applicable to multiple clinician types rather than cross-sectional measures or measures that would push for community collaboration.

    Response: We appreciate the feedback and would like to note that, as discussed in section II.E.5.b. of this final rule with comment period, we are not finalizing the cross-cutting measure requirement as part of the transition year of CY 2017.

    Comment: One commenter stated that non-patient facing clinicians should be exempt from reporting a cross-cutting measure.

    Response: We would like to note that non-patient facing clinicians would have been exempt from reporting a cross-cutting measure. Further, as discussed in section II.E.5.b of this final rule with comment period, we are not finalizing the cross-cutting measure requirement as part of the transition year of CY 2017.

    Comment: A few commenters recommended that CMS work with stakeholders to develop cross-cutting measures for non-patient facing MIPS eligible clinicians, as these MIPS eligible clinicians play an important role in ensuring safe, appropriate, high-quality care. The commenters supported allowing non-patient facing MIPS eligible clinicians to report through a QCDR that can report non-MIPS measures.

    Response: We appreciate the recommendation; however, as discussed in section II.E.5.b. of this final rule with comment period, we are not finalizing the cross-cutting measure requirement as part of the transition year of CY 2017.

    Comment: A few commenters objected to the requirement that clinicians report one cross-cutting measure chosen from a list of general quality measures because it is counter to the statute's intent to allow eligible clinicians who report via QCDR the flexibility to select measures that are most relevant to their practice. The commenters urged CMS to remove the requirement that physicians reporting the quality performance category via QCDR must report on one cross-cutting measure.

    Response: We appreciate the commenters' feedback; however, as discussed in section II.E.5.b. of this final rule with comment period, we are not finalizing the cross-cutting measure requirement as part of the transition year of CY 2017.

    Comment: Several commenters disagreed with our proposal to remove various measures from the cross-cutting measure set. We also received support for some of the measures we proposed to include, as well as comments on measures that commenters did not support. Additionally, we received several recommendations of additional quality measures for potential inclusion in the cross-cutting measure set.

    Response: We appreciate the commenters' feedback and would like to note that we are not finalizing the cross-cutting measure requirement as part of the transition year of CY 2017. We would also like to note that the measures that were proposed for the cross-cutting measure set are still listed as available measures under Table A of the appendix in this final rule with comment period.

    As a result of the comments, and based on our other finalized policies, we are not finalizing the set of cross-cutting measures as proposed to reduce the complexity of the program. Rather we are incorporating these measures within the MIPS individual (Table A) and specialty measure sets (Table E) within the appendix of this final rule with comment period. We continue to value the reporting of cross-cutting measures to incentivize improvements in population health and in order to be better able to compare large numbers of physicians on core quality measures that are important to patients and the health of populations. We understand that many clinicians believe that cross-cutting measures may not apply to them. We are seeking additional comments in this final rule with comment period from the public for future notice-and-comment rulemaking on approaches to implementation of cross-cutting measures in future years of the MIPS program that could achieve these program goals and be meaningful to MIPS eligible clinicians and the patients they serve.

    d. Miscellaneous Comments

    We received a number of comments for this section that are not related to specific measure proposals as well as comments spanning multiple measure proposals that contained common themes. We have summarized those comments below.

    Comment: Numerous commenters made requests for new measures to be included in the annual list of quality measures. For example, we received several comments requesting additional measures be added that pertain to palliative care and behavioral health.

    Response: We appreciate the commenters' suggestions. We would encourage the commenters to submit potential new measures for inclusion in MIPS through the Call for Quality Measures process.

    Comment: Numerous commenters made requests for changes to existing measure specifications. For example, some commenters requested encounter codes be added or removed from measure specifications or certain denominator criteria be expanded to include additional target groups for various measures.

    Response: Although CMS has authority over all of its quality programs and measure changes within those programs, we also work with measure owners regarding the updates to measures. Measure changes are not automatically implemented within quality programs. We may adopt changes to measures in two ways: (1) For measures with substantive changes, the changes must be adopted through notice-and-comment rulemaking. Generally, measures with substantive changes are proposed through rulemaking and open for comment. (2) For measures with non-substantive or technical changes, we can consider implementing the changes through subregulatory means.

    Comment: Numerous commenters made requests for additional specialty measure sets, as well as modifications to the proposed specialty measure sets.

    Response: We appreciate the commenters' suggestions. We plan to work with the measure developers and specialty societies to continuously improve and expand the specialty measure sets in the future. Further, several comments either were not specific enough about which measures would be appropriate for a specialty measure set or concerned specialties for which there are not enough measures within the current measure set to provide a sufficient number of measures for a specific specialty set. In instances where we received comments that were specific enough to develop or modify the specialty measure sets, and which we believed were appropriate, we have included those updates along with the rationale for those changes in the measure tables in the appendix.

    Comment: We received several requests to update measure steward information in the measure tables located in the appendix.

    Response: We appreciate the commenters' feedback and have made the necessary updates to the measure steward information in the measure tables.

    Comment: Some commenters asked that physician-led specialty organizations be able to develop evidence-based quality guidelines of their own and proceed with a simple attestation procedure to document compliance.

    Response: As discussed in section II.E.5.c. of this final rule with comment period, we have an annual call for measures where clinicians have the opportunity to submit additional measures covering the services that they provide. We have also made available a measure development plan for stakeholders' review, available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf. While we recognize the appeal of a simple attestation approach, we believe it is important to receive actual performance information on how a MIPS eligible clinician or group performed on a measure, not just an attestation that the measure was done.

    Comment: A few commenters requested the adoption of appropriate use criteria (AUC) as quality measures to ensure the best care for patients. The commenters recommended that the specialty areas covered by the AUCs include: Radiology, cardiology, musculoskeletal (includes specialized therapy management, interventional pain, large joint surgery, spine surgery), radiation therapy, genetics and lab management, medical oncology, sleep medicine, specialty drug, and post-acute care. In addition, the commenters recommended that AUC be derived from leading specialty societies, be incorporated from current peer-reviewed medical literature, have input from subject matter expert clinicians and community-based physicians, be available to any eligible clinicians free of charge on a Web site, and have a proven track record of effectiveness in a wide range of practice settings. The AUC should be subject to oversight and review by nationally recognized, independent accrediting bodies, and be reviewed annually.

    Response: We are finalizing quality measures that are based on the AUC in this rule.

    Comment: One commenter promoted the value of palliative care and encouraged CMS to monitor the effects of MACRA, specifically the quality and cost performance categories, on patient access to health care providers, particularly palliative care providers.

    Response: We appreciate the suggestion. We intend to monitor the effects of the MIPS program on all aspects of care.

    We have considered the comments received and will take them into consideration in future notice-and-comment rulemaking.

    e. Cost Performance Category

    (1) Background

    (a) General Overview and Strategy

    Measuring cost is an integral part of measuring value. We envision the measures in the MIPS cost performance category would provide MIPS eligible clinicians with the information they need to provide appropriate care to their patients and enhance health outcomes. In implementing the cost performance category, we proposed to start with existing condition and episode-based measures, and the total per capita costs for all attributed beneficiaries measure (total per capita cost measure). We also proposed that all cost measures would be adjusted for geographic payment rate adjustments and beneficiary risk factors. In addition, a specialty adjustment would be applied to the total per capita cost measure. We proposed that all of the measures attributed to a MIPS eligible clinician or group would be weighted equally within the cost performance category, and there would be no minimum number of measures required to receive a score under the cost performance category. Lastly, we indicated that we plan to draw on standards for measure reliability, patient attribution, risk adjustment, and payment standardization from the VM as well as the Physician Feedback Program, as we believe many of the same measurement principles for cost measurement in the VM are applicable for measurement in the cost performance category in MIPS (81 FR 28196).
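
    The equal-weighting approach described above can be illustrated with the following minimal sketch. It is not CMS's scoring implementation; the function name, the hypothetical 0-to-10 measure score scale, and the measure labels are illustrative assumptions only. It simply averages whatever attributed cost measure scores are available, with no minimum number of measures required.

        # Illustrative sketch only; assumes each attributed cost measure has already
        # been converted to a numeric achievement score (hypothetical 0-10 scale).
        from typing import Optional

        def cost_category_score(measure_scores: dict[str, Optional[float]]) -> Optional[float]:
            # Only measures with enough attributed cases receive a score; others are None.
            scored = [s for s in measure_scores.values() if s is not None]
            if not scored:
                return None  # no score if no measures can be attributed
            # All attributed measures are weighted equally within the category.
            return sum(scored) / len(scored)

        # Example: MSPB and total per capita cost scored; an episode measure lacked cases.
        print(cost_category_score({"MSPB": 7.2, "TPCC": 5.8, "episode_hip": None}))  # 6.5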

    We proposed that all measures used under the cost performance category would be derived from Medicare administrative claims data and as a result, participation would not require use of a data submission mechanism.

    In response to public comments, as detailed in section II.E.5.e.(2) of this final rule with comment period, we are lowering the weight of the cost performance category in the MIPS final score from 10 percent in the proposed rule to 0 percent for the transition year (MIPS payment year 2019). We are finalizing a weight of 10 percent for MIPS payment year 2020. For MIPS payment year 2021 and beyond, the cost performance category will have a weight of 30 percent of the final score as required by section 1848(q)(5)(E)(i) of the Act. Reducing the weight of the cost performance category provides MIPS eligible clinicians and groups the opportunity to better understand the cost measures in MIPS without an effect on their payments, especially the impact of adjustments to the attribution methodologies and their performance based on the MIPS decile scoring system. We are also limiting the cost measures finalized for the CY 2017 performance period to those that have been included in the VM or the 2014 sQRUR and that are reliable for both individual and group reporting. We plan to continue developing care episode groups, patient condition groups, and patient relationship categories (and codes for such groups and categories). We plan to incorporate new measures as they become available and will give the public the opportunity to comment on these provisions through future notice and comment rulemaking.

    The following is a summary of the comments we received on the general provisions of cost measurement within the MIPS program.

    Comment: Several commenters supported the inclusion of cost measures as part of the MIPS program, noting the important role of clinicians in ordering services and managing care so as to avoid unnecessary services.

    Response: We thank the commenters for their support and believe that cost is an important element of the MIPS program, reflecting the key role of clinicians in guiding care decisions. However, we also consider it important to phase in cost measurement. Therefore, we are limiting the number of cost measures for the CY 2017 performance period and lowering the weight of the cost performance category to 0 percent in the final score for the transition year, 10 percent in the second MIPS payment year, and 30 percent in the third and following MIPS payment years.

    Comment: Several commenters noted concern with the inclusion of cost measures in MIPS because it could cause unethical behavior and improper reductions in care, and clinicians control only a small part of healthcare costs. Some commenters noted that clinicians do not determine the costs of services such as hospital visits, durable medical equipment, or prescription drugs. Others asked that cost measures should only be used when there is a direct tie to quality measurement.

    Response: We agree that cost should be considered in the context of quality. The statutory design of the final score incorporates both quality and cost such that they are linked in the clinician's overall assessment in MIPS. We recognize that clinicians do not personally provide, order, or determine the price of all of the individual services in the cost measures, but we believe that clinicians do have an effect on the volume and type of services that are provided to a patient through better coordination of care and improved outcomes. We plan to continue to assess the best methods for attributing cost to MIPS eligible clinicians.

    Comment: Many commenters supported cost measures being calculated using claims data so as not to add additional reporting burden. Some commenters expressed concern with cost measures solely calculated based on claims and suggested that CMS consider other measures, such as appropriate use criteria or elements of Choosing Wisely.

    Response: We agree that claims data can provide valuable information on cost and this method has the advantage of not requiring additional reporting from MIPS eligible clinicians. We appreciate that there are some potential measures related to cost that would not necessarily be calculated using claims. Some of these measures, such as appropriate use measures, are included, as appropriate, in the quality and improvement activity performance categories. We will take into consideration the commenter's suggestion related to elements of the Choosing Wisely measures in the future and determine whether they may be considered as cost measures.

    Comment: Several commenters expressed concern that the proposed measures for the cost performance category did not adequately adjust costs to account for the risks associated with different types of patients. They commented that the measures do not adjust for the socioeconomic status, patient compliance, or other non-health factors that might contribute to spending. Many of these commenters encouraged socioeconomic status to be included as a risk adjustment variable for individual measures or the entire program.

    Response: We note that we are establishing, in this final rule with comment period, the cost performance category weight as 0 percent of the final score for the transition year (MIPS payment year 2019) to allow MIPS eligible clinicians to gain experience with these measures in MIPS. Although we believe the measures are valid and reliable, we will continue to evaluate the potential impact of risk factors, including socioeconomic status, on cost measure performance. Please see section II.E.5.b.(3) for a discussion of the integration of the findings of the ASPE report on socioeconomic factors into the overall MIPS program in the future.

    Comment: Several commenters expressed concern that the risk adjustment methods used in the cost performance category would not adequately address the issues of their particular specialty or field of medicine. Many recommended that they only be compared to clinicians who had the same specialty.

    Response: We will continue to explore methods to refine our risk adjustment methods to accommodate the different types of patients treated by clinicians in the Medicare system. We are applying a specialty adjustment to the total per capita cost measure because we found, when implementing this measure as part of the VM, that there were widely divergent costs among patients treated by various specialties that were not addressed by other risk adjustment methods. The other measures we are including in the cost performance category for the CY 2017 performance period accommodate clinical differences in other ways. The MSPB measure is adjusted on the basis of the index admission diagnosis-related groups (DRGs), which is likely to differ based on the specialty of the clinician attributed to the measure. The episode-based measures are triggered on the basis of the provision of a service that identifies a type of patient who is often seen by a certain specialty or limited number of specialties and this concurrent risk adjustment is an effective predictor of episode cost. We believe that the adjustments contained in these measures adequately differentiate patient populations by different specialties and we will continue to investigate methods to ensure that the unique attributes of various medical specialties are appropriately accounted for within the program.
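
    The specialty adjustment concept referenced above can be pictured with the following sketch. It is a hedged simplification rather than the VM or MIPS computation: an expected cost is approximated as an average of assumed specialty-level national average costs, weighted by a group's specialty composition, and observed cost is then compared against that expectation. The function name and the input values are hypothetical.

        # Hypothetical illustration of a specialty-composition adjustment.
        def specialty_adjusted_expected_cost(specialty_shares: dict[str, float],
                                             national_avg_cost_by_specialty: dict[str, float]) -> float:
            # specialty_shares: fraction of the group's billing in each specialty; sums to 1.
            return sum(share * national_avg_cost_by_specialty[spec]
                       for spec, share in specialty_shares.items())

        shares = {"cardiology": 0.6, "primary_care": 0.4}
        national_avg = {"cardiology": 14000.0, "primary_care": 9000.0}
        expected = specialty_adjusted_expected_cost(shares, national_avg)
        observed = 12500.0
        print(expected, observed / expected)  # 12000.0 and the observed-to-expected ratio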

    Comment: Some commenters expressed concern that cost measures would discourage the development of new therapies. One commenter suggested that CMS not include the costs of new technology within cost measures.

    Response: We wish to ensure that cost measurement does not hinder the appropriate uptake of new technologies. One challenge of new technologies is that the costs are not represented in the historical benchmarks. However, we are finalizing a policy to create benchmarks for the cost measures based on the performance period, so the benchmarks will build in the costs associated with adoption of new technologies in that period. We also anticipate that new technologies may reduce the need for other services, which could further reduce the cost of care. We believe that excluding new technology from the cost measures is not appropriate when the technology is being paid for by the Medicare program and its beneficiaries, but we will continue to monitor this issue to determine whether adjustments should be made in the future.
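
    As a rough illustration of the performance-period benchmarking approach mentioned above, the sketch below computes benchmark deciles from measure values observed during the same performance period, so costs associated with newly adopted technologies in that period are reflected in the benchmark. This is an assumption-laden simplification, not the actual benchmarking methodology; the data and function name are hypothetical.

        # Hypothetical sketch: build benchmark deciles from the performance period itself,
        # rather than from a historical baseline period.
        import statistics

        def performance_period_deciles(measure_values: list[float]) -> list[float]:
            # statistics.quantiles with n=10 returns the 9 cut points separating deciles.
            return statistics.quantiles(measure_values, n=10, method="inclusive")

        period_costs = [9800, 10250, 10400, 11100, 11900, 12300, 13050, 14200, 15500, 18000]
        print(performance_period_deciles(period_costs))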

    (b) MACRA Requirements

    Section 1848(q)(2)(A)(ii) of the Act establishes cost as a performance category under the MIPS. Section 1848(q)(2)(B)(ii) of the Act describes the measures of the cost performance category as the measurement of resource use for a MIPS performance period under section 1848(p)(3) of the Act, using the methodology under section 1848(r) of the Act as appropriate, and, as feasible and applicable, accounting for the cost of drugs under Part D.

    As discussed in section II.E.5.e.(1)(c) of the proposed rule, we previously established in rulemaking the VM, as required by section 1848(p) of the Act, that provides for differential payment to a physician or a group of physicians (and EPs as the Secretary determines appropriate) under the PFS based on the quality of care furnished compared to cost. For the evaluation of costs of care, section 1848(p)(3) of the Act refers to appropriate measures of costs established by the Secretary that eliminate the effect of geographic adjustments in payment rates and take into account risk factors (such as socioeconomic and demographic characteristics, ethnicity, and health status of individuals, such as to recognize that less healthy individuals may require more intensive interventions) and other factors determined appropriate by the Secretary.

    Section 1848(r) of the Act specifies a series of steps and activities for the Secretary to undertake to involve the physician, practitioner, and other stakeholder communities in enhancing the infrastructure for cost measurement, including for purposes of MIPS and APMs. Section 1848(r)(2) of the Act requires the development of care episode and patient condition groups, and classification codes for such groups. That section provides for care episode and patient condition groups to account for a target of an estimated one-half of expenditures under Medicare Parts A and B (with this target increasing over time as appropriate). We are required to take into account several factors when establishing these groups. For care episode groups, we must consider the patient's clinical issues at the time items and services are furnished during an episode of care, such as clinical conditions or diagnoses, whether or not inpatient hospitalization occurs, the principal procedures or services furnished, and other factors determined appropriate by the Secretary. For patient condition groups, we must consider the patient's clinical history at the time of a medical visit, such as the patient's combination of chronic conditions, current health status, and recent significant history (such as hospitalization and major surgery during a previous period), and other factors determined appropriate. We are required to post on the CMS Web site a draft list of care episode and patient condition groups and codes for solicitation of input from stakeholders, and subsequently, post on the CMS Web site an operational list of such groups and codes. As required by section 1848(r)(2)(H) of the Act, no later than November 1 of each year (beginning with 2018), the Secretary shall, through rulemaking, revise the operational list as the Secretary determines may be appropriate.

    To facilitate the attribution of patients and episodes to one or more clinicians, section 1848(r)(3) of the Act requires the development of patient relationship categories and codes that define and distinguish the relationship and responsibility of a physician or applicable practitioner with a patient at the time of furnishing an item or service. These categories shall include different relationships of the clinician to the patient and reflect various types of responsibility for and frequency of furnishing care. We are required to post on the CMS Web site a draft list of patient relationship categories and codes for solicitation of input from stakeholders, and subsequently, post on the CMS Web site an operational list of such categories and codes. As required by section 1848(r)(3)(F) of the Act, not later than November 1 of each year (beginning with 2018), the Secretary shall, through rulemaking, revise the operational list as the Secretary determines may be appropriate.

    Section 1848(r)(4) of the Act requires that claims submitted for items and services furnished by a physician or applicable practitioner on or after January 1, 2018, shall, as determined appropriate by the Secretary, include the applicable codes established for care episode groups, patient condition groups, and patient relationship categories under sections 1848(r)(2) and (3) of the Act, as well as the NPI of the ordering physician or applicable practitioner (if different from the billing physician or applicable practitioner).

    Under section 1848(r)(5) of the Act, to evaluate the resources used to treat patients, the Secretary shall, as determined appropriate, use the codes reported on claims under section 1848(r)(4) of the Act to attribute patients to one or more physicians and applicable practitioners and as a basis to compare similar patients, and conduct an analysis of resource use. In measuring such resource use, the Secretary shall use per patient total allowed charges for all services under Medicare Parts A and B (and, if the Secretary determines appropriate, Medicare Part D) and may use other measures of allowed charges and measures of utilization of items and services. The Secretary shall seek comments through one or more mechanisms (other than notice and comment rulemaking) from stakeholders regarding the resource use methodology established under section 1848(r)(5) of the Act.

    On October 15, 2015, as required by section 1848(r)(2)(B) of the Act, we posted on the CMS Web site for public comment a list of the episode groups developed under section 1848(n)(9)(A) of the Act with a summary of the background and context to solicit stakeholder input as required by section 1848(r)(2)(C) of the Act. That posting is available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/MACRA-MIPS-and-APMs.html. The public comment period closed on February 15, 2016.

    (c) Relationship to the Value Modifier

    Currently, the VM established under section 1848(p) of the Act utilizes six cost measures (see 42 CFR 414.1235): (1) A total per capita costs for all attributed beneficiaries measure (which we will refer to as the total per capita cost measure); (2) a total per capita costs for all attributed beneficiaries with chronic obstructive pulmonary disease (COPD) measure; (3) a total per capita costs for all attributed beneficiaries with congestive heart failure (CHF) measure; (4) a total per capita costs for all attributed beneficiaries with coronary artery disease (CAD) measure; (5) a total per capita costs for all attributed beneficiaries with diabetes mellitus (DM) measure; and (6) an MSPB measure.

    Total per capita costs (measures 1-5) and the MSPB measure include payments under both Medicare Part A and Part B, but do not include Medicare payments under Part D for drug expenses. Cost measures for the VM are attributed at the physician group and solo practice level using the Medicare-enrolled billing TIN. They are risk adjusted and payment standardized, and the expected cost is adjusted for the TIN's specialty composition. We refer readers to our discussions of these total per capita cost measures (76 FR 73433 through 73434, 77 FR 69315 through 69316), MSPB measure (78 FR 74774 through 74780, 80 FR 71295 through 71296), payment standardization methodology (77 FR 69316 through 69317), risk adjustment methodology (77 FR 69317 through 69318), and specialty adjustment methodology (78 FR 74781 through 74784) in earlier rulemaking for the VM. More information about these measures may be found in documents under the links titled “Measure Information Form: Overall Total Per Capita Cost Measure,” “Measure Information Form: Condition-Specific Total Per Capita Cost Measures,” and “Measure Information Form: Medicare Spending Per Beneficiary Measure” available at https://www.cms.gov/medicare/medicare-fee-for-service-payment/physicianfeedbackprogram/valuebasedpaymentmodifier.html.

    The total per capita cost measures use a two-step attribution methodology that is similar to, but not exactly the same as, the assignment methodology used for the Shared Savings Program. The attribution focuses on the delivery of primary care services (77 FR 69320) by both primary care clinicians and specialists. The MSPB measure has a different attribution methodology. It is attributed to the TIN that provides the plurality of Medicare Part B claims (as measured by allowed charges) during the index inpatient hospitalization. We refer readers to the discussion of our attribution methodologies (77 FR 69318 through 69320, 79 FR 67960 through 67964) in prior rulemaking for the VM.
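
    As a simplified illustration of the plurality rule described above for the MSPB measure, the sketch below attributes an index hospitalization to the billing TIN with the largest share of Medicare Part B allowed charges during that hospitalization. The data structures and function name are hypothetical, and this is not the actual attribution code.

        # Illustrative sketch of plurality attribution; data structures are hypothetical.
        from collections import defaultdict
        from typing import Optional

        def attribute_mspb_episode(part_b_claims: list[dict]) -> Optional[str]:
            # part_b_claims: claims billed during the index inpatient hospitalization,
            # each with a billing "tin" and its "allowed_charges".
            charges_by_tin: dict[str, float] = defaultdict(float)
            for claim in part_b_claims:
                charges_by_tin[claim["tin"]] += claim["allowed_charges"]
            if not charges_by_tin:
                return None
            # The TIN with the plurality (largest share) of allowed charges is attributed.
            return max(charges_by_tin, key=charges_by_tin.get)

        claims = [{"tin": "TIN-A", "allowed_charges": 420.0},
                  {"tin": "TIN-B", "allowed_charges": 310.0},
                  {"tin": "TIN-A", "allowed_charges": 150.0}]
        print(attribute_mspb_episode(claims))  # TIN-A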

    These total per capita cost measures include payments for a calendar year and have been reported to TINs for several years through the Quality and Resource Use Reports (QRURs), which are issued as part of the Physician Feedback Program under section 1848(n) of the Act. The total per capita cost measures have been used in the calculation of the VM payment adjustments beginning with the 2015 payment adjustment period and the MSPB measure has been used in the calculation of the VM payment adjustments beginning with the 2016 payment adjustment period. More information about the current attribution methodology for these measures is available in the “Fact Sheet for Attribution in the Value-Based Payment Modifier Program” document available at https://www.cms.gov/medicare/medicare-fee-for-service-payment/physicianfeedbackprogram/valuebasedpaymentmodifier.html.

    In the MIPS and APMs RFI (80 FR 59102 through 59113), we solicited feedback on the cost performance category. A summary of those comments is located in the proposed rule (81 FR 28198).

    (2) Weighting in the Final Score

    As required by section 1848(q)(5)(E)(i)(II)(bb) of the Act, the cost performance category shall make up no more than 10 percent of the final score for the first MIPS payment year (CY 2019) and no more than 15 percent of the final score for the second MIPS payment year (CY 2020). Therefore, we proposed at § 414.1350 that the cost performance category would make up 10 percent of the final score for the first MIPS payment year (CY 2019) and 15 percent of the final score for the second MIPS payment year (CY 2020) (81 FR 28384). As required by section 1848(q)(5)(E)(i)(II)(aa) of the Act and as proposed at § 414.1350 (81 FR 28384), starting with the third MIPS payment year and for each MIPS payment year thereafter, the cost performance category would make up 30 percent of the final score.

    The following is a summary of the comments we received regarding our proposals for the cost performance category weight in the final score for the first and second MIPS payment years.

    Comment: Several commenters supported the weighting of the cost performance category as 10 percent of the MIPS final score for 2019. However, we also had many commenters that encouraged us to reduce the weight of the cost performance category to as low as 0 percent for 2019 due to lack of familiarity with cost measures. Other commenters recommended a delay in the inclusion of the cost performance category within the final score because attribution methods did not properly identify the clinician who was responsible for the care and patients could be attributed to clinicians who had little influence on their overall care. Others recommended delay because risk adjustment methods based on administrative data could not properly capture the clinical risk differences among patients, placing clinicians who see more complex patients at a disadvantage. Others noted that more time was needed to perfect cost measures. Others recommended that cost measures be attributed to only those clinicians who volunteer to participate in a pilot in the transition year.

    Response: Clinicians have received feedback on cost measures through the VM and the Physician Feedback Program reports for a number of years; however, we agree that clinicians may need time to become familiar with cost measures in MIPS. The approach to cost measurement in MIPS differs from the VM calculation and the Physician Feedback Program in two significant ways. The first major difference is that MIPS allows clinicians to elect to participate and be measured as individuals. In MIPS, we have finalized a policy in section II.E.5.a.(2) of this rule that those that elect to participate in MIPS as groups must be assessed for all performance categories as groups. Conversely, those that elect to participate in MIPS as individual clinicians will be measured on all four performance categories as individuals. With the exception of solo practitioners (defined for the VM as a single TIN with one EP identified by an NPI billing under the TIN), the VM evaluates performance at the aggregate group level. For example, a surgeon in a multi-specialty group who elects to participate in MIPS as an individual would receive feedback on the cost measures attributed to him or her individually as opposed to that of the entire group. Second, as discussed in section II.E.5.e.(3)(c) of this final rule with comment period, to facilitate participation at the individual level, we will attribute cases at the TIN/NPI level, rather than at the TIN level, as is done currently under the VM. While this would not make a difference for those in solo practice, it would present a significant change for those that practice in groups and participate in MIPS as individuals. Even for groups that have received QRURs on cost measures under the VM, this global change to the attribution logic is likely to change the attributed cases, which in turn could affect their performance on cost measures.

    In addition, as discussed in section II.E.6.a.(3) of this final rule with comment period, scoring for the cost performance category under MIPS is different from the VM because it is based on performance within a decile system as opposed to the quality-tiering scoring system used in the VM. A group or solo practitioner that scored in the average range under the VM quality-tiering methodology may be scored “above average” or “below average” in MIPS because of the difference in the scoring methods. We believe it is important for this transition year for MIPS eligible clinicians to have the opportunity to become familiar with the attribution changes and the scoring changes by receiving performance feedback showing what their performance on the cost measures will look like under the MIPS attribution and scoring rules before cost measures affect payment.

    Section 1848(q)(5)(E)(i)(II)(bb) of the Act provides that for the first and second MIPS payment years, “not more than” 10 percent and 15 percent, respectively, of a MIPS eligible clinician's final score shall be based on performance in the cost performance category. Accordingly, we believe that the statute affords discretion to adopt a weighting for the cost performance category lower than 10 percent and 15 percent for the first and second payment years, respectively. For the reasons described above, we believe that a transition period is appropriate, and we are lowering the weight of the cost performance category for the first and second MIPS payment years. We are not finalizing our proposal for a weighting of 10 percent for the transition year and 15 percent for the second MIPS payment year. Instead, we are finalizing a weighting of 0 percent for the transition year and 10 percent for the second MIPS payment year.

    We are not reducing the weight of the cost category due to concerns with attribution, risk adjustment, or the measure specifications. We intend to continue improving all aspects of the cost measures, but we believe our final methods are sound. However, due to the changes in scoring and attribution, we agree that MIPS eligible clinicians should have more time to become familiar with these measures in the context of MIPS. Finally, we do not believe we should restrict the cost performance category to a pilot. MIPS eligible clinicians are not required to submit data and the cost performance category does not contribute to the final score for the transition year. Therefore, we will calculate a cost performance category score for all MIPS eligible clinicians for whom we can reliably calculate a score.

    Comment: Many commenters encouraged CMS to defer assigning any weight to the cost performance category for MIPS until patient relationship codes have been in use.

    Response: Section 1848(r)(3) of the Act requires us to develop patient relationship categories and codes that define and distinguish the relationship and responsibility of a physician or applicable practitioner with a patient. We are currently reviewing comments received on the draft list of patient relationship categories and will post an operational list of these categories and codes in April 2017. We disagree with commenters that we should wait until the patient relationship codes are in use before measuring cost. While we believe that these patient relationship codes can be an important contributor to better clarifying the particular role of a clinician in patient care, these codes will not be developed in time for the first MIPS performance period. Moreover, section 1848(r)(4) directs that such codes shall be included, as determined appropriate by the Secretary, on claims for items and services furnished on or after January 1, 2018. Following their inclusion on claims, we will need time to evaluate how best to incorporate those codes into cost measures. While this additional analysis of patient relationship codes takes place, the cost performance category will remain an important part of the MIPS. In their current form, we find the cost measures adopted in this final rule with comment period both reliable and valid.

    After consideration of the comments, we believe that a transition period for measuring cost would be appropriate; therefore, we are not finalizing the weighting of the cost performance category in the MIPS final score as proposed. Instead, we are finalizing at § 414.1350(b) a weighting of 0 percent for the 2019 MIPS payment year and 10 percent for the 2020 MIPS payment year. Starting with the 2021 MIPS payment year, the cost performance category will be weighted at 30 percent, as required by section 1848(q)(5)(E)(i)(II)(aa) of the Act. We recognize that the individual attribution of cost measures for MIPS eligible clinicians in group practices and the new MIPS scoring system are changes for clinicians, and we would like to give them an opportunity to gain experience with the cost measures before increasing the weight of the performance category within the final score.
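
    For readers who want to see the finalized weighting schedule in one place, the following is a minimal illustrative sketch in Python; it is not CMS code, and the function name and structure are assumptions made only for illustration.

```python
# Illustrative sketch only; not CMS code. Weights reflect the finalized policy
# described above: 0 percent (2019), 10 percent (2020), and 30 percent
# (2021 MIPS payment year and later).

def cost_category_weight(payment_year: int) -> float:
    """Return the finalized cost performance category weight for a MIPS payment year."""
    if payment_year <= 2019:
        return 0.00   # transition year: cost does not contribute to the final score
    if payment_year == 2020:
        return 0.10
    return 0.30       # 2021 MIPS payment year and beyond

# Example: a hypothetical cost performance category score of 80 (out of 100)
# would contribute 80 * 0.10 = 8 points toward a 2020 final score.
print(cost_category_weight(2019), cost_category_weight(2020), cost_category_weight(2021))
```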

    (3) Cost Criteria

    As discussed in section II.E.5.a. of the proposed rule (81 FR 28181), performance in the cost performance category would be assessed using measures based on administrative Medicare claims data. We did not propose any additional data submissions for the cost performance category. As such, MIPS eligible clinicians and groups would be assessed on cost only for Medicare patients and only for patients that are attributed to them. MIPS eligible clinicians or groups that do not have enough attributed cases to meet or exceed the case minimums proposed in sections II.E.5.e.(3)(a)(ii) and II.E.5.e.(3)(b)(ii) of the proposed rule would not be measured on cost. For more discussion of MIPS eligible clinicians and groups without a cost performance category score, please refer to sections II.E.6.a.(3)(d) and II.E.6.b.(2) of this final rule with comment period.

    (a) Value Modifier Cost Measures Proposed for the MIPS Cost Performance Category

    For purposes of assessing performance of MIPS eligible clinicians on the cost performance category, we proposed at § 414.1350(a) to specify cost measures for a performance period (81 FR 28384). For the CY 2017 MIPS performance period, we proposed to utilize the total per capita cost measure, the MSPB measure, and several episode-based measures discussed in section II.E.5.e.(3)(b) of the proposed rule (81 FR 28200) for the cost performance category. The total per capita cost measure and the MSPB measure are described in section II.E.5.e.(1)(c) of the proposed rule (81 FR 28197). We proposed to include the total per capita cost measure because it is a global measure of all Medicare Part A and Part B resource use during the MIPS performance period, it is inclusive of the four condition-specific total per capita cost measures under the VM (chronic obstructive pulmonary disease, congestive heart failure, coronary artery disease, and diabetes mellitus), for which performance tends to be correlated, and its inclusion was supported by commenters on the MIPS and APMs RFI (80 FR 59102 through 59113). We also anticipate that MIPS eligible clinicians are familiar with the total per capita cost measure, as the measure has been in the VM since 2015 and feedback has been reported through the annual QRUR to all groups starting in 2014.

    We proposed to adopt the MSPB measure because by the beginning of the initial MIPS performance period in 2017, we believe most MIPS eligible clinicians will be familiar with the measure in the VM or its variant under the Hospital Value-Based Purchasing (VBP) Program. However, we proposed two technical changes to the MSPB measure calculations for purposes of its adoption in MIPS which were discussed in the proposed rule at 81 FR 28200.

    We proposed to use the same payment standardization and risk adjustment methodologies for these measures in the cost performance category as are defined for the VM. For more details on the previously adopted payment standardization methodology, see 77 FR 69316 through 69317. For more details on the previously adopted risk adjustment methodology, see 77 FR 69317 through 69318.

    We did not propose to include the four condition-specific total per capita cost measures (chronic obstructive pulmonary disease, congestive heart failure, coronary artery disease, and diabetes mellitus). Instead, we generally proposed to assess performance in part using the episode-based measures (81 FR 28200). This shift is in response to feedback received as part of the MIPS and APMs RFI (80 FR 59102 through 59113). In the MIPS and APMs RFI, commenters stated that they do not believe the existing condition-specific total per capita cost measures under the VM are relevant to their practice and expressed support for episode-based measures under MIPS.

    The following is a summary of the comments we received regarding our proposal to include the total per capita cost measure and the MSPB measure as cost measures.

    Comment: Several commenters supported the inclusion of the total per capita cost measure.

    Response: We will include the total per capita cost measure in the CY 2017 performance period.

    Comment: Several commenters opposed the inclusion of the total per capita cost measure because it was developed to measure hospitals.

    Response: We believe that the commenters may have confused the total per capita cost measure with the MSPB measure, which was originally developed for use in the Hospital Value-Based Purchasing program and is triggered on the basis of an index admission. The total per capita cost measure was neither developed for nor ever used to measure quality or cost for hospitals in a Medicare program. Many patients who are attributed under the total per capita cost measure are not admitted to a hospital in a calendar year. The total per capita cost measure has been part of the VM program since its inception.

    Comment: A commenter opposed the inclusion of the total per capita cost measure because it focused on primary care.

    Response: The MIPS program aims to measure the cost of all clinicians, both primary care clinicians and specialists. While the total per capita cost measure uses a primary care attribution method and may be more likely to be attributed to clinicians who provide primary care, other measures may be more likely to be attributed to specialists. Including a diversity of measures allows the program to measure all types of clinicians.

    Comment: A commenter opposed the inclusion of the total per capita cost measure and instead urged CMS to speed development of episode-based measures.

    Response: We plan to incorporate episode-based measures within the cost performance category of the MIPS program. We proposed to include 41 episode-based measures for the CY 2017 performance period (81 FR 28200) and plan to continue to develop more episode groups. However, we believe there is value in continuing to include the total per capita cost measure as well. Not all patients will necessarily be attributed in episode-based measures, and the total per capita cost measure is currently the best measure covering all patients.

    Comment: A commenter supported the CMS decision not to propose for the cost performance category the four condition-specific total per capita cost measures that are used in the Value Modifier because they are duplicative of the total per capita cost measure covering all patients. Several commenters recommended that the four condition-specific total per capita cost measures be used in the cost performance category.

    Response: We intend to use episode-based measures for specific disease focus areas in future years. We believe that the design of episode-based measures which incorporate clinical input and distinguish related from unrelated services will better allow clinicians to improve performance on a particular population of patients. We will not include the four condition-specific total per capita cost measures in MIPS.

    Comment: Several commenters opposed the inclusion of a specialty adjustment within the total per capita cost measure because this adjustment would reward specialties that provide more expensive treatments.

    Response: The specialty adjustment for the total per capita cost measure has been used since the 2016 VM, which was based on 2014 data. We reviewed the different expected costs associated with various specialties as part of the CY 2014 PFS rulemaking and found substantial differences in average costs for attributed patients. For example, specialties such as medical oncology tend to treat relatively costly beneficiaries and bill for expensive Part B drugs but other specialties such as dermatology tend to treat low cost patients. Although cost data are adjusted to account for differences in patient characteristics, the effects of this adjustment do not fully account for the differences in costs associated with different specialties under this measure; therefore, we believe this adjustment is still warranted in MIPS. We are open to ways to improve the risk adjustment of this measure in the future to ensure that it appropriately evaluates all specialties of medicine.

    Comment: Several commenters supported the inclusion of a specialty adjustment within the total per capita cost measure because patients who become sick often seek more care from specialists and their expected costs would not be reflected within the risk adjustment methodology.

    Response: We believe the specialty adjustment is a necessary element of the total per capita cost measure. The MSPB and episode-based measures are designed with expected costs based in part on the clinical condition or procedure that triggers an episode. However, the total per capita cost measure is risk adjusted only on the basis of clinical conditions before the performance period. This risk adjustment cannot completely accommodate changes in source of care that are the result of new onset illness during the performance period. The specialty adjustment helps to accommodate for the differences in the types of patients seen by different specialists.

    Comment: A commenter recommended that costs associated with a hospital visit should not be included in the total per capita cost measure because multiple physicians are often involved.

    Response: We do not believe that excluding hospital services from the total per capita cost measure would be consistent with an overall focus on care coordination that may extend to periods when a patient is hospitalized.

    Comment: Several commenters supported the inclusion of the MSPB Measure.

    Response: We believe that this measure is both familiar to clinicians from use in the VM and QRUR and reflects a period of care in which a clinician may be able to influence cost. We will finalize the MSPB measure.

    Comment: Several commenters opposed the inclusion of the MSPB measure because it was developed to measure hospitals. Others suggested that it not be included in MIPS until it had been analyzed for use in a clinician program. Several comments opposed the inclusion of the MSPB measure because it focuses on primary care. Other commenters suggested the episode-based measures better measured specialists.

    Response: While this measure was originally used as part of the Hospital Value-Based Purchasing program, the MSPB measure has also been used in the VM, a clinician program, since 2016, and we continue to believe that the clinician who provides a significant number of services during a hospital visit also has some responsibility for overall cost. We also see value in using common measures to create parallel incentives for hospitals and MIPS eligible clinicians to coordinate care and achieve efficiencies. We believe that the MSPB measure will be attributed to all clinicians who provide significant care in the hospital, including specialists and primary care clinicians to the extent that they admit patients to the hospital. If a clinician does not provide hospital services, that clinician will not be attributed any cases to be scored on the measure.

    Comment: Several commenters expressed concern that cost measures could attribute patients for services before they are seen by the clinician to whom they are attributed. For example, a clinician could take over responsibility for primary care of a patient who had experienced health difficulties in the earlier part of the year that resulted in emergency room visits and hospital admissions that were partly the result of a lack of care coordination. This patient may not have had more than one visit with a particular clinician before this new clinician took over, resulting in all costs being attributed to the individual once he or she billed for two office visits for that patient.

    Response: Our attribution methods aim to measure the influence of a clinician on the cost of care of his or her patients. In some cases, certain elements within the cost measure may not be directly related to the performance of the attributed clinician. We aim to address this by requiring a minimum case volume and risk adjusting so that clinicians are compared on the basis of similar patient populations. We will continue to work with stakeholders to improve cost measures.

    Comment: Several commenters noted that the same costs could be included in the total per capita cost measure, the MSPB measure, and the episode-based measures and suggested that costs should only be counted once for an individual physician.

    Response: We believe that attempting to remove costs from one measure because they are reflected in another measure would make it much harder for clinicians to understand their overall performance on measures within the cost performance category. Measures are constructed to capture various components of care. In some cases, a clinician or group may provide primary care or episodic care for the same patient and we believe that costs should be considered in all relevant measures to make the measure performance comparable between MIPS eligible clinicians.

    Comment: One commenter recommended that CMS use a total cost of care measure developed using a different methodology that is not limited to Medicare and instead captures data from all payer claims databases.

    Response: We are unaware of a national data source that would allow us to accurately capture cost data across all payers. Therefore, we are limited to using Medicare cost data for the total per capita cost measure. Following our consideration of the comments, we will finalize our proposal to include the total per capita cost measure and the MSPB measure within the MIPS cost performance category for the CY 2017 performance period. We believe these measures have the advantage of having been used within the VM and of covering a broad population of patients.

    (i) Attribution

    In the VM, all cost measures are attributed to a TIN. In MIPS, however, we proposed to evaluate performance at the individual and group levels. Please refer to section II.E.5.e.(3)(c) of this rule for our discussion of attribution differences for individuals and groups. For purposes of this section, we will use the general term MIPS eligible clinicians to indicate attribution for individuals or groups.

    For the MSPB measure, we proposed to use attribution logic that is similar to what is used in the VM. The episode would be assigned to the MIPS eligible clinician with the plurality of claims (as measured by allowed charges) for Medicare Part B services rendered during an inpatient hospitalization that is an index admission for the MSPB measure during the applicable performance period. The only difference from the VM attribution methodology would be that the MSPB measure would be assigned differently for individuals than for groups. For the total per capita cost measure, we proposed to use a two-step attribution methodology that is similar to the methodology used in the 2017 and 2018 VM. We also proposed to use the same two-step attribution process for the claims-based population measures in the quality performance category (81 FR 28192), CMS Web Interface measures, and CAHPS for MIPS. However, we also proposed to make some modifications to the primary care services definition that is used in the attribution methodology to align with policies adopted under the Shared Savings Program.
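
    As a rough illustration of the plurality-of-allowed-charges assignment described above, the following Python sketch is provided; it is not CMS code, the data layout and field names are assumptions, and actual attribution involves additional rules (such as individual versus group reporting) that are not shown.

```python
from collections import defaultdict

# Minimal sketch of plurality-based MSPB attribution; field names are assumed.
# Each claim line: (clinician_id, allowed_charge) for Part B services rendered
# during the index admission of a single MSPB episode.

def attribute_mspb_episode(claim_lines):
    """Assign the episode to the clinician with the plurality of allowed charges."""
    totals = defaultdict(float)
    for clinician_id, allowed_charge in claim_lines:
        totals[clinician_id] += allowed_charge
    if not totals:
        return None  # no Part B claims during the index admission; episode unattributed
    return max(totals, key=totals.get)

# Example: clinician "B" billed the plurality of allowed charges, so the
# episode is attributed to "B".
episode_claims = [("A", 120.0), ("B", 310.0), ("A", 95.0), ("C", 40.0)]
print(attribute_mspb_episode(episode_claims))  # -> "B"
```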

    The VM currently defines primary care services as the set of services identified by the following Healthcare Common Procedure Coding System (HCPCS)/CPT codes: 99201 through 99215, 99304 through 99340, 99341 through 99350, the welcome to Medicare visit (G0402), and the annual wellness visits (G0438 and G0439). We proposed to update this set to include new care coordination codes that have been implemented in the PFS: the transitional care management (TCM) codes (CPT codes 99495 and 99496) and the chronic care management (CCM) code (CPT code 99490). These services were added to the primary care service definition used by the Shared Savings Program in June 2015 (80 FR 32746 through 32748). We believe that these care coordination codes would also be appropriate for attributing services in MIPS.

    In the CY 2016 PFS final rule, we also finalized another modification to the primary care service definition for the Shared Savings Program: the exclusion of nursing visits that occur in a skilled nursing facility (SNF) (80 FR 71271 through 71272). Patients in SNFs (place of service (POS) 31) are generally shorter stay patients who are receiving continued acute medical care and rehabilitative services. While their care may be coordinated during their time in the SNF, they are then transitioned back to the community. Patients in a SNF (POS 31) require more frequent practitioner visits—often from 1 to 3 times a week. In contrast, patients in nursing facilities (NFs) (POS 32) are almost always permanent residents and generally receive their primary care services in the facility for the duration of their life. Patients in the NF (POS 32) are usually seen every 30 to 60 days unless medical necessity dictates otherwise. We believe that it would be appropriate to follow a similar policy in MIPS; therefore, we proposed to exclude from the definition of primary care services those services billed under CPT codes 99304 through 99318 when the claim includes POS code 31.
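
    The updated primary care service definition described above can be summarized in a short sketch. This is an illustration only, not CMS code; the helper function is hypothetical and the code ranges are handled at a high level rather than enumerated.

```python
# Illustrative sketch of the updated primary care service definition; not CMS code.
# Code ranges follow the text above: E/M codes 99201-99215, 99304-99340, and
# 99341-99350, the welcome to Medicare visit (G0402), the annual wellness visits
# (G0438, G0439), plus the added TCM codes (99495, 99496) and CCM code (99490).
# SNF visit codes (99304-99318) billed with place of service 31 are excluded.

CPT_RANGES = [(99201, 99215), (99304, 99340), (99341, 99350)]
G_CODES = {"G0402", "G0438", "G0439"}
ADDED_CODES = {"99495", "99496", "99490"}  # TCM and CCM additions

def is_primary_care_service(hcpcs_code: str, place_of_service: str) -> bool:
    """Return True if a claim line counts as a primary care service for attribution."""
    if hcpcs_code in G_CODES or hcpcs_code in ADDED_CODES:
        return True
    if hcpcs_code.isdigit():
        code = int(hcpcs_code)
        # Exclusion: SNF visit codes billed with POS 31 do not count.
        if 99304 <= code <= 99318 and place_of_service == "31":
            return False
        return any(low <= code <= high for low, high in CPT_RANGES)
    return False

print(is_primary_care_service("99306", "31"))  # False: SNF visit excluded
print(is_primary_care_service("99306", "32"))  # True: NF visit still counts
print(is_primary_care_service("99490", "11"))  # True: added CCM code
```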

    We believe that making these two modifications would help align the primary care service definition between MIPS and the Shared Savings Program and would improve the results from the two-step attribution process.

    We note, however, that while we are aligning the definition of primary care services, the two-step attribution for MIPS would be different from the one used for the Shared Savings Program. We believe there are valid reasons to have differences between MIPS and Shared Savings Program attribution. For example, as discussed in the CY 2015 PFS final rule (79 FR 67960 through 67962), we eliminated from the VM the primary care service pre-step that is statutorily required for the Shared Savings Program. We noted that without the pre-step, the beneficiary attribution method would more appropriately reflect the multiple ways in which primary care services are provided, which are not limited to physician groups. As MIPS eligible clinicians include more than physicians, we continue to believe it is appropriate to exclude the pre-step.

    In addition, in the 2015 Shared Savings Program final rule, we finalized a policy for the Shared Savings Program that we did not extend to the VM two-step attribution: To exclude select specialties (such as several surgical specialties) from the second attribution step (80 FR 32749 through 32754). We do not believe it is appropriate to restrict specialties from the second attribution step for MIPS. If such a policy were adopted under MIPS, then all specialists on the exclusion list, unless they were part of a multispecialty group, would automatically be excluded from measurement on the total per capita cost measure, as well as on claims-based population measures which rely on the same two-step attribution. While we do not believe that many MIPS eligible clinicians or groups with these specialties would be attributed enough cases to meet or exceed the case minimum, we believe that an automatic exclusion could remove some MIPS eligible clinicians and groups that should be measured for cost.

    We requested comments on these proposed changes.

    The following is a summary of the comments we received regarding our proposal to use the attribution methods from the VM for the MSPB and total per capita cost measure with changes to the definition of primary care services.

    Comment: Some commenters recommended that attribution be based in part on a patient attestation of their relationship with a clinician.

    Response: We do not currently have a method for patients to attest to their relationship with a clinician so are unable to incorporate this mechanism into cost measures at this time. We will continue to work on improving attribution.

    Comment: Several commenters opposed the attribution method used in the MSPB of assigning patients to all physicians who provided at least 30 percent of inpatient care, indicating that the attribution method had not been fully tested.

    Response: The MSPB measure attributes patients to the clinician that provided the plurality of Medicare Part B charges during the index admission, not to all clinicians who provide at least 30 percent of inpatient care. We believe that this method is the best way to identify the single clinician who most influenced the care during a given hospital admission.

    Comment: A commenter supported the exclusion of skilled nursing facility codes from the list of codes used to attribute the total per capita cost measure because patients in skilled nursing facilities require high intensity time-limited care.

    Response: We are finalizing the exclusion of skilled nursing facility codes as proposed.

    Comment: Several commenters expressed concern that incident-to billing practices, in which physicians bill for services provided by other clinicians such as nurse practitioners or physician assistants, obscure the actual clinician providing care and make attribution difficult. A commenter suggested that a new modifier be created to indicate when a service was provided under incident-to rules.

    Response: “Incident to” billing is allowed, consistent with § 410.26 of our regulations, when auxiliary personnel provide services that are an integral, though incidental, part of the service of a clinician, and are commonly furnished without charge or included in the bill of a clinician. “Incident to” services are furnished under the supervision of the billing clinician, and with certain narrow exceptions, under direct supervision. These services are billed and paid under the PFS as if the billing clinician personally furnished the service. We recognize that some services of certain MIPS eligible clinicians may be billed as incident to the services of others. However, given that the billing clinician provides the requisite supervision and bills for the service as if it was personally furnished, we do not believe “incident to” billing interferes with appropriate attribution of services. If this is a concern for certain MIPS eligible clinicians, we believe billing practices could be adjusted such that services are billed by the individual MIPS eligible clinician who provides the service.

    Comment: A commenter expressed concern that attributing care to a single professional or group for costs could cause compartmentalization of care.

    Response: The cost measures that are used in MIPS aim to measure how a particular clinician or group affects a patient's costs, both directly and indirectly. We have aimed to design a program that encourages more consideration of the costs of care associated with patients even after other clinicians become involved, so the measures require that the clinicians who are most significantly responsible for a patient's care, as measured by Medicare allowed amounts, assume accountability for it. We believe this system will encourage more coordination of care and consideration of cost.

    Comment: A commenter opposed the inclusion of transitional care management within the list of codes used to attribute the total per capita cost measure, noting that these codes are often used by specialists who may not have overall responsibility for care.

    Response: We believe that those clinicians who are billing for transitional care management are providing significant services that reflect oversight for a patient. In some cases, the clinician providing transitional care management is different from the one providing primary care but in other cases it is the same individual. We believe that our attribution method of assigning patients to the clinician who provides the plurality of primary care services (which includes many services other than transitional care management) is the best method to attribute the total per capita cost measure. This change is consistent with the attribution methods that are used in the Shared Savings Program.

    After considering the comments, we are finalizing our proposal to use modified attribution methods from the VM for the total per capita cost measure and the MSPB measure. Specifically, we are finalizing the removal of the skilled nursing facility codes (CPT codes 99304 through 99318, when billed with POS 31) from, and the addition of the transitional care management codes (CPT codes 99495 and 99496) and the chronic care management code (CPT code 99490) to, the list of primary care services used to attribute the total per capita cost measure. We believe that these changes to the attribution methodology allow us to better identify the clinician or group and the extent of accountability for total per capita cost.

    (ii) Reliability

    We seek to ensure that MIPS eligible clinicians and groups are measured reliably; therefore, we intend to use the 0.4 reliability threshold currently applied to measures under the VM to evaluate their reliability. A 0.4 reliability threshold standard means that the majority of MIPS eligible clinicians and groups who meet the case minimum required for scoring under a measure have measure reliability scores that exceed 0.4. We generally consider reliability levels between 0.4 and 0.7 to indicate “moderate” reliability and levels above 0.7 to indicate “high” reliability. In cases where we have considered high participation in the applicable program to be an important programmatic objective, such as the Hospital VBP Program, we have selected this 0.4 moderate reliability standard. We believe this standard ensures moderate reliability, but does not substantially limit participation.
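
    As a simple illustration of the reliability bands described above, the following sketch classifies a measure-level reliability score. It is not CMS methodology; the cut points mirror the 0.4 and 0.7 values discussed in the text, and the function name is an assumption.

```python
# Illustrative sketch only; not CMS methodology. Reflects the reliability bands
# discussed above: below 0.4 is low, 0.4 to 0.7 is moderate, above 0.7 is high.

def reliability_band(reliability: float) -> str:
    """Classify a measure reliability score using the bands described in the text."""
    if reliability >= 0.7:
        return "high"
    if reliability >= 0.4:
        return "moderate"
    return "low"

print(reliability_band(0.74))  # "high"
print(reliability_band(0.55))  # "moderate"
print(reliability_band(0.30))  # "low" (below the 0.4 threshold used for scoring)
```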

    To ensure sufficient measure reliability for the cost performance category in MIPS, we also proposed at § 414.1380(b)(2)(ii) to use the minimum of 20 cases for the total per capita cost measure (81 FR 28386), the same case minimum that is being used for the VM. An analysis in the CY 2016 PFS final rule (80 FR 71282) confirms that this measure has high average reliability for solo practitioners (0.74) as well as for groups with more than 10 professionals (0.80).

    In the CY 2016 PFS final rule, we finalized a policy that increases the minimum cases for the MSPB measure from 20 to 125 cases (80 FR 71295 through 71296) due to reliability concerns with the measure including the specialty adjustment. That said, we recognize that a case size increase of this nature also may limit the ability of MIPS eligible clinicians to be scored on the MSPB measure, and have been evaluating alternative measure calculation strategies for potential inclusion under MIPS that better balance participation, accuracy, and reliability. As a result of this, we proposed two modifications to the MSPB measure.

    The first technical change we proposed was to remove the specialty adjustment from the MSPB measure's calculation. As currently reported on the QRURs, the MSPB measure is risk adjusted to ensure that these comparisons account for case-mix differences between practitioners' patient populations and the national average. It is unclear whether the current additional adjustment for physician specialty improves the accounting for case-mix differences for acute care patients, and thus the adjustment may not be needed; as our analysis below indicates, reliability for the measure improves when the adjustment is removed.

    The second technical change we proposed was to modify the cost ratio used within the MSPB equation to evaluate the difference between observed and expected episode cost at the episode level before comparing the two at the individual or group level. In other words, rather than summing all of the observed costs and dividing by the sum of all the expected costs, we would take the observed to expected cost ratio for each MSPB episode assigned to the MIPS eligible clinician or group and take the average of the assigned ratios. As we did previously, we would take the average ratio for the MIPS eligible clinician or group and multiply it by the average of observed costs across all episodes nationally, in order to convert a ratio to a dollar amount.
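
    To make the calculation change concrete, the following sketch contrasts the prior ratio-of-sums approach with the finalized mean-of-episode-ratios approach described above. It is not CMS code; the variable names, toy episode data, and national average figure are assumptions for illustration.

```python
# Illustrative sketch of the two MSPB aggregation approaches described above; not CMS code.
# Each tuple is (observed_cost, expected_cost) for one MSPB episode attributed to a
# MIPS eligible clinician or group.

episodes = [(22000.0, 20000.0), (9000.0, 10000.0), (31000.0, 28000.0)]

# Prior approach: sum observed costs and divide by summed expected costs.
ratio_of_sums = sum(o for o, _ in episodes) / sum(e for _, e in episodes)

# Finalized approach: take the observed-to-expected ratio per episode, then average.
mean_of_ratios = sum(o / e for o, e in episodes) / len(episodes)

# In both cases, the ratio is converted to a dollar amount by multiplying by the
# national average observed episode cost (a hypothetical value here).
national_average_observed_cost = 21000.0
mspb_amount = mean_of_ratios * national_average_observed_cost

print(round(ratio_of_sums, 4), round(mean_of_ratios, 4), round(mspb_amount, 2))
```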

    Our analysis, which is based on all Medicare Part A and B claims data for beneficiaries discharged from an acute inpatient hospital between January 1, 2013 and December 1, 2013, indicates that these two changes would improve the MSPB measure's ability to calculate costs and the accuracy with which it can be used to make clinician-level performance comparisons. We also believe that these changes would help ensure the MSPB measure can be applied to a greater number of MIPS eligible clinicians while still maintaining its status as a reliable measure. More specifically, our analysis indicated that after making these changes to the MSPB measure's calculations, the MSPB measure meets the desired 0.4 reliability threshold used in the VM for over 88 percent of all TINs with a 20-case minimum, including solo practitioners. While this percentage is lower than our current policy for the VM (where virtually all TINs with 125 or more episodes have moderate reliability), setting the case minimum at 20 allows for an increase in participation in the MSPB measure. Therefore, we proposed to use a minimum of 20 cases for the MSPB measure (81 FR 28386). As noted previously, we consider expanded participation of MIPS eligible clinicians, particularly individual reporters, to be of great import for the purposes of transitioning to MIPS and believe that this justifies a slight decrease of the percentage of TINs meeting the reliability threshold.

    We welcomed public comment on these proposals.

    The following is a summary of the comments we received regarding our proposals to use a 0.4 reliability threshold and a minimum of 20 cases for the total per capita cost measure.

    Comment: Many commenters expressed concern with the proposed 0.4 reliability threshold for cost measures. Many commenters suggested that only measures with high reliability (over 0.7 or 0.8) be used within the program.

    Response: We believe that measures with a reliability of 0.4 with a minimum attributed case size of 20 meet the standards for being included as cost measures within the MIPS program. We aim to measure cost for as many clinicians as possible and limiting measures to reliability of 0.7 or 0.8 would result in few individual clinicians with attributed cost measures. In addition, a 0.4 reliability threshold ensures moderate reliability for most MIPS eligible clinicians or group practices that are being measured on cost.

    We will finalize our reliability threshold of 0.4 but will continue to work to develop measures and improve specifications to ensure the highest level of reliability feasible within the cost measures in the MIPS program. We did not receive any specific comments on our proposal to use a minimum of 20 cases for the total per capita cost measure. We are finalizing at § 414.1380(b)(2)(ii) that a MIPS eligible clinician must meet the minimum case volume specified by CMS to be scored on a cost measure. Therefore, a MIPS eligible clinician must have a minimum of 20 cases to be scored on the total per capita cost measure.

    The following is a summary of the comments we received regarding our proposal to modify the case minimum for the MSPB, the proposal to remove the specialty adjustment from the MSPB measure's calculation, and the proposal to modify the cost ratio used within the MSPB equation.

    Comment: Several comments opposed the 20 case minimum for the MSPB measure, noting that CMS had previously increased the minimum to 125 within the VM program and that the 20 case minimum did not meet our 0.4 reliability standard.

    Response: We understand the concerns of the commenters. We would like to reiterate that the proposed adjustments to the MSPB measure improve its reliability at 20 cases. As stated in the proposed rule, these changes result in the measure meeting 0.4 reliability for over 88 percent of TINs with at least 20 attributed cases, including solo practitioners. In MIPS, however, we must assess reliability at the individual clinician level as well as the TIN level because clinicians may choose to be assessed as individuals or as part of a group in the MIPS program. Therefore, we reran the reliability analysis for the proposed MSPB measure using 2015 data to assess the impact at the TIN/NPI level. Table 6 summarizes the results for different case volumes. This analysis indicates that only 77 percent of individual TIN/NPIs have 0.4 reliability at a 20 case volume. Therefore, we will increase the minimum case volume to 35 cases, at which 90 percent of individual TIN/NPIs and 97 percent of attributed TINs meet the 0.4 reliability threshold.

    Table 6—Proposed MSPB Reliability With TIN/NPI Attribution

    Reliability of revised MSPB measure using TIN/NPI attribution | Minimum 20 cases (%) | Minimum 30 cases (%) | Minimum 35 cases (%)
    Percent of TIN/NPIs with 0.4 reliability at different minimum case volume requirements | 77 | 86 | 90
    Percent of TINs with 0.4 reliability at different minimum case volume requirements | 90 | 95 | 97
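
    As an illustration of how the Table 6 figures inform the choice of case minimum, the following sketch selects the smallest case minimum at which at least 90 percent of TIN/NPIs meet the 0.4 reliability threshold. It is not CMS methodology; the data structure and target value are taken from the table and discussion above for illustration only.

```python
# Illustrative sketch using the Table 6 figures above; not CMS methodology.
# Percent of TIN/NPIs meeting the 0.4 reliability threshold at each candidate
# minimum case volume for the revised MSPB measure.

tin_npi_reliability_pct = {20: 77, 30: 86, 35: 90}

def smallest_case_minimum(pct_by_minimum, target_pct=90):
    """Return the smallest case minimum whose TIN/NPI reliability share meets the target."""
    qualifying = [minimum for minimum, pct in pct_by_minimum.items() if pct >= target_pct]
    return min(qualifying) if qualifying else None

print(smallest_case_minimum(tin_npi_reliability_pct))  # -> 35, the finalized MSPB case minimum
```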

    Comment: Several comments supported the removal of specialty adjustment from the MSPB measure, noting that in some cases certain specialties may have higher spending that is not appropriate based on the condition of the patient. Several other commenters opposed the removal of the specialty adjustment from the MSPB measure because it would disadvantage those specialists who care for the sickest patients and not recognize the differences in the types of patients seen by different specialties. Some commenters opposed the change in the calculation of observed to expected ratio at the episode level rather than the clinician or group level.

    Response: The MSPB measure includes not only risk adjustment to capture the clinical conditions of the patients in the period prior to the index admission, but also includes risk adjustment that reflects the clinical presentation based on the index MS-DRG. We believe that including the index MS-DRG helps to identify a pool of patients either receiving a procedure or admitted for a particular medical condition and the HCC risk adjustment helps to adjust for comorbidities which may suggest that a clinician is treating patients who are sicker than most within that pool. Since there is less variation in the specialties caring for a particular type of MS-DRG, adding specialty adjustment reduces reliability. We will continue to analyze all cost measures to ensure they include the proper risk adjustment and meet our reliability threshold.

    We are finalizing at § 414.1380(b)(2)(ii) that a MIPS eligible clinician must meet the minimum case volume specified by CMS to be scored on a cost measure. Following our consideration of the comments, we are not finalizing our proposal of a minimum case volume of 20 for the MSPB measure. Instead, we are finalizing a minimum case volume of 35 for the MSPB. We are also adopting our proposals to not adjust the MSPB measure by specialty and to calculate observed to expected ratio at an episode level. We will continue to analyze the measure to ensure reliability.

    (b) Episode-Based Measures Proposed for the MIPS Cost Performance Category

    As noted in the previous section, we proposed to calculate several episode-based measures for inclusion in the cost performance category. Groups have received feedback on their performance on episode-based measures through the Supplemental Quality and Resource Use Report (sQRUR), which are issued as part of the Physician Feedback Program under section 1848(n) of the Act; however, these measures have not been used for payment adjustments through the VM. Several stakeholders expressed in the MIPS and APMs RFI the desire to transition to episode-based measures and away from the general total per capita cost measures used in the VM. Therefore, in lieu of using the total per capita cost measures for populations with specific conditions that are used for the VM, we proposed episode-based measures for a variety of conditions and procedures that are high cost, have high variability in resource use, or are for high impact conditions. In addition, as these measures are payment standardized and risk adjusted, we believe they meet the statutory requirements for appropriate measures of cost as defined in section 1848(p)(3) of the Act because the methodology eliminates the effects of geographic adjustments in payment rates and takes into account risk factors.

    We also reiterated that while we transition to using episode-based measures for payment adjustments, we will continue to engage stakeholders through the process specified in section 1848(r)(2) of the Act to refine and improve the episodes moving forward.

    As noted earlier, we have provided performance information on episode-based measures to MIPS eligible clinicians through the sQRURs, which are released in the fall. The sQRURs provide groups and solo practitioners with information to evaluate their resource utilization on conditions and procedures that are costly and prevalent in the Medicare FFS population. To accomplish this goal, various episodes are defined and attributed to one or more groups or solo practitioners most responsible for the patient's care. The episode-based measures include Medicare Part A and Part B payments for services determined to be related to the triggering condition or procedure. The payments included are standardized to remove the effect of differences in geographic adjustments in payment rates and incentive payment programs and they are risk adjusted for the clinical condition of beneficiaries. Although the sQRURs provide detailed information on these care episodes, the calculations are not used to determine a TIN's VM payment adjustment and are only used to provide feedback.

    We proposed to include in the cost performance category several clinical condition and treatment episode-based measures that have been reported in the sQRUR or were included in the list of the episode groups developed under section 1848(n)(9)(A) of the Act published on the CMS Web site: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/MACRA-MIPS-and-APMs.html. The identified episode-based measures have been tested and previously published. Tables 4 (81 FR 28202-28206) and 5 (81 FR 28207) of the proposed rule listed the 41 clinical condition and treatment episode-based measures proposed for the CY 2017 performance period, as well as whether the episodes have previously been reported in a sQRUR.

    While we proposed the measures listed in Tables 4 and 5 of the proposed rule for the cost performance category, we stated in the proposed rule that we were uncertain as to how many of these measures we would ultimately include in the final rule with comment period. As these measures have never been used for payment purposes, we indicated that we may choose to specify a subset of these measures in the final rule with comment period. We requested public comment on which of the measures listed in Tables 4 and 5 of the proposed rule to include in the final rule with comment period. In addition to considering public comments, we intended to consider the number of MIPS eligible clinicians able to be measured, the episode's impact on Medicare Part A and Part B spending, and whether the measure has been reported through sQRUR. In addition, while we do not believe specialty adjustment is necessary for the episode-based measures, we will continue to explore this further given the diversity of episodes. We solicited comment on whether we should specialty adjust the episode-based measures.

    The following is a summary of the comments we received regarding the episode-based measures proposed for the cost performance category for the CY 2017 performance period.

    Comment: Several comments supported the inclusion of episode-based measures because they more closely track a clinician's influence on the care provided than the total per capita cost measures.

    Response: Episode-based measures are an important component of the overall measurement of cost and we are finalizing a subset of episode-based measures.

    Comment: Several commenters supported the eventual inclusion of episode-based measures in the cost performance category but opposed the inclusion of these measures in the transition year of MIPS because clinicians are not familiar with them yet and have not had the opportunity to receive feedback on them. Commenters recommended a more transparent process in the development of episode groups. Others recommended that only those measures included in the sQRUR in previous years be included in the transition year of the MIPS program.

    Response: We agree with the commenters. Even though we have reduced the weight of the cost performance category to 0 percent for the first MIPS payment year, we believe that clinicians would benefit from more exposure to these episode-based measures and how they might be scored before they are included in the MIPS final score. While 14 of the episode-based measures we proposed were included in the 2014 sQRUR, a number of them have never been included in the VM or a sQRUR. Therefore, as discussed below, we are finalizing a subset of the proposed episode-based measures that have been included in the 2014 sQRUR and meet our reliability threshold of 0.4. We note that we selected episodes from the 2014 sQRUR because these measures have been included in 2 years of sQRURs (2014 and 2015), which provides clinicians an opportunity for initial feedback before the MIPS performance period begins, although that feedback does not contain any scoring information or reflect the updated attribution changes.

    In addition, we intend to provide performance feedback to clinicians on additional episode-based measures that we are not finalizing for inclusion in the MIPS cost performance category for the CY 2017 performance period but may want to consider proposing for inclusion in the MIPS cost performance category in the future. Section 1848(q)(12)(A)(i) of the Act requires that we provide timely confidential feedback to MIPS eligible clinicians on their performance under the cost performance category. While the feedback on these additional episode-based measures would be for informational purposes only, we believe it will aid in MIPS eligible clinicians' ability to understand the measures and the attribution rules and methods that we use to calculate performance on these measures, which may be helpful in the event that we decide to propose the measures for the MIPS cost performance category in future rulemaking.

    Comment: Some commenters suggested that 41 episode-based measures was too many and that a smaller number should be used in the program. Another commenter suggested that CMS establish a maximum number of episode-based measures that may be attributed to a particular clinician or group.

    Response: We believe that a large number of episode-based measures is needed to capture the diversity of clinicians in the MIPS program, as many clinicians may only have a small number of attributable episodes. While some large multispecialty groups may have a large number of episodes attributed, we believe this reflects the diversity of care that they are providing to patients. However, for the CY 2017 performance period, we are finalizing a reduced set of measures which are reliable at the group (TIN) and individual (TIN/NPI) level and where feedback has been previously presented to eligible clinicians or groups.

    As discussed in the preceding response, we also intend to provide performance feedback to MIPS eligible clinicians under section 1848(q)(12)(A)(i) of the Act on additional episode-based measures for informational purposes only.

    Comment: A commenter suggested that CMS provide technical assistance to specialty societies and other organizations in order to develop episode groups for specialty care.

    Response: Episode development under section 1848(r) of the Act will continue. This process includes extensive communication with technical experts in the field and stakeholders but does not provide for technical assistance to organizations.

    Comment: A commenter opposed the use of the episode-based measures for upper respiratory infection (measure 33) and deep vein thrombosis of extremity (measure 34) because they are likely to occur in high risk patients.

    Response: For the CY 2017 performance period, we are only finalizing episode-based measures that have been previously reported in the 2014 supplemental QRUR and meet our reliability thresholds. Upper respiratory infection and deep vein thrombosis of extremity were not included in the 2014 sQRUR; therefore, we are not finalizing these measures for the MIPS CY 2017 performance period. We intend to develop episode-based measures that cover patients with various levels of risk. We believe that an advantage of episode-based measures is that they define a patient population that will be similar even if all of the patients are high risk. In addition, episode-based measures are risk adjusted in the same fashion as the other cost measures that were proposed to be included within the program.

    Comment: Several commenters suggested development of future episode-based measures because many clinicians do not have episode-based measures for patients they treat.

    Response: We intend to continue to develop episode-based measures that cover more procedures and conditions and invite stakeholder feedback on additional conditions or procedures.

    Comment: A commenter expressed concern that ICD-9-CM codes are insufficient to be used within episode-based measures because they do not contain enough clinical data to predict costs. Others suggested that the measures should be updated to use ICD-10-CM codes.

    Response: ICD-9-CM was used for diagnosis coding for Medicare claims until October 1, 2015. Because ICD-9-CM codes were required for billing for all services, we believe they are the richest source of clinical data available to allow us to specify and risk adjust episode-based measures. The transition from ICD-9-CM to ICD-10-CM took place on October 1, 2015. There are many more diagnosis codes available in ICD-10-CM than in ICD-9-CM which reflect increased specificity in some clinical areas. In preparation for the transition to ICD-10-CM, a crosswalk of diagnosis codes from ICD-9-CM to ICD-10-CM was created and this was used for the transition of coverage policies and other documents that include diagnosis codes. We expect to use this crosswalk as a baseline for our transition work but understand that there may be changes that need to be made to accommodate the different use of diagnostic codes with ICD-10-CM.

    Comment: A commenter suggested that CMS consider episode-based measures for chronic conditions that do not have an inpatient trigger, so that costs for chronic conditions can be assessed under the cost performance category even if an inpatient stay does not occur.

    Response: We will continue to work to develop episode-based measures and our work is not limited to those conditions that include an inpatient stay.

    Comment: A commenter stated that it is difficult to attribute an episode-based measure to a clinician providing a diagnostic service.

    Response: One feature of episode-based measures is that they allow for the creation of a list of related services for a particular condition or procedure. This means that episode-based measures could be triggered on the basis of a diagnostic service if experts could develop a list of services that are typically related. Among our ten finalized episode-based measures is one triggered on the basis of colonoscopy, which is a diagnostic service.

    Comment: A commenter indicated that future development of episode-based measures should not be limited to Methods A and B as described in the rule.

    Response: We generally believe that a consistent approach to cost measure development is easier to understand and fair to all clinicians. However, we recognize that cost measure development is ongoing and will continue to investigate methods to best capture the contributions of individual clinicians and groups to cost and will consider other methods if they are necessary.

    Comment: Several commenters expressed concern with particular elements of the technical specifications of certain episode-based measures. One commenter requested that pneumatic compression devices be added as a relevant service to the VTE episode-based measure, that patient-activated event recorders be removed from the list of relevant services for the heart failure (chronic) episode-based measure, and that AV node ablation be removed from the list of relevant services for the Atrial Fibrillation/Flutter (Chronic) episode-based measure, along with other recommendations.

    Response: As we mentioned, we want to use episode-based measures that meet our reliability threshold and for which we have provided feedback through the 2014 sQRUR. We invite continued feedback on the episode-based measures as they are created and refined through the process outlined in section 1848(r) of the Act. However, we are not modifying the specifications for any of the episodes that we are finalizing in this rule.

    Comment: A commenter recommended that the osteoporosis and rheumatoid arthritis episode-based measures not be included in cost measurement in the transition year because they have not been thoroughly vetted.

    Response: Although all episode-based measures were created with clinical input, the measures identified by the commenter were not included in the 2014 sQRUR, so individual clinicians may be unfamiliar with them before the MIPS performance period. Therefore, we are not finalizing these episode-based measures for the CY 2017 performance period.

    Comment: A commenter expressed concern with the use of HCC scores to risk adjust episode-based measures because HCC scores have been shown to under-predict costs for high cost patients or for patients in rural areas.

    Response: We are unaware of other risk adjustment methodologies that are more appropriate than HCC for Medicare beneficiaries. We will continue to conduct analyses to ensure that risk adjustment is as precise as possible to ensure that clinicians are not inappropriately disadvantaged because of the use of this risk adjustment methodology.

    Comment: A commenter supported the use of procedure codes to trigger the episode-based measure for cataract surgery as opposed to the licensure status of the physician. Another commenter expressed concern with the episode-based measure for cataract surgery because it did not reflect previous discussions with CMS regarding this episode-based measure.

    Response: We will continue to work to improve the specifications of the episode-based measures. We are finalizing the episode-based measure for Lens and Cataract Procedures because it meets our reliability threshold and was included in the 2014 sQRUR. We offered stakeholders the opportunity to review measure specifications for all of the episode-based measures under development in a posting in February 2016 and invite continued feedback on the specifications going forward.

    Comment: A commenter recommended that CMS provide more guidance on the implications of billing for a trigger code for the lens and cataract episode-based measure and including a modifier for preoperative management only (modifier 56) or postoperative management only (modifier 55).

    Response: Clinicians who bill for services with modifiers that indicate that they did not actually perform the index procedure will not be attributed for the costs associated with that episode.

    We appreciate the enthusiasm expressed by many commenters for the development of episode-based measures and their more nuanced focus on particular types of care. We also understand the concerns expressed regarding lack of familiarity with the episode-based measures. For this reason, we are modifying our proposal and finalizing for the CY 2017 performance period only 10 episode-based measures from the proposed rule. All of these measures were included in the 2014 sQRUR and meet the reliability threshold of 0.4 for the majority of clinicians and groups at a case minimum of 20. Table 7 lists the episode-based measures that are finalized for the CY 2017 performance period along with their reliability, which we calculated using data from the 2015 sQRUR both when the measure is attributed at the TIN level, as in the VM, and when it is attributed at the TIN/NPI level, as we will do under the MIPS program. The measures listed in Table 7 will be used (along with the total per capita cost measure and the MSPB measure finalized in this rule) to determine the cost performance category score. As we noted earlier, the weight of the cost performance category is 0 percent for the 2019 MIPS payment year; therefore, the performance category score will provide information to MIPS eligible clinicians, but performance will not affect the final score for the 2019 MIPS payment year.

    Table 7—Episode-Based Measures Finalized for the CY 2017 Performance Period

    For each measure, the entry lists the method type and measure number from Table 4 (Method A) and Table 5 (Method B) of the proposed rule,* the episode name and description, whether the episode was included in the 2014 sQRUR, the percent of TINs meeting the 0.4 reliability threshold, and the percent of TIN/NPIs meeting the 0.4 reliability threshold.

    A/1 Mastectomy (formerly titled “Mastectomy for Breast Cancer”)—Mastectomy is triggered by a patient's claim with any of the interventions assigned as Mastectomy trigger codes. Mastectomy can be triggered by either an ICD procedure code or CPT codes in any setting (e.g., hospital, surgical center). Included in 2014 sQRUR: Yes. TINs meeting 0.4 reliability threshold: 99.6%. TIN/NPIs meeting 0.4 reliability threshold: 100.0%.

    A/5 Aortic/Mitral Valve Surgery—Open heart valve surgery (Valve) episode is triggered by a patient claim with any of the Valve trigger codes. Included in 2014 sQRUR: Yes. TINs meeting 0.4 reliability threshold: 93.9%. TIN/NPIs meeting 0.4 reliability threshold: 92.0%.

    A/8 Coronary Artery Bypass Graft (CABG)—Coronary Artery Bypass Grafting (CABG) episode is triggered by an inpatient hospital claim with any of the CABG trigger codes for coronary bypass. CABG generally is limited to facilities with a Cardiac Care Unit (CCU); hence there are no episodes or comparisons in other settings. Included in 2014 sQRUR: Yes. TINs meeting 0.4 reliability threshold: 96.9%. TIN/NPIs meeting 0.4 reliability threshold: 94.8%.

    A/24 Hip/Femur Fracture or Dislocation Treatment, Inpatient (IP)-Based—Fracture/dislocation of hip/femur (HipFxTx) episode is triggered by a patient claim with any of the interventions assigned as HipFxTx trigger codes. HipFxTx can be triggered by either an ICD procedure code or CPT codes in any setting. Included in 2014 sQRUR: Yes. TINs meeting 0.4 reliability threshold: 88.9%. TIN/NPIs meeting 0.4 reliability threshold: 76.1%.

    B/1 Cholecystectomy and Common Duct Exploration—Episodes are triggered by the presence of a trigger CPT/HCPCS code on a claim when the code is the highest cost service for a patient on a given day. Medical condition episodes are triggered by IP stays with specified MS-DRGs. Included in 2014 sQRUR: Yes. TINs meeting 0.4 reliability threshold: 89.6%. TIN/NPIs meeting 0.4 reliability threshold: 81.8%.

    B/2 Colonoscopy and Biopsy—Episodes are triggered by the presence of a trigger CPT/HCPCS code on a claim when the code is the highest cost service for a patient on a given day. Medical condition episodes are triggered by IP stays with specified MS-DRGs. Included in 2014 sQRUR: Yes. TINs meeting 0.4 reliability threshold: 100.0%. TIN/NPIs meeting 0.4 reliability threshold: 99.9%.

    B/3 Transurethral Resection of the Prostate (TURP) for Benign Prostatic Hyperplasia—For procedural episodes, treatment services are defined as the services attributable to the MIPS eligible clinician or group managing the patient's care for the episode's health condition. Included in 2014 sQRUR: Yes. TINs meeting 0.4 reliability threshold: 95.2%. TIN/NPIs meeting 0.4 reliability threshold: 95.5%.

    B/5 Lens and Cataract Procedures—Procedural episodes are triggered by the presence of a trigger CPT/HCPCS code on a claim when the code is the highest cost service for a patient on a given day. Included in 2014 sQRUR: Yes. TINs meeting 0.4 reliability threshold: 99.7%. TIN/NPIs meeting 0.4 reliability threshold: 99.5%.

    B/6 Hip Replacement or Repair—Procedural episodes are triggered by the presence of a trigger CPT/HCPCS code on a claim when the code is the highest cost service for a patient on a given day. Included in 2014 sQRUR: Yes. TINs meeting 0.4 reliability threshold: 97.8%. TIN/NPIs meeting 0.4 reliability threshold: 97.7%.

    B/7 Knee Arthroplasty (Replacement)—Procedural episodes are triggered by the presence of a trigger CPT/HCPCS code on a claim when the code is the highest cost service for a patient on a given day. Included in 2014 sQRUR: Yes. TINs meeting 0.4 reliability threshold: 99.9%. TIN/NPIs meeting 0.4 reliability threshold: 99.8%.

    * Table 4 of the proposed rule is located at 81 FR 28202-28206; Table 5 of the proposed rule is located at 81 FR 28207.

    In addition, for informational purposes, we intend to provide feedback to MIPS eligible clinicians under section 1848(q)(12)(A)(i) of the Act on the additional episode-based measures which may be introduced into MIPS in future years. We believe it will aid in MIPS eligible clinicians' ability to understand the measures and the attribution rules and methods that we use to calculate performance on these measures, which may be helpful in the event that we decide to propose the measures for the MIPS cost performance category in future rulemaking.

    (i) Attribution

    For the episode-based measures listed in Tables 4 and 5 of the proposed rule (81 FR 28202), we proposed to use the attribution logic used in the 2014 sQRUR (full description available at https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/PhysicianFeedbackProgram/Downloads/Detailed-Methods-2014SupplementalQRURs.pdf), with modifications to adjust for whether performance is being assessed at an individual or group level. Please refer to 81 FR 28208 of the proposed rule for our proposals to address attribution differences for individuals and groups. For purposes of this section, we will use the general term MIPS eligible clinicians to indicate attribution for individuals or groups.

    Acute condition episode-based measures would be attributed to all MIPS eligible clinicians that bill at least 30 percent of inpatient evaluation and management (IP E&M) visits during the initial treatment, or “trigger event,” that opened the episode. E&M visits during the episode's trigger event represent services directly related to the management of the beneficiary's acute condition episode. MIPS eligible clinicians that bill at least 30 percent of IP E&M visits are therefore likely to have been responsible for the oversight of care for the beneficiary during the episode. It is possible for more than one MIPS eligible clinician to be attributed a single episode using this rule. If an acute condition episode has no IP E&M claims during the episode, then that episode is not attributed to any MIPS eligible clinician.
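    The attribution rule described above is, in effect, a simple threshold test over the IP E&M visits billed during the episode's trigger event. The following sketch (written in Python purely for illustration; the claim layout, identifiers, and function names are simplifying assumptions, not CMS specifications) shows one way such a test could be applied to a single acute condition episode:

```python
from collections import Counter

def attribute_acute_episode(ip_em_billers, threshold=0.30):
    """Illustrative sketch: attribute an acute condition episode to every
    clinician who billed at least `threshold` (30 percent) of the IP E&M
    visits during the episode's trigger event. `ip_em_billers` is assumed to
    hold one clinician identifier per IP E&M visit billed during the trigger
    event; this data shape is an assumption, not a CMS file format."""
    if not ip_em_billers:
        # No IP E&M claims during the episode: not attributed to anyone.
        return []
    counts = Counter(ip_em_billers)
    total = len(ip_em_billers)
    # More than one clinician can meet the 30 percent threshold, so the
    # episode may be attributed to several MIPS eligible clinicians.
    return [clinician for clinician, n in counts.items() if n / total >= threshold]

# Example: clinician A bills 4 of 10 IP E&M visits, B bills 3, and C bills 3.
visits = ["A"] * 4 + ["B"] * 3 + ["C"] * 3
print(attribute_acute_episode(visits))  # ['A', 'B', 'C'] -- each meets 30 percent
```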

    Procedural episodes would be attributed to all MIPS eligible clinicians that bill a Medicare Part B claim with a trigger code during the trigger event of the episode. For inpatient procedural episodes, the trigger event is defined as the IP stay that triggered the episode plus the day before the admission to the IP hospital. For outpatient procedural episodes constructed using Method A, the trigger event is defined as the day of the triggering claim plus the day before and 2 days after the trigger date. For outpatient procedural episodes constructed using Method B, the trigger event is defined as only the day of the triggering claim. Any Medicare Part B claim or line during the trigger event with the episode's triggering procedure code is used for attribution. If more than one MIPS eligible clinician bills a triggering claim during the trigger event, the episode is attributed to each of the MIPS eligible clinicians. If co-surgeons bill the triggering claim, the episode is attributed to each MIPS eligible clinician. If only an assistant surgeon bills the triggering claim, the episode is attributed to the assistant surgeon or group. If an episode does not have a concurrent Medicare Part B claim with a trigger code for the episode, then that episode is not attributed to any MIPS eligible clinician.
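    Because the trigger-event window differs by episode construction, the attribution step for procedural episodes can be thought of in two parts: compute the window, then collect every clinician billing a trigger code inside it. The sketch below (Python, for illustration only; the dates, procedure codes, and data shapes are placeholders assumed for the example, not CMS specifications) follows the windows described above:

```python
from datetime import date, timedelta

def trigger_event_window(method, trigger_date=None, ip_admit=None, ip_discharge=None):
    """Illustrative sketch of the trigger-event windows described above:
    inpatient procedural episodes use the IP stay plus the day before
    admission; Method A outpatient episodes use the day before through two
    days after the trigger date; Method B outpatient episodes use only the
    trigger date."""
    if method == "inpatient":
        return (ip_admit - timedelta(days=1), ip_discharge)
    if method == "A":
        return (trigger_date - timedelta(days=1), trigger_date + timedelta(days=2))
    if method == "B":
        return (trigger_date, trigger_date)
    raise ValueError("unknown episode construction method")

def attribute_procedural_episode(part_b_claims, trigger_codes, window):
    """Attribute the episode to every clinician billing a Part B claim with a
    trigger code during the window; claims are assumed to be
    (clinician, procedure_code, service_date) tuples."""
    start, end = window
    return sorted({clin for clin, code, service_date in part_b_claims
                   if code in trigger_codes and start <= service_date <= end})

# Example: a Method B outpatient episode triggered on June 1 (codes are placeholders).
window = trigger_event_window("B", trigger_date=date(2017, 6, 1))
claims = [("surgeon1", "TRIGGER", date(2017, 6, 1)),   # co-surgeon bills the trigger code
          ("surgeon2", "TRIGGER", date(2017, 6, 1)),   # co-surgeon bills the trigger code
          ("clinicianX", "OTHER", date(2017, 6, 1))]   # no trigger code -> not attributed
print(attribute_procedural_episode(claims, {"TRIGGER"}, window))  # ['surgeon1', 'surgeon2']
```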

    The following is a summary of the comments we received regarding our attribution methodology for the episode-based measures:

    Comment: A commenter suggested that episodes be attributed to the clinician with the highest Part B charges.

    Response: The episode-based measures each have different attribution methodologies. We believe that always attributing episodes to the clinician with the highest Part B charges is not necessarily appropriate in all cases, particularly in cases in which a procedure may trigger the beginning of an episode.

    Comment: A commenter suggested that until the patient relationship codes are developed, clinicians should be allowed to select the cost measures that apply to them.

    Response: We believe that the cost measures included in this final rule with comment period are constructed in such a way as to ensure that clinicians or groups are measured on cost for the patients for whom they provide care. For example, a clinician or group would be required to furnish 20 coronary artery bypass grafts to be attributed an episode-based measure for that procedure. We believe that requiring a cardiothoracic surgeon or group to select this cost measure through some kind of administrative mechanism would not add value to the program and could potentially increase administrative burden for the clinician.

    Comment: A commenter suggested that CMS employ Method B, which examines episodes independently, rather than Method A, in which cost is assigned to episodes on the basis of hierarchical rules, in developing episode-based measures for podiatrists.

    Response: We continue to work on the development of episode groups and are evaluating the use of Method A and Method B within that context for a variety of medical conditions and procedures. Episode-based measures using both methods are included in this final rule with comment period.

    Comment: A commenter expressed concern that certain specialties such as hospital-based physicians and palliative care physicians will have a large number of episode-based measures attributed to them.

    Response: We believe that the episode-based measures represent a wide variety of procedural and medical episodes. For the transition year, we have limited the number of episode-based measures and reduced the weight of the cost performance category but recognize that some clinicians may have more attributed episode-based measures than others based on the nature of the patients that they treat. However, it is important to note that being attributed additional cost measures does not change the weight of the cost performance category in the final score, which is set at 0 percent for the 2019 MIPS payment year. In addition, having more attributed episode-based measures does not inherently disadvantage a clinician, particularly if the episodes are lower in cost compared to the cost for similar episodes with similarly complex patients. We intend to continue to develop episode-based measures to ensure that all specialties of medicine may be measured on cost in a similar fashion.

    Following our consideration of the comments, we will finalize the attribution methodology for episode-based measures as proposed.

    (ii) Reliability

    To ensure moderate reliability, we proposed at § 414.1380(b)(2)(ii) to use a minimum of 20 cases for all episode-based measures listed in Tables 4 and 5 of the proposed rule (81 FR 28386). We proposed not to include any measures that do not have an average reliability of at least 0.4 (moderate reliability) at 20 episodes.

    Comment: Several commenters opposed the inclusion of episode-based measures with a reliability of 0.4 at a 20 minimum case size and recommended that only measures with a 0.7 reliability at a 20 minimum case size be included.

    Response: We believe that episode-based measures with a reliability of 0.4 at a minimum attributed case size of 20 meet the standards for inclusion as cost measures within the MIPS program. We aim to measure cost for as many clinicians as possible, and limiting episode-based measures to a reliability of 0.7 or 0.8 at a minimum case size of 20 would result in few individual clinicians being attributed enough patients under these measures, particularly since the episode-based measures represent only a subset of the patients seen by an individual clinician or group.

    Please see section II.E.5.e.(3)(b) for additional discussion of using 0.4 as the reliability threshold. All of the episode-based measures that we are finalizing are reliable at this threshold for 20 cases at both the individual and group level. We are finalizing at § 414.1380(b)(2)(ii) that a MIPS eligible clinician must meet the minimum case volume specified by CMS to be scored on a cost measure. After considering the comments, we are finalizing our proposal that a MIPS eligible clinician must have a minimum of 20 cases to be scored on an episode-based measure.
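    As a small illustration of how the finalized 20-case minimum operates, the following sketch (Python, illustrative only; the measure names and the input data shape are assumptions, not CMS formats) filters a clinician's attributed episode counts down to the measures that would actually be scored:

```python
def scorable_measures(attributed_episode_counts, case_minimum=20):
    """Illustrative sketch: keep only the episode-based measures for which a
    MIPS eligible clinician has at least the finalized minimum of 20
    attributed cases. The mapping of measure name to attributed episode count
    is an assumed data shape, not a CMS file format."""
    return {measure: count for measure, count in attributed_episode_counts.items()
            if count >= case_minimum}

counts = {"Knee Arthroplasty (Replacement)": 23, "Colonoscopy and Biopsy": 12}
print(scorable_measures(counts))  # only the knee arthroplasty measure meets the minimum
```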

    (c) Attribution for Individuals and Groups

    In the VM and sQRUR, all cost measurement was attributed at the solo practitioner and group level, as identified by the TIN. In MIPS, however, we proposed to evaluate performance at the individual and group levels. For MIPS eligible clinicians whose performance is being assessed individually across the other MIPS performance categories, we proposed to attribute cost measures using the TIN/NPI rather than the TIN. Attribution at the TIN/NPI level allows individual MIPS eligible clinicians, as identified by their TIN/NPI, to be measured based on cases that are specific to their practices, rather than being measured on all the cases attributed to the group TIN. For MIPS eligible clinicians that choose to have their performance assessed as a group across the other MIPS performance categories, we proposed to attribute cost measures at the TIN level (the group TIN under which they report). The logic for attribution would be similar whether attributing to the TIN/NPI level or the TIN level. As an alternative proposal, we solicited comment on whether MIPS eligible clinicians that choose to have their performance assessed as a group should first be attributed at the individual TIN/NPI level and then have all cases assigned to the individual TIN/NPIs attributed to the group under which they bill. This alternative would apply one consistent methodology to both groups and individuals, compared to having a methodology that assigns cases using TIN/NPI for assessment at the individual level and another that assigns cases using only TIN for assessment at the group level. For example, the general attribution logic for the MSPB is to assign the MSPB measure based on the plurality of claims (as measured by allowed charges) for Medicare Part B services rendered during an inpatient hospitalization that is an index admission for the MSPB measure. Our proposed approach would determine “plurality of claims” separately for individuals and groups. For individuals, we would assign the MSPB measure using the “plurality of claims” by TIN/NPI, but for groups we would determine the “plurality of claims” by TIN. The alternative proposal, in contrast, would determine the “plurality of claims” by TIN/NPI for both groups and individuals. However, for individuals, only the MSPB measure attributed to the TIN/NPI would be evaluated, while for groups the MSPB measure attributed to any TIN/NPI billing under the TIN would be evaluated.
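    To make the distinction concrete, the following sketch (Python, for illustration only; the claim tuples and identifiers are assumptions, not CMS data formats) shows the alternative approach just described, in which the "plurality of claims" for an MSPB episode is determined at the TIN/NPI level and, for group assessment, the episode then counts for the TIN under which that TIN/NPI bills:

```python
from collections import defaultdict

def attribute_mspb_episode(part_b_claims):
    """Illustrative sketch of the alternative approach: sum Medicare Part B
    allowed charges during the index admission by TIN/NPI and attribute the
    MSPB episode to the TIN/NPI with the plurality of allowed charges. Claim
    tuples of (tin, npi, allowed_charges) are an assumed data shape."""
    charges = defaultdict(float)
    for tin, npi, allowed in part_b_claims:
        charges[(tin, npi)] += allowed
    return max(charges, key=charges.get)

claims = [("TIN1", "NPI-A", 600.0), ("TIN1", "NPI-B", 250.0), ("TIN2", "NPI-C", 400.0)]
print(attribute_mspb_episode(claims))  # ('TIN1', 'NPI-A') has the plurality of charges
# For individual assessment, only NPI-A's episode would be evaluated; for group
# assessment, the episode would be evaluated for TIN1 because NPI-A bills under TIN1.
```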

    We requested comment on this proposal and alternative considered.

    Comment: A commenter supported the proposal to attribute cost measures at the TIN level for groups that select to be assessed on other MIPS performance categories as a group.

    Response: We believe both attribution methodologies are valid, but as described below, we are finalizing the alternative proposal.

    Comment: Several commenters supported the alternative proposal of attributing cost for all clinicians at the TIN/NPI level, regardless of whether they participate in MIPS as a group or as individual clinicians.

    Response: We believe having a consistent attribution methodology for individual and group reporting would be beneficial and simpler for clinicians to understand. Therefore, we are finalizing the alternative proposal.

    To reduce complexity in the MIPS program, we are finalizing the alternative proposal to attribute cost measures for all clinicians at the TIN/NPI level. For those groups that participate in group reporting in other MIPS performance categories, their cost performance category scores will be determined by aggregating the scores of the individual clinicians within the TIN. For example, if one surgeon in a TIN billed 11 codes that would trigger the knee arthroplasty episode-based measure and another surgeon in that TIN billed 12 such codes, neither surgeon would have enough cases to be measured individually. However, if the TIN elects group reporting, the TIN would be assessed on the 23 combined cases. The sketch following this paragraph replicates this example.
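    The following sketch (Python, illustrative only; the data shape is an assumption, not a CMS file format) shows how cases attributed to individual TIN/NPIs combine at the TIN level when the group elects group reporting, turning 11 and 12 individually insufficient cases into 23 scorable cases for the TIN:

```python
def group_case_count(tin, episode_counts_by_tin_npi):
    """Illustrative sketch: sum the episodes attributed to every TIN/NPI
    billing under the given TIN. The mapping of (tin, npi) to attributed
    episode count is an assumed data shape, not a CMS file format."""
    return sum(count for (t, _npi), count in episode_counts_by_tin_npi.items() if t == tin)

counts = {("TIN1", "SurgeonA"): 11, ("TIN1", "SurgeonB"): 12}
combined = group_case_count("TIN1", counts)
print(combined, combined >= 20)  # 23 True -- the TIN meets the 20-case minimum,
                                 # though neither surgeon does individually
```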

    (d) Application of Measures to Non-Patient Facing MIPS Eligible Clinicians

    Section 101(c) of the MACRA added section 1848(q)(2)(C)(iv) to the Act, which requires the Secretary to give consideration to the circumstances of professional types who typically furnish services without patient facing interaction (non-patient facing) when determining the application of measures and activities. In addition, this section allows the Secretary to apply alternative measures or activities to non-patient facing MIPS eligible clinicians that fulfill the goals of a performance category. Section 101(c) of the MACRA also added section 1848(q)(5)(F) to the Act, which allows the Secretary to re-weight MIPS performance categories if there are not sufficient measures and activities applicable and available to each type of MIPS eligible clinician involved.

    For the 2017 MIPS performance period, we did not propose any alternative measures for non-patient facing MIPS eligible clinicians or groups. This means that non-patient facing MIPS eligible clinicians or groups may not be attributed any cost measures, because the measures are generally attributed to clinicians who have patient-facing encounters. We therefore anticipate that, similar to MIPS eligible clinicians or groups that do not meet the required case minimum for any cost measures, many non-patient facing MIPS eligible clinicians may not have sufficient measures and activities available to report and would not be scored on the cost performance category under MIPS. We refer readers to section II.E.6.b.2. of this final rule with comment period, where we discuss how we would address performance category weighting for MIPS eligible clinicians or groups who do not receive a performance category score for a given performance category. We also intend to work with non-patient facing MIPS eligible clinicians and specialty societies to propose alternative cost measures for non-patient facing MIPS eligible clinicians and groups under MIPS in future years. Lastly, we solicited comment on how best to incorporate appropriate alternative cost measures for all MIPS eligible clinician types, including non-patient facing MIPS eligible clinicians.

    The following is a summary of the comments we received.

    Comment: Many commenters supported a policy to not attribute cost measures to those clinicians and groups that meet the requirements of non-patient facing MIPS eligible clinicians because these clinicians would have little influence on cost, particularly with regard to the measures that were proposed for the transition year of the program.

    Response: We did not propose to preclude non-patient facing MIPS eligible clinicians from receiving a score for the cost performance category. Rather, based on the cost measures that we proposed for the CY 2017 performance period, we did not anticipate many non-patient facing MIPS eligible clinicians would have sufficient case volume as the measures are generally attributed to clinicians who have patient-facing encounters. If non-patient facing MIPS eligible clinicians do in fact have sufficient case volume, however, they would be attributed measures in accordance with the attribution methodology and would receive a score for the cost performance category.

    Comment: Many commenters recommended that CMS work to develop alternative cost measures that could be used for non-patient facing clinicians or groups in the future.

    Response: We will continue to investigate all methods to measure cost, including methods for those clinicians who provide services that are not included in the existing cost measure attribution criteria.

    We appreciate the comments received and will attribute cost measures to non-patient facing MIPS eligible clinicians who have sufficient case volume, in accordance with the attribution methodology.

    (e) Additional System Measures

    Section 1848(q)(2)(C)(ii) of the Act, as added by section 101(c) of MACRA provides that the Secretary may use measures used for a payment system other than for physicians, such as measures for inpatient hospitals, for purposes of the quality and cost performance categories of MIPS. The Secretary, however, may not use measures for hospital outpatient departments, except in the case of items and services furnished by emergency physicians, radiologists, and anesthesiologists.

    We intend to align any facility-based MIPS measure decision across the quality and cost performance categories to ensure consistent policies for MIPS in future years. We refer readers back to section II.E.5.b.(5) of this rule which discusses our strategy and solicits comments related to this provision. Below is our response to comments related to measuring the cost of facility-based clinicians.

    Comment: Some commenters supported the consideration of inpatient hospital cost measures for MIPS but requested that CMS create a methodology with an appropriate attribution methodology that could account for clinicians practicing in multiple facilities. Some commenters supported the inclusion of inpatient hospital cost measures as an option for certain clinicians and others opposed their inclusion in MIPS.

    Response: We will take these comments into consideration if we propose system measures in future rulemaking.

    Comment: Many commenters expressed concern that the total per capita cost measure, MSPB, and episode-based measures would not capture cost associated with their particular specialty or field of medicine, such as anesthesiology. Commenters encouraged CMS to develop measures that would capture cost covering the unique contributions of all specialties.

    Response: We will continue to develop more episode-based measures and other mechanisms of measuring cost that will cover a broader group of medical specialists in the coming years and will plan to work with stakeholders to identify gaps in cost measurement.

    We appreciate the comments and will take all comments into consideration as we develop future cost measures.

    (4) Future Modifications to Cost Performance Category

    In the future, we intend to consider how best to incorporate Medicare Part D costs into the cost performance category, as described in section 1848(q)(2)(B)(ii) of the Act. We solicited public comments on how we should incorporate those costs under MIPS for future years. We also intend to continue developing and refining episode-based measures for purposes of cost performance category measure calculations.

    The following is a summary of the comments we received regarding the inclusion of Medicare Part D costs within cost measurement.

    Comment: Several commenters expressed support for the inclusion of Part D costs in future cost measures, some citing the contribution of prescribing behavior to overall health costs and noting that including costs from other categories without including oral prescription drugs presents an incomplete picture.

    Response: To the extent possible, we will investigate ways to account for the cost of drugs under Medicare Part D in the cost measures in the future, as feasible and applicable, in accordance with section 1848(q)(2)(B)(ii) of the Act.

    Comment: Several commenters opposed the inclusion of Part D drug costs in future cost measures, noting that certain physicians prescribe more expensive drugs than others and that there are technical challenges to price standardizing Part D data; others questioned the appropriateness of the data. Other commenters stated that including Part D costs could create improper incentives to prescribe services based on the part of Medicare that covers the service.

    Response: Drugs covered under Medicare Part D are a growing component of the overall costs for Medicare beneficiaries and one over which clinicians have significant influence. However, not all patients covered by Medicare Parts A and B are covered under a Medicare Part D plan, which presents a technical challenge in assessing the cost of drugs for all patients. In addition, Medicare Part D is provided through private plans, which independently negotiate payment rates for certain drugs or drugs within a particular class. We will continue to investigate methods to incorporate this important component of healthcare spending into our cost measures in the future.

    Comment: Several commenters suggested removing the costs associated with drugs covered under Medicare Part B from cost in addition to those covered under Medicare Part D.

    Response: We believe that clinicians play a key role in prescribing drugs for their patients and that the costs associated with drugs can be a significant contributor to the overall cost of caring for a patient. We do not believe it would be appropriate to remove the cost of Medicare Part B drugs from the cost measures.

    We appreciate the comments and will take all comments into consideration as we develop future cost measures.

    f. Improvement Activities Performance Category

    (1) Background

    (a) General Overview and Strategy

    The improvement activities performance category focuses on one of our MIPS strategic goals: to use a patient-centered approach to program development that leads to better, smarter, and healthier care. We believe improving the health of all Americans can be accomplished by developing incentives and policies that drive improved patient health outcomes. Improvement activities emphasize activities that have a proven association with better health outcomes. The improvement activities performance category also addresses another MIPS strategic goal, which is to design incentives that drive movement toward delivery system reform principles and participation in APMs. A further MIPS strategic goal we are striving to achieve is to establish policies that can be scaled in future years as the bar for improvement rises. Under the improvement activities performance category, we proposed baseline requirements that will become more stringent in future years and that lay the groundwork for expansion toward continuous improvement over time.

    (b) The MACRA Requirements

    Section 1848(q)(2)(C)(v)(III) of the Act defines an improvement activity as an activity that relevant eligible clinician organizations and other relevant stakeholders identify as improving clinical practice or care delivery, and that the Secretary determines, when effectively executed, is likely to result in improved outcomes. Section 1848(q)(2)(B)(iii) of the Act requires the Secretary to specify improvement activities under subcategories for the performance period, which must include at least the subcategories specified in section 1848(q)(2)(B)(iii)(I) through (VI) of the Act, and in doing so to give consideration to the circumstances of small practices, and practices located in rural areas and geographic health professional shortage areas (HPSAs).

    Section 1848(q)(2)(C)(iv) of the Act generally requires the Secretary to give consideration to the circumstances of non-patient facing MIPS eligible clinicians or groups and allows the Secretary, to the extent feasible and appropriate, to apply alternative measures and activities to such MIPS eligible clinicians and groups.

    Section 1848(q)(2)(C)(v) of the Act required the Secretary to use a request for information (RFI) to solicit recommendations from stakeholders to identify improvement activities and specify criteria for such improvement activities, and provides that the Secretary may contract with entities to assist in identifying activities, specifying criteria for the activities, and determining whether MIPS eligible clinicians or groups meet the criteria set. In the MIPS and APMs RFI, we requested recommendations to identify activities and specify criteria for activities. In addition, we requested details on how data should be submitted, the number of activities, how performance should be measured, and what considerations should be made for small or rural practices. There were two overarching themes from the comments that we received in the MIPS and APMs RFI. First, the majority of the comments indicated that all subcategories should be weighted equally and that MIPS eligible clinicians or groups should be allowed to select from whichever subcategories are most applicable to them during the performance period. Second, commenters supported inclusion of a diverse set of activities that are meaningful for individual MIPS eligible clinicians or groups. We have reviewed all of the comments that we received and took these recommendations into consideration while developing the proposed improvement activities policies.

    We are finalizing at § 414.1305 the definition of improvement activities, as proposed, to mean an activity that relevant MIPS eligible clinician organizations and other relevant stakeholders identify as improving clinical practice or care delivery and that the Secretary determines, when effectively executed, is likely to result in improved outcomes.

    (2) Contribution to Final Score

    Section 1848(q)(5)(E)(i)(III) of the Act specifies that the improvement activities performance category will account for 15 percent of the final score, subject to the Secretary's authority to assign different scoring weights under section 1848(q)(5)(F) of the Act. Therefore, we proposed at § 414.1355, that the improvement activities performance category would account for 15 percent of the final score.

    Section 1848(q)(5)(C)(i) of the Act specifies that a MIPS eligible clinician or group that is certified as a patient-centered medical home or comparable specialty practice, as determined by the Secretary, must be given the highest potential score for the improvement activities performance category for the performance period. For a further description of APMs that have a certified patient centered-medical home designation, we refer readers to the proposed rule (81 FR 28234).

    A patient-centered medical home would be recognized if it is a nationally recognized accredited patient-centered medical home, a Medicaid Medical Home Model, or a Medical Home Model. We would also recognize the NCQA Patient-Centered Specialty Recognition, which qualifies as a comparable specialty practice. Nationally recognized accredited patient-centered medical homes are recognized if they are accredited by: (1) The Accreditation Association for Ambulatory Health Care; (2) the National Committee for Quality Assurance (NCQA) patient-centered medical home recognition; (3) The Joint Commission Designation; or (4) the Utilization Review Accreditation Commission (URAC).18 We refer readers to the proposed rule (81 FR 28330) for further description of the Medicaid Medical Home Model or Medical Home Model. The criteria for being an organization that accredits medical homes are that the organization must be national in scope and must have evidence of being used by a large number of medical organizations as the model for their patient-centered medical home. We solicited comment on our proposal for determining which practices would qualify as patient-centered medical homes. We also note that practices may receive a patient-centered medical home designation at the practice level, and that individual TINs may be composed of both undesignated practices and practices that have received a designation as a patient-centered medical home (for example, only one practice site has received patient-centered medical home designation in a TIN that includes five practice sites). For MIPS eligible clinicians who choose to report at the group level, reporting is required at the TIN level. We solicited comment on how to provide credit for patient-centered medical home designations in the calculation of the improvement activities performance category score for groups when the designation only applies to a portion of the TIN (for example, to only one practice site in a TIN that is comprised of five practice sites).

    18 Gans, D. (2014). A Comparison of the National Patient-Centered Medical Home Accreditation and Recognition Programs. Medical Group Management Association, www.mgma.com.

    Section 1848(q)(5)(C)(ii) of the Act provides that MIPS eligible clinicians or groups who are participating in an APM (as defined in section 1833(z)(3)(C) of the Act) for a performance period must earn at least one half of the highest potential score for the improvement activities performance category for the performance period. For further description of improvement activities and the APM scoring standard for MIPS, we refer readers to the proposed rule (81 FR 28234). For all other MIPS eligible clinicians or groups, we refer readers to the scoring requirements for MIPS eligible clinicians and groups in the proposed rule (81 FR 28247).

    Section 1848(q)(5)(C)(iii) of the Act provides that a MIPS eligible clinician or group must not be required to perform activities in each improvement activities subcategory or participate in an APM to achieve the highest potential score for the improvement activities performance category.

    Section 1848(q)(5)(B)(i) of the Act requires the Secretary to treat a MIPS eligible clinician or group that fails to report on an applicable measure or activity that is required to be reported as achieving the lowest potential score applicable to that measure or activity.

    The following is a summary of the comments we received regarding the improvement activities performance category contribution to the final score.

    Comment: Several commenters expressed concern about the burden of complying with this performance category in addition to the other three performance categories and some recommended that the performance category not be included in the MIPS program, believing it would be difficult to report. Some commenters requested that we remove the improvement activities performance category completely.

    Response: We recognize that there are challenges associated with understanding how to comply with a new program such as MIPS and the improvement activities performance category. However, the statute requires that the improvement activities performance category be included in the Quality Payment Program. After consideration of the comments expressing concern about reporting burden, we are reducing the number of required activities from the proposed maximum of six medium-weighted activities, three high-weighted activities, or some combination thereof for full credit, to a requirement of no more than four medium-weighted activities, two high-weighted activities, or a combination of medium- and high-weighted activities in which each selected high-weighted activity reduces the number of medium-weighted activities required. We believe this approach remains aligned with the statute in measuring performance in this performance category. We will continue to provide education and outreach to provide further clarity.

    Comment: Some commenters expressed concern that improvement activities would not be successfully implemented because of the low percentage that this category was given in the final MIPS scoring methodology. The commenters suggested increasing the improvement activities performance category's percentage of the final score. Another commenter recommended reducing the quality performance category's weighting from 50 percent to 35 percent and increasing the improvement activities performance category from 15 percent to 30 percent for 2017, indicating this would increase the likelihood that more MIPS eligible clinicians would fully participate.

    Response: We believe we have appropriately weighted the improvement activities performance category within the final score, particularly given the statutory direction under section 1848(q)(5)(E)(i)(III) of the Act that the category account for 15 percent of the final score, subject to the Secretary's authority to assign different scoring weights under certain circumstances. However, we intend to monitor the effects of category weighting under MIPS over time.

    Comment: Several commenters requested that CMS develop a definition of a Medical Home or certified patient-centered medical home that includes practices that are designated by private health plans such as Blue Cross and Blue Shield of Michigan (BCBSM) patient-centered medical home program. Some commenters also requested including regional patient-centered medical home recognition programs that are free to practices. Other commenters requested that CMS consider MIPS eligible clinicians or groups that have completed a certification program that has a demonstrated track record of support by non-Medicare payers, state Medicaid programs, employers, or others in a region or state. Some commenters requested that CMS consider other significant rigorous certification programs or state-level certification. One example of a state-level certification program, provided by a commenter, was the Oregon patient-centered medical home certification. One commenter suggested recognizing certified patient-centered medical homes that may not have sought national certification. The same commenter also suggested providing a MIPS eligible clinician or group full credit as a certified patient-centered medical home if they were performing the advanced primary care functions reflected in the Joint Principles of the Patient-Centered Medical Home and the five key functions of the Comprehensive Primary Care Initiative. One commenter suggested that any MIPS eligible clinician or group that has received a certification from any entity that meets the necessary criteria as a patient-centered medical home accreditor should receive full credit. One commenter requested that “The Compliance Team”, a privately held, for-profit, healthcare accreditation organization that receives deeming authority from the CMS as an accreditation organization, be included as part of the accreditation organizations for patient-centered medical home. This commenter also stated that the exclusion of “The Compliance Team” from the final list of approved administering organizations would create artificial barriers to entry that will likely drive up the cost of accreditation because all the small practices and clinics that already went through accreditation with The Compliance Team would need to go through a second accreditation. One commenter requested that Behavioral Health Home Certification also be recognized for full credit as a patient-centered medical home. Some commenters further stated that CMS should ensure that the activities and standards included in such accredited programs are meaningful, incorporate private sector best practices, and directly improve patient outcomes. Other commenters agreed with using the accreditation programs that were proposed in the rule to qualify patient-centered medical home models under the improvement activities performance category for full credit, including recommending that practices undergo regular re-accreditation by the proposed bodies to ensure they are continuing to provide care in a manner consistent with being a medical home. In addition, some commenters recommended the Quality Payment Program develop a way to reward practices that may not have reached patient-centered medical home recognition but are in the process of transformation.

    Response: We were not previously aware of additional certifying bodies, beyond those cited in the proposal, that are used by a large number of medical organizations and that adhere to similar national guidelines for certifying a patient-centered medical home, meaning they are national in scope. Consistent with the credit provided for practices that have been certified as a patient-centered medical home or comparable specialty practice by the certifying bodies included in the proposal, we will also recognize practices that have received accreditation or certification from other certifying bodies that have certified a large number of medical organizations and that meet national guidelines. We further define “large” to mean that the certifying organization must have certified 500 or more member practices. In addition to the 500-or-more-practice threshold for certifying bodies, the second criterion requires a practice to: (1) Have a personal clinician in a team-based practice; (2) have a whole-person orientation; (3) provide coordinated or integrated care; (4) focus on quality and safety; and (5) provide enhanced access (Gans, 2014). The Oregon Patient-Centered Primary Care Home Program described by commenters and the Blue Cross Blue Shield of Michigan (BCBSM) program are two examples of programs that would meet these two criteria.

    While we believe that some of the advanced primary care functions in the Joint Principles of the Patient-Centered Medical Home and key functions of the Comprehensive Primary Care Initiative might count as improvement activities, the statute maintains a distinction between being an actual certified patient-centered medical home and performing some of the functions of one. Therefore, performing these functions alone would not qualify for full credit. Other certifications that are not for patient-centered medical homes or comparable specialty practices would also not qualify automatically for the highest score.

    MIPS eligible clinicians and groups that receive certification from other accreditation organizations that certify for a patient-centered medical home or comparable specialty practice, including accreditation organizations that receive deeming authority from CMS, such as The Compliance Team, would receive full credit as long as those accrediting bodies meet the two criteria. These two criteria are: (1) The accrediting body must have certified 500 or more member practices as a patient-centered medical home or comparable practice; and (2) it must meet national guidelines.

    Comment: Some commenters agreed with CMS regarding not requiring that a MIPS eligible clinician select from any specific subcategories of activities. However, the commenters opposed CMS' suggestion to eventually calculate performance in this performance category, not only because of the technical complexity of doing so, but also because it would ignore the overall intent of the performance category, which is to recognize engagement in innovative activities that contribute to quality rather than actual performance. One commenter encouraged CMS to reconsider the improvement activities and scoring criteria in future years to incentivize physician improvement.

    Response: We will take this suggestion into account as we continue implementation and refinement of the MIPS program in the future. While we recognize that it may be technically complex at this time to calculate performance within the improvement activities performance category, our expectation is that such a process would become simpler over time as MIPS eligible clinicians become accustomed to implementing improvement activities. For further discussion of improvement activities scoring as a component of the final score, we refer readers to section II.E.6.a.(4) in this final rule with comment period.

    After consideration of the comments regarding the contribution to the final score, we are finalizing at § 414.1355 that the improvement activities performance category will account for 15 percent of the final score. We are not finalizing our policy of recognizing only practices that have received nationally recognized accreditation or certification as a patient-centered medical home. Rather, we are finalizing at § 414.1380 an expanded definition of what is acceptable for recognition as a certified patient-centered medical home or comparable specialty practice. We are recognizing a MIPS eligible clinician or group as being a certified patient-centered medical home or comparable specialty practice if they have achieved certification or accreditation as such from a national program, or from a regional or state program, private payer, or other body that administers patient-centered medical home accreditation and certifies 500 or more practices for patient-centered medical home accreditation or comparable specialty practice certification. Examples of nationally recognized accredited patient-centered medical homes are: (1) The Accreditation Association for Ambulatory Health Care; (2) the National Committee for Quality Assurance (NCQA) Patient-Centered Medical Home; (3) The Joint Commission Designation; or (4) the Utilization Review Accreditation Commission (URAC). We are finalizing that the criteria for being a nationally recognized accredited patient-centered medical home are that it must be national in scope and must have evidence of being used by a large number of medical organizations as the model for their patient-centered medical home. We will provide full credit for the improvement activities performance category to a MIPS eligible clinician or group that has received such certification or accreditation from any of these national programs or from a qualifying regional, state, private payer, or other body.
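    For illustration only, the finalized recognition policy described above can be summarized as a simple decision rule; the sketch below (Python; the dictionary keys are assumptions for the example, not CMS data elements) is one way to express it:

```python
def qualifies_for_full_ia_credit(certification):
    """Illustrative sketch of the finalized recognition policy: a practice
    certified or accredited as a patient-centered medical home or comparable
    specialty practice receives full improvement activities credit if the
    certifying program is national in scope, or if it is a regional, state,
    private payer, or other body that certifies 500 or more practices."""
    if not certification.get("is_pcmh_or_comparable_specialty_practice"):
        return False
    if certification.get("national_program"):
        return True
    return certification.get("practices_certified", 0) >= 500

print(qualifies_for_full_ia_credit({
    "is_pcmh_or_comparable_specialty_practice": True,
    "national_program": False,
    "practices_certified": 650,   # e.g., a state or private payer program
}))  # True
```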

    (3) Improvement Activities Data Submission Criteria

    (a) Submission Mechanisms

    For the purpose of submitting data under the improvement activities performance category, we proposed in the proposed rule (81 FR 28181) to allow for submission of data for the improvement activities performance category using the qualified registry, EHR, QCDR, CMS Web Interface, and attestation data submission mechanisms. If technically feasible, we would use administrative claims data to supplement the improvement activities submission. Regardless of the data submission method, all MIPS eligible clinicians or groups must select activities from the improvement activities inventory provided in Table H in the Appendix to this final rule with comment period. We believe the proposed data submission methods would allow for greater access and ease in submitting data, as well as consistency throughout the MIPS program.

    In addition, we proposed at § 414.1360, that for the transition year only, all MIPS eligible clinicians or groups, or third party intermediaries such as health IT intermediaries, QCDRs and qualified registries that submit on behalf of a MIPS eligible clinician or group, must designate a yes/no response for activities on the improvement activities inventory. In the case where a MIPS eligible clinician or group is using a health IT intermediary, QCDR, or qualified registry for their data submission, the MIPS eligible clinician or group will certify all improvement activities have been performed and the health IT intermediary, QCDR, or qualified registry will submit on their behalf. An agreement between a MIPS eligible clinician or group and a health IT vendor, QCDR, or qualified registry for data submission for improvement activities as well as other performance data submitted outside of the improvement activities performance category could be contained in a single agreement, minimizing the burden on the MIPS eligible clinician or group. See the proposed rule (81 FR 28281) for additional details.

    We proposed to use the administrative claims method, if technically feasible, only to supplement improvement activities performance category submissions. For example, if technically feasible, MIPS eligible clinicians or groups, using the telehealth modifier GT, could get automatic credit for this activity. We requested comments on these proposals.

    The following is a summary of the comments we received regarding the improvement activities performance category data submission criteria and mechanisms.

    Comment: Some commenters noted that the definitions of some improvement activities (such as those that require patient-specific factors) are impossible for CEHRTs to determine from the data in the EHR. The commenters believed these will create usability problems and complicate clinical workflows.

    Response: If an EHR vendor or developer cannot complete system changes to support usability and simplify clinical workflows for some improvement activities, a MIPS eligible clinician or group may use another calculation method to support their attestation. For example, a MIPS eligible clinician or group may use their CEHRT to generate a list of patients for whom they have prescribed an antidiabetic agent (for example, insulin) and use an associated documented record with reference to an individual glycemic treatment goal that includes patient-specific factors to identify the completion rate through manual or other IT-assisted calculation. We also encourage MIPS eligible clinicians to work with their CEHRT system developers to ensure that their systems consider the MIPS eligible clinician's workflow needs. In addition, we note that ONC recently released an EHR Contract Guide, available at https://www.healthit.gov/sites/default/files/EHR_Contracts_Untangled.pdf, which is designed to help clinicians and developers work together to consider key issues related to product needs and product operation.

    Comment: One commenter opposed separate processes for attesting improvement activities when those activities are related to advancing care information or quality measures performance categories.

    Response: For the transition year of MIPS, we have concluded that we must require separate processes for attestation in separate performance categories, including cases where improvement activities are related to the advancing care information or quality performance categories. We note, however, that we have revised the policy for the transition year of MIPS so that designated activities in Table H in the Appendix to this final rule with comment period may also qualify for a 10 percent bonus in the advancing care information performance category; we refer readers to sections II.E.5.g. and II.E.5.g.(5) of this final rule with comment period and to Table H for more information on this bonus. MIPS eligible clinicians should factor this bonus into their selection of activities to meet the requirements of the improvement activities performance category as well. We intend to continue examining how to streamline reporting requirements under MIPS in the future.

    Comment: Several commenters requested additional clarification on how MIPS eligible clinicians would report as a group for the improvement activities performance category. The commenters provided suggestions for how CMS should provide credit for those groups, including suggestions: (1) That CMS not require all MIPS eligible clinicians in a group to report all activities in the transition year; (2) that CMS specify how many clinicians in each group must participate in each activity to achieve points for the entire group; and (3) that CMS give credit to the entire group if at least part of a group is performing an activity.

    Response: All MIPS eligible clinicians reporting as a group will receive the same score for the improvement activities performance category. If at least one clinician within the group is performing the activity for a continuous 90 days in the performance period, the group may report on that activity.
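    As a hedged illustration of the group-reporting rule just described (Python; the period tuples are an assumed data shape, not a CMS reporting format), a group could check its eligibility to report an activity as follows:

```python
from datetime import date

def group_may_report_activity(clinician_activity_periods, minimum_days=90):
    """Illustrative sketch: a group may report an improvement activity if at
    least one clinician in the group performed it for a continuous 90 days
    during the performance period. Each entry is an assumed
    (start_date, end_date) tuple for one clinician's continuous performance."""
    return any((end - start).days + 1 >= minimum_days
               for start, end in clinician_activity_periods)

periods = [(date(2017, 2, 1), date(2017, 3, 15)),   # 43 days -- not enough
           (date(2017, 4, 1), date(2017, 6, 30))]   # 91 days -- qualifies
print(group_may_report_activity(periods))  # True
```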

    Comment: A few commenters expressed concern with the improvement activities performance category noting that it will be necessary to have timely specifications on how to satisfy the qualifications for each activity to earn improvement activities credit.

    Response: The improvement activities inventory in Table H in the Appendix to this final rule with comment period includes a description of the specifications for how to satisfy the qualifications for each activity in order to earn points.

    Comment: Some commenters requested clarification on the submission mechanisms for the improvement activities performance category. The commenters believed that some activities require use of a third party vendor while others do not, and stated that it is unclear how MIPS eligible clinicians will report on activities within the improvement activities performance category.

    Response: The submission mechanisms for the improvement activities performance category are listed in section II.E.5.f.(3) of this final rule with comment period. We agree that some activities, such as those that reference the use of a QCDR, may require a third party vendor. Many other activities, however, do not require third party vendor engagement, or suggest that use of certified EHR technology is one way, but not the only way, to support a given activity. We will provide technical assistance through subregulatory guidance to further explain how MIPS eligible clinicians will report on activities within the improvement activities performance category. This subregulatory guidance will also include how MIPS eligible clinicians will be able to identify a specific activity through some type of numbering or other similar convention.

    Comment: One commenter requested clarification that if an EHR vendor reports the improvement activities performance category for a MIPS eligible clinician or group, the vendor is simply reporting the MIPS eligible clinician's or group's attestation of success, not attesting to that success.

    Response: The commenter is correct in that the vendor simply reports the MIPS eligible clinician's or group's attestation, on behalf of the clinician or group, that the improvement activities were performed. The vendor is not attesting on its own behalf that the improvement activities were performed.

    Comment: Another commenter recommended allowing improvement activities to be reported via the CMS Web Interface for the transition year, rather than through a QCDR or EHR.

    Response: The CMS Web Interface is one of the data submission mechanisms available for the improvement activities performance category reporting. We have included a number of possible submission mechanisms for MIPS and recognize the need to make the attestation process as simple as possible.

    Comment: One commenter recommended that CMS provide additional clarity in the final rule with comment period on how MIPS eligible clinicians should attest if they meet part, but not all, of the entire improvement activity. In order to provide a more accurate and fair score, this commenter recommended providing more prescriptive criteria so that points may be assigned for sub-activities within each activity.

    Response: A MIPS eligible clinician must meet all requirements of the activity to receive credit for that activity. Partial satisfaction of an activity is not sufficient for receiving credit for that activity. However, many activities offer multiple options for how clinicians may successfully complete them and additional criteria for activities are already included in the improvement activities inventory.

    Comment: Some commenters supported CMS' proposed “yes/no” responses via reporting mechanisms of MIPS eligible clinicians' choice, and requested that we consider collecting more detailed responses in the future. Other commenters called on CMS to ensure that improvement activities chosen by MIPS eligible clinicians are relevant and useful for improving care in their practices. One commenter expressed reservations about attestation and requested that CMS verify that MIPS eligible clinicians perform the activities. Still others, however, called on CMS to continue allowing flexibility for MIPS eligible clinicians, including attestation options.

    Response: We will continue examining changes in the data collection process with the expectation that where applicable specification and data collection may be added on an activity by activity basis. We will also verify data through the data validation and audit process as necessary.

    Comment: One commenter recommended that the certifying boards be included as reporting agents for improvement activities.

    Response: We will take this suggestion into consideration for future rulemaking. To the extent possible, we will work with the patient-centered medical home and comparable specialty practice certifying bodies and other certification boards to verify practice status.

    Comment: One commenter recommended that CMS align improvement activities across the country to facilitate shared learning and protect against waste and inefficiency, and that CMS create a “single source” option for clinicians for reporting, measurement, benchmarking, and feedback that also counts toward the improvement activities performance category.

    Response: We will take this suggestion into consideration for future rulemaking.

    After consideration of the comments received regarding the improvement activities data submission criteria we are not finalizing the policies as proposed. Specifically, we are not finalizing the data submission method of administrative claims data to supplement the improvement activities as it is not technically feasible at this time.

    We are finalizing at § 414.1360 to allow for submission of data for the improvement activities performance category using the qualified registry, EHR, QCDR, CMS Web Interface, and attestation data submission mechanisms. Regardless of the data submission method, with the exception of MIPS APMs, all MIPS eligible clinicians or groups must select activities from the improvement activities inventory provided in Table H in the Appendix to this final rule with comment period.

    In addition, we are finalizing at § 414.1360 that for the transition year of MIPS, all MIPS eligible clinicians or groups, or third party intermediaries such as health IT vendors, QCDRs and qualified registries that submit on behalf of a MIPS eligible clinician or group, must designate a yes response for activities on the improvement activities inventory. In the case where a MIPS eligible clinician or group is using a health IT vendor, QCDR, or qualified registry for their data submission, the MIPS eligible clinician or group will certify all improvement activities have been performed and the health IT vendor, QCDR, or qualified registry will submit on their behalf.

    We are also including a designation column in the improvement activities inventory that shows which activities qualify for the advancing care information bonus finalized at § 414.1380, and we refer readers to Table H in the Appendix to this final rule with comment period.

    (b) Weighted Scoring

    While we considered both equal and differentially weighted scoring in this performance category, the statute requires a differentially weighted scoring model by requiring 100 percent of the potential score in the improvement activities performance category for patient-centered medical home participants, and a minimum 50 percent score for APM participants. For additional activities in this category, we proposed at § 414.1380 a differentially weighted model for the improvement activities performance category with two categories: Medium and high. The justification for these two weights is to provide flexible scoring due to the undefined nature of activities (that is, improvement activities standards are not nationally recognized and there is no entity for improvement activities that serves the same function as the NQF does for quality measures). Improvement activities are weighted as high based on alignment with our national public health priorities and programs such as the Quality Innovation Network-Quality Improvement Organization (QIN/QIO) or the Comprehensive Primary Care Initiative which recognizes specific activities related to expanded access and integrated behavioral health as important priorities. Programs that require performance of multiple activities such as participation in the Transforming Clinical Practice Initiative, seeing new and follow-up Medicaid patients in a timely manner in the clinician's state Medicaid Program, or an activity identified as a public health priority (such as emphasis on anticoagulation management or utilization of prescription drug monitoring programs) were weighted as high.

    The statute references certified patient-centered medical homes as achieving the highest score for the MIPS program; MIPS eligible clinicians or groups may use that as a guide to the criteria or factors that should be taken into consideration in determining whether to weight an activity medium or high. We requested comments on this proposal, including the criteria or factors we should take into consideration to determine whether to weight an activity medium or high.

    The following is a summary of the comments we received regarding weighted scoring for improvement activities.

    Comment: One commenter recommended that we establish three weighting categories for the improvement activities performance category: (1) High—30 percent; (2) Medium—20 percent; and (3) Low—10 percent. The commenter stated that this weighting allocation would allow for the development of a third category for easier improvement activities.

    Response: Generally, we received comments on the two weightings, high and medium. We believe there were no activities that merited a classification as a lower weighted activity during the MIPS transition year. However, in future years, through the annual call for activities and when more data are available on which activities are most frequently reported, we will reevaluate the applicability of these weights and potential reclassification of activities into lower weights.

    Comment: Commenters noted an inconsistency regarding the weighting of activities related to the Prescription Drug Monitoring Program (PDMP). Section II.E.5.f.(3)(b) of the proposed rule (81 FR 28261) references this as a high priority activity; however, the PDMP related activity, “Annual registration in the Prescription Drug Monitoring Program” in Table H, in the Appendix of this final rule with comment period is listed as a medium-weighted activity (81 FR 28570).

    Response: There are two PDMP activities: one with a medium weight (registering for the PDMP) and one with a high weight (utilizing the PDMP). We have added additional language to the high-weighted PDMP activity to differentiate it from the medium-weighted PDMP activity. We refer readers to Table H in the Appendix to this final rule with comment period for the additional language.

    Comment: Several commenters supported the proposed list of activities but recommended that the number of required activities be reduced and that more activities be highly weighted to reduce the reporting burden for MIPS eligible clinicians.

    Response: As discussed in section II.E.5.f.(2) of this final rule with comment period, we have reduced the number of activities that MIPS eligible clinicians are required to report to no more than four medium-weighted activities, two high-weighted activities, or any combination thereof, for a total of 40 points. We are reducing the number of activities for small practices, practices located in rural areas or geographic HPSAs, and non-patient facing MIPS eligible clinicians to no more than one high-weighted activity or two medium-weighted activities, where each activity counts for doubled weighting, to also achieve a total of 40 points.
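    To illustrate the arithmetic described in the preceding response, the following is a minimal sketch (in Python) of how the finalized point values could be applied. It assumes, consistent with the scoring discussion in this final rule with comment period, that medium-weighted activities count 10 points and high-weighted activities count 20 points, doubled for small practices, practices located in rural areas or geographic HPSAs, and non-patient facing MIPS eligible clinicians, with the performance category capped at 40 points. The function and variable names are illustrative only and are not part of the regulation text.

        # Illustrative sketch only; not part of the regulation text.
        # Assumes medium = 10 points and high = 20 points, doubled for
        # clinicians with special status (small practice, rural area,
        # geographic HPSA, or non-patient facing), capped at 40 points.
        POINTS = {"medium": 10, "high": 20}
        CATEGORY_MAX = 40

        def improvement_activities_score(weights, special_status=False):
            """Return the fraction of the highest possible category score.

            weights -- attested activity weights, e.g. ["high", "medium"]
            special_status -- True for small/rural/HPSA/non-patient facing
            """
            multiplier = 2 if special_status else 1  # doubled weighting
            total = sum(POINTS[w] * multiplier for w in weights)
            return min(total, CATEGORY_MAX) / CATEGORY_MAX

        print(improvement_activities_score(["high", "high"]))                 # 1.0
        print(improvement_activities_score(["medium"] * 4))                   # 1.0
        print(improvement_activities_score(["high"], special_status=True))    # 1.0
        print(improvement_activities_score(["medium"], special_status=True))  # 0.5

    Under these assumptions, two high-weighted activities (or four medium-weighted activities) reach the 40-point maximum for a standard practice, while one high-weighted activity or two medium-weighted activities reach it for practices receiving doubled weighting.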

    Comment: Several commenters suggested that CMS expand the number of high-weighted activities, noting that there were only 11 high-weighted activities out of 90, which may prevent MIPS eligible clinicians from reporting high-weighted improvement activities, and that the Emergency Response and Preparedness subcategory was the only subcategory without a high-weighted activity.

    Response: We are changing one existing activity in the Emergency Response and Preparedness subcategory from “Participation in domestic or international humanitarian volunteer work. MIPS eligible clinicians and groups must be registered for a minimum of 6 months as a volunteer for domestic or international humanitarian volunteer work” to “Participation in domestic or international humanitarian volunteer work. Activities that simply involve registration are not sufficient. MIPS eligible clinicians attest to domestic or international humanitarian volunteer work for a period of a continuous 60 days or greater.” We have changed this activity so that rather than requiring MIPS eligible clinicians to be registered for 6 months, we are requiring them to participate for 60 days. This change is in line with our overall new 90-day performance period policy. The 60-day participation would fall within that new 90-day window. We are also changing this existing activity from a medium to a high-weighted activity because such volunteer work is intensive, often involves travel, and entails working in challenging physical and clinical circumstances. Table H in the Appendix to this final rule with comment period reflects this revised description of the existing activity and revised weighting. We note, however, that this is a change for this transition year for the 2017 performance period only. In addition, we are changing the weight from medium to high of the one activity related to “Participating in a Rural Health Clinic (RHC), Indian Health Service Medium Management (IHS), or Federally Qualified Health Center (FQHC) in ongoing engagement activities that contribute to more formal quality reporting,” which we believe is consistent with section 1848(q)(2)(B)(iii) of the Act, which requires the Secretary to give consideration to the circumstances of practices located in rural areas and geographic HPSAs. Rural health clinics would be included in that definition for consideration of practices in rural areas. Table H in the Appendix to this final rule with comment period reflects this revised weighting.

    Comment: Some commenters recommended assigning a higher weight to QCDR-related improvement activities and QCDR functions, and one commenter recommended that use of a QCDR count for several activities.

    Response: Participating in a QCDR is not sufficient for demonstrating performance of multiple improvement activities, and we do not believe at this time that it warrants a higher weighting. In addition, QCDR participation was not proposed as a high-weighted activity because, while useful for data collection, it is neither critical for supporting certified patient-centered medical homes, which is what we considered in proposing whether an improvement activity would be a high-weighted activity, nor does it require multiple actions. We also note that while QCDR participation may not automatically confer improvement activities credit, it may put MIPS eligible clinicians in a position to report multiple improvement activities, since there are several that specifically reference QCDR participation. We ask that each MIPS eligible clinician select from the broad list of activities provided in Table H in the Appendix to this final rule with comment period in order to achieve their total score.

    Comment: Several commenters made suggestions for weighting within the improvement activities performance category. Some commenters recommended that CMS increase the number of high-weighted activities because they believed this would allow MIPS eligible clinicians to select activities that are more meaningful without sacrificing time and energy that should be spent with patients. Other commenters offered suggestions for additional activities that should be allocated a high weight under the performance category, or suggested consolidating activities under subcategories that could be afforded a high weight.

    Response: Additional reweighting, other than that included in this final rule with comment period, will not occur until a revised improvement activities inventory list is finalized through the rulemaking process. We will take this recommendation into consideration for future rulemaking.

    Comment: Some commenters made several suggestions for providing additional credit to MIPS eligible clinicians under the improvement activities performance category. For example, one commenter recommended giving automatic credit to surgeons for providing 24/7 access to MIPS eligible clinicians, groups, or care teams for advice about urgent or emergent care because surgeons provide on-call coverage and are available to medical facilities that provide after-hours access. Other commenters suggested that specialists that qualify for additional credit under the Blue Cross Blue Shield of Michigan Value-Based Reimbursement program should receive full credit for the improvement activities performance category. Additional commenters suggested that we consider providing automatic credit for the improvement activities performance category to MIPS eligible clinicians participating in a QCDR rather than requiring attestation for each individual improvement activity. One commenter recommended that ED clinicians automatically earn at least a minimum score of one-half of the highest potential score for this performance category simply for providing this access on an ongoing basis, noting that emergency clinicians are one of the few clinician specialties that truly provide 24/7 care.

    Response: We will consider these requests in future rulemaking for the MIPS program. As discussed in section II.E.5.f.(3)(c) of this final rule with comment period, we are revising our policy regarding the number of required activities for the transition year of MIPS. Specifically, we are asking MIPS eligible clinicians or groups that are not in MIPS APMs to select a reduced number of activities: Either four medium-weighted activities, or two medium-weighted and one high-weighted activity, or two high-weighted activities. MIPS eligible clinicians or groups in small practices, practices in rural areas or geographic HPSAs, and non-patient facing MIPS eligible clinicians are only required to select one medium-weighted activity for one-half of the credit for this performance category, or two medium-weighted activities or one high-weighted activity for full credit for this performance category.

    Comment: Some commenters requested that the CAHPS for MIPS survey be included as a medium-weighted improvement activity.

    Response: We disagree and believe assessing patients' experiences as they interact with the health care system is a valuable indication of merit. Please note that there are no reporting thresholds for improvement activities; this allows flexibility for MIPS eligible clinicians and groups to report surveys in a way that best reflects their efforts. Therefore, the CAHPS for MIPS survey is included as a high-weighted activity under the activity called “Participation in the Consumer Assessment of Healthcare Providers and Systems Survey (CAHPS) or other Supplemental Questionnaire Items.”

    Comment: Some commenters supported patient-centered medical homes and supported these entities receiving full credit for the improvement activities performance category. One commenter suggested that patient-centered medical homes stratify data by disparity variables and implement targeted interventions to address health disparities. Some commenters were concerned that groups of fewer than 50 would receive the highest potential score under the improvement activities performance category, while groups of more than 50 would receive partial credit. One commenter stated that larger groups have the inherent capability of assuming greater risk. One commenter also requested that the 50-group threshold be stricken from the language, allowing any group size that has acquired patient-centered medical home certification by a recognized entity to be given full credit for improvement activities, to encourage all groups, regardless of size, to pursue patient-centered medical home certification, as patient-centered medical home certification is fundamental to good practice. Additional commenters suggested including activities under the improvement activities performance category that are associated with actions conducted by a certified patient-centered medical home. One commenter recommended the following subcategories of activities for the improvement activities performance category that are aligned with elements of a patient-centered medical home: Expanded practice access, population management, care coordination, beneficiary engagement, and patient safety and practice assessment. This commenter believed that presenting the information in this way would allow clinicians to better understand the patient-centered medical home model and decide how to best deliver care under MIPS.

    Response: We note that there is no limit on the size of a practice in a patient-centered medical home for eligibility for full improvement activities credit. We refer the commenter to section II.E.8. of this final rule with comment period on APMs regarding the establishment of thresholds of less than 50 as it relates to APM incentive payments. We encourage MIPS eligible clinicians and groups to work with appropriate certifying bodies to consider that in the future. We will also look for ways to reorganize the existing improvement activities inventory and to work with clinicians and others in future years on the best way to present this list of activities.

    Comment: A few commenters supported giving 50 percent credit in the improvement activities performance category to MIPS APMs.

    Response: It is important to note that it was statutorily mandated that MIPS eligible clinicians participating in APMs receive at least one-half of the highest score in the improvement activities performance category.

    Comment: Other commenters recommended that we establish three weighting categories for the improvement activities performance category: (1) High—30 percent; (2) medium—20 percent; and (3) low—10 percent. The commenter stated that this weighting allocation would allow for the development of a third category for easier improvement activities.

    Response: We will consider other weighting options as appropriate for improvement activities in future rulemaking.

    After consideration of the comments regarding weighted scoring, we are finalizing at § 414.1380 a differentially weighted model for the improvement activities performance category with two categories: Medium and high. We refer readers to the following sections of this final rule with comment period in reference to the improvement activities performance category: Section VI.H for the modified list of high-weighted and medium-weighted activities, section II.E.5.f.(3)(c) for information on the number of activities required to achieve the highest score, section II.E.6.a.(4)(a) for information on how points will be assigned, section II.E.6.a.(4)(b) for how the highest potential score can be achieved, section II.E.6.a.(4)(c) for how we will recognize a MIPS eligible clinician or group as qualifying for the points for a certified patient-centered medical home or comparable specialty practice, and section II.E.6.a.(4)(d) for how the improvement activities performance category score will be calculated.

    (c) Submission Criteria

    We proposed at § 414.1380 to set the improvement activities submission criteria under MIPS, to achieve the highest potential score of 100 percent, at three high-weighted improvement activities (20 points each) or six medium-weighted improvement activities (10 points each), or some combination of high and medium-weighted improvement activities to achieve a total of 60 points for MIPS eligible clinicians participating as individuals or as groups (refer to Table H in the Appendix to this final rule with comment period for improvement activities and weights). MIPS eligible clinicians or groups that select fewer than the designated number of improvement activities will receive partial credit based on the weighting of the improvement activities selected. To achieve a 50 percent score, one high-weighted and one medium-weighted improvement activity or three medium-weighted improvement activities are required for these MIPS eligible clinicians or groups.

    Exceptions to the above apply for: Small practices, MIPS eligible clinicians and groups located in rural areas, MIPS eligible clinicians and groups located in geographic HPSAs, non-patient facing MIPS eligible clinicians or groups, and MIPS eligible clinicians or groups that participate in an APM or a patient-centered medical home submitting in MIPS.

    For MIPS eligible clinicians and groups that are small practices, located in rural areas or geographic HPSAs, or non-patient facing MIPS eligible clinicians or groups, to achieve the highest score of 100 percent, two improvement activities are required (either medium or high). For MIPS eligible clinicians or groups that are small practices, located in rural areas, located in HPSAs, or non-patient facing MIPS eligible clinicians or groups, in order to achieve a 50 percent score, one improvement activity is required (either medium or high).

    MIPS eligible clinicians or groups that participate in APMs are considered eligible to participate under the improvement activities performance category unless they are participating in an Advanced APM and they have met the Qualifying APM Participant (QP) thresholds or are Partial QPs that elect not to report information. A MIPS eligible clinician or group that is participating in an APM and participating under the improvement activities performance category will receive one half of the total improvement activities score just through their APM participation. These are MIPS eligible clinicians or groups that we identify as participating in APMs for MIPS and may participate under the improvement activities performance category. To achieve the total improvement activities score, such MIPS eligible clinicians or groups will need to identify that they participate in an APM and this APM will submit the eligible clinicians' improvement activities score for that specific model type.

    For further description of MIPS eligible clinicians or groups that are required to report to MIPS under the APM scoring standard and their improvement activities scoring requirements, we refer readers to the proposed rule (81 FR 28234). For all other MIPS eligible clinicians or groups participating in APMs that would report to MIPS, this section applies and we also refer readers to the scoring requirements for these MIPS eligible clinicians or groups in the proposed rule (81 FR 28237).

    Since we cannot measure variable performance within a single improvement activity, we proposed at § 414.1380 to compare the improvement activities points associated with the reported activities against the highest number of points achievable under the improvement activities performance category, which is 60 points. We proposed that the highest potential score of 100 percent can be achieved by selecting a number of activities that add up to 60 points. MIPS eligible clinicians and groups, including those participating in an APM, and all those that select activities under the improvement activities performance category, can achieve the highest potential score of 60 points by selecting activities that are equal to the 60-point maximum. We refer readers to the scoring section of the proposed rule (81 FR 28237) for additional rationale for using 60 points for the transition year of MIPS.

    If a MIPS eligible clinician or group reports only one improvement activity, we would score that activity accordingly, as 10 points for a medium-level activity or 20 points for a high-level activity. If a MIPS eligible clinician or group reports no improvement activities, then the MIPS eligible clinician or group would receive a zero score for the improvement activities performance category. We believe this proposal allows us to capture variation in the total improvement activities reported.

    In addition, we believe these are reasonable criteria for MIPS eligible clinicians or groups to accomplish within the transition year for three reasons: (1) In response to several stakeholder MIPS and APMs RFI comments, we are not recommending a minimum number of hours for performance of an activity; (2) we are offering a broad list of activities from which MIPS eligible clinicians or groups may select; and (3) also in response to MIPS and APMs RFI comments, we proposed that an activity must be performed for at least 90 days during the performance period for improvement activities credit. We intend to reassess this requirement threshold in future years. We do not believe it is appropriate to require a determined number of activities within a specific subcategory at this time. This proposal aligns with the requirements in section 1848(q)(2)(C)(iii) of the Act that states MIPS eligible clinicians or groups are not required to perform activities in each subcategory.

    Lastly, we recognize that working with a QCDR could allow a MIPS eligible clinician or group to meet the measure and activity criteria for multiple improvement activities. For the transition year of MIPS, there are several improvement activities in the inventory that incorporate QCDR participation. Each activity must be selected and achieved separately for the transition year of MIPS. A MIPS eligible clinician or group cannot receive credit for multiple activities just by selecting one activity that includes participation in a QCDR. As the improvement activities inventory expands over time we were interested in receiving comments on what restrictions, if any, should be placed around improvement activities that incorporate QCDR participation.

    The following is a summary of the comments we received regarding submission criteria.

    Comment: One commenter recommended that CMS base performance in the improvement activities performance category on participating in a number of improvement activities rather than a specific number of hours.

    Response: We would like to explain that we proposed at § 414.1380 to require MIPS eligible clinicians to submit three high-weighted improvement activities or six medium-weighted improvement activities, or some combination of high and medium-weighted improvement activities, to achieve the highest possible score in this performance category (81 FR 28210). Credit awarded under the improvement activities performance category relies on the number of activities, not a specific number of hours. We refer readers to the section entitled “Required Period of Time for Performing an Activity” below, where we discuss the 90-day time period policy.

    Comment: Other commenters did not support the improvement activities performance category because of specialty-specific concerns about the inability to report on two or more activities; for example, one commenter indicated that doctors of chiropractic, who often practice in clinics with fewer than 15 MIPS eligible clinicians, would have problems reporting on two improvement activities. This commenter noted that during the early adopter program for the NCQA Patient-Centered Connected Care recognition program, doctors of chiropractic did not experience favorable consideration because the TCPI focused its funding on primary care clinicians.

    Response: We believe there are a sufficient number of broad activities from which specialty practices, as well as primary care clinicians, can select. Furthermore, as discussed previously in this section, we are finalizing a policy reducing the required number of activities for MIPS eligible clinicians and groups.

    After consideration of the comments received regarding the submission criteria, we are not finalizing the policies as proposed. Rather, we are reducing the maximum number of activities required to achieve the highest possible score in this performance category. Specifically, we are finalizing at § 414.1380 to set the improvement activities submission criteria under MIPS, to achieve the highest potential score, at two high-weighted improvement activities or four medium-weighted improvement activities, or some combination of high and medium-weighted improvement activities (which will total fewer than four activities), for MIPS eligible clinicians participating as individuals or as groups (refer to Table H in the Appendix to this final rule with comment period for improvement activities and weights).

    Exceptions to the above apply for: Small practices, practices located in rural areas, practices located in geographic HPSAs, non-patient facing MIPS eligible clinicians or groups, and MIPS eligible clinicians or groups that participate in a MIPS APM or a patient-centered medical home submitting in MIPS. As discussed in sections II.E.5.h. and II.E.6. of this final rule with comment period, we are reducing the maximum number of activities required for these MIPS eligible clinicians and groups to achieve the highest possible score in this performance category.

    Specifically, for MIPS eligible clinicians and groups that are small practices, practices located in rural areas or geographic HPSAs, or non-patient facing MIPS eligible clinicians or groups, to achieve the highest score, one high-weighted or two medium-weighted improvement activities are required. For these MIPS eligible clinicians and groups, in order to achieve one-half of the highest score, one medium-weighted improvement activity is required.

    We will also provide full credit for the improvement activities performance category for a MIPS eligible clinician or group that has received certification or accreditation as a patient-centered medical home or comparable specialty practice from a national program or from a regional or state program, private payer or other body that administers patient-centered medical home accreditation and certifies 500 or more practices for patient-centered medical home accreditation or comparable specialty practice certification.

    We believe that this approach is appropriate for the transition year of MIPS since this is a new performance category of requirements for MIPS eligible clinicians and we want to ensure all MIPS eligible clinicians understand what is required of them, while not being overly burdensome.

    All clinicians identified on the Participation List of an APM receive at least one-half of the highest score. To develop the improvement activities additional score assigned to all MIPS APMs, CMS will compare the requirements of the specific APM with the list of activities in the Improvement Activities Inventory in Table H in the Appendix to this final rule with comment period and score those activities in the same manner that they are otherwise scored for MIPS eligible clinicians according to section II.E.6.a.(4) of this final rule with comment period. For further explanation of how MIPS APM scores will be calculated, we refer readers to section II.E.5.h of this final rule with comment period. Should the MIPS APM not receive the maximum improvement activities performance category score, the APM entity can submit additional improvement activities. All other MIPS eligible clinicians or groups that we identify as participating in APMs will need to select additional improvement activities to achieve the improvement activities highest score.
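    The scoring path for MIPS APM participants described above can be summarized with a short interpretive sketch (Python; all names hypothetical). It assumes, as one reading of this policy, a 40-point category maximum, that the points derived from comparing the APM's requirements against the Improvement Activities Inventory are floored at one-half of the highest score, and that any additional activities submitted by the APM entity are added on up to the maximum; it is not an official scoring algorithm.

        # Interpretive sketch only; names and structure are hypothetical.
        CATEGORY_MAX = 40
        POINTS = {"medium": 10, "high": 20}

        def mips_apm_improvement_score(apm_comparison_points,
                                       additional_activity_weights=()):
            # Every clinician on the APM Participation List receives at least
            # one-half of the highest score.
            floor = CATEGORY_MAX // 2
            base = max(apm_comparison_points, floor)
            # The APM entity may submit additional activities if the maximum
            # has not been reached.
            extra = sum(POINTS[w] for w in additional_activity_weights)
            return min(base + extra, CATEGORY_MAX) / CATEGORY_MAX

        print(mips_apm_improvement_score(0))             # 0.5 (floor only)
        print(mips_apm_improvement_score(30))            # 0.75
        print(mips_apm_improvement_score(20, ["high"]))  # 1.0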

    (d) Required Period of Time for Performing an Activity

    We proposed at § 414.1360 that MIPS eligible clinicians or groups must perform improvement activities for at least 90 days during the performance period for improvement activities credit. We understand there are some activities that are ongoing whereas others may be episodic. We considered setting the threshold for the minimum time required for performing an activity to longer periods, up to a full calendar year. However, after researching several organizations, we believe a minimum of 90 days is a reasonable amount of time. One illustrative example of an organization that used 90 days as a window for reviewing clinical practice improvements is a large Veterans Administration health care program that set a 90-day window for reviewing improvements in the management of opioid dispensing.19

    19 Westanmo A, Marshall P, Jones E, Burns K, Krebs EE. Opioid Dose Reduction in a VA Health Care System—Implementation of a Primary Care Population-Level Initiative. Pain Med. 2015;16(5):1019-26.

    Additional clarification regarding how some activities meet the 90-day rule, or whether additional time is required, is reflected in the description of that activity in Table H in the Appendix to this final rule with comment period. In addition, we proposed that activities, where applicable, may be continuing (that is, could have started prior to the performance period and are continuing) or be adopted in the performance period, as long as an activity is being performed for at least 90 days during the performance period.

    We anticipate in future years that extended improvement activities time periods will be needed for certain activities. We will monitor the time period requirement to assess whether allowing extended time requirements may enhance the value associated with generating more effective outcomes or, conversely, whether more time adds little or no value for certain activities relative to desired outcomes. We requested comments on this proposal.

    The following is a summary of the comments we received regarding the required period of time for performing an activity.

    Comment: Many commenters supported CMS's proposal to require improvement activities performance for at least 90 days during the performance period. Some commenters requested clarification about the applicable time period, noting that not all activities in Table H in the Appendix to this final rule with comment period lend themselves to a 90-day performance period. Other commenters suggested limiting reporting to 30 days or other time periods shorter than 90 days to enable MIPS eligible clinicians to test innovative strategies for improvement activities. One commenter suggested requiring improvement activities be performed throughout the entirety of the performance period.

    Response: We note that we are requiring that each improvement activity be performed for a continuous 90-day period. Additionally, the continuous 90-day period must occur during the performance period.

    We do not believe that reporting periods as short as 30 days are sufficient to ensure that the activities being performed are robust enough to result in actual practice improvements. However, we are also cognizant of the inherent challenges associated with implementing new improvement activities, which is why we are finalizing our requirement that these activities be performed during a continuous 90-day period during the performance period. We view that reporting period as an appropriate balance for the transition year of MIPS, and will re-examine reporting periods for improvement activities in the future.

    Comment: Several commenters requested further clarification on our proposal regarding points for patient-centered medical home recognition in the improvement activities performance category. Specifically, the commenters requested clarification regarding the specific date, either December 31, 2017, or January 1, 2017, by which a practice needs to be recognized as a patient-centered medical home in order to claim optimal improvement activities performance category points.

    Response: We would like to explain that a MIPS eligible clinician or group must qualify as a certified patient-centered medical home or comparable specialty practice for at least a continuous 90 days during the performance period. Therefore, any MIPS eligible clinician or group that does not qualify by October 1st of the performance year as a certified patient-centered medical home or comparable specialty practice cannot receive automatic credit as such for the improvement activities performance category.

    Comment: Other commenters were very concerned that the required 90-day reporting period for improvement activities was simply inapplicable to many of the improvement activities listed by CMS in the improvement activities inventory, and that in other cases it is unclear what needs to be done for 90 days. The commenters believed the time period for improvement activities should be tailored to the particular activity being implemented. In some cases, positive change could occur in less than 90 days, but even for activities with a longer time horizon, a practice should receive credit for the improvement activities as long as it is in place for at least one quarter. Another commenter recommended that CMS assign timeframes for each improvement activity for 2017, to gather empirical data regarding the time intervals, instead of assigning a 90-day timeframe to all activities.

    Response: While not all of the activities in the improvement activities inventory lend themselves to performance for a full 90 consecutive days for all MIPS eligible clinicians, we believe that each activity can be performed for a full 90 consecutive days by some, if not all, MIPS eligible clinicians, and that there are a sufficient number of activities included that any eligible clinician may select and perform for a continuous 90 days that will allow them to successfully report under this performance category. Therefore, we are finalizing our proposal that for the transition year of MIPS, any selected activity must be performed for at least 90 consecutive days.

    After consideration of the comments regarding the required period of time for performing an activity, we are finalizing at § 414.1360 that MIPS eligible clinicians or groups must perform improvement activities for at least 90 consecutive days during the performance period for improvement activities performance category credit. Activities, where applicable, may be continuing (that is, could have started prior to the performance period and are continuing) or be adopted in the performance period as long as an activity is being performed for at least 90 days during the performance period.

    (4) Application of Improvement Activities to Non-Patient Facing MIPS Eligible Clinicians and Groups

    We understand that non-patient facing MIPS eligible clinicians and groups may have a limited number of measures and activities to report. Therefore, we proposed at § 414.1360 allowing non-patient facing MIPS eligible clinicians and groups to report on a minimum of one activity to achieve partial credit or two activities to achieve full credit to meet the improvement activities submission criteria. These non-patient facing MIPS eligible clinicians and groups receive partial or full credit for submitting one or two activities irrespective of any type of weighting, medium or high (for example, two medium activities will qualify for full credit). For scoring purposes, non-patient facing MIPS eligible clinicians or groups receive 30 points per activity, regardless of whether the activity is medium or high. For example, one high activity and one medium activity could be selected to receive 60 points. Similarly, two medium activities could also be selected to receive 60 points.
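    Because the allocation just described is purely arithmetic, the following minimal sketch (Python; names hypothetical) may help illustrate the proposed, not finalized, treatment: each reported activity counts 30 points toward a 60-point maximum regardless of whether it is medium- or high-weighted.

        # Illustrative sketch of the proposed (not finalized) allocation for
        # non-patient facing MIPS eligible clinicians and groups: 30 points
        # per activity regardless of weighting, against a 60-point maximum.
        PROPOSED_MAX = 60
        POINTS_PER_ACTIVITY = 30

        def proposed_non_patient_facing_score(num_activities):
            total = min(num_activities * POINTS_PER_ACTIVITY, PROPOSED_MAX)
            return total / PROPOSED_MAX

        print(proposed_non_patient_facing_score(1))  # 0.5 (partial credit)
        print(proposed_non_patient_facing_score(2))  # 1.0 (full credit)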

    We anticipate the number of activities for non-patient facing MIPS eligible clinicians or groups will increase in future years as we gather more data on the feasibility of performing improvement activities. As part of the process for identifying activities, we consulted with several organizations that represent a cross-section of non-patient facing MIPS eligible clinicians and groups. Illustrative examples of those consulted include organizations that represent cardiologists involved in nuclear medicine, nephrologists who serve only in a consulting role to other clinicians, or pathologists who, while they typically function as a team, have different members that perform different roles within their specialty that are primarily non-patient facing.

    In the course of those discussions these organizations identified improvement activities they believed would be applicable. The comments on activities appropriate for non-patient facing MIPS eligible clinicians or groups are reflected in the proposed improvement activities inventory across multiple subcategories. For example, several of these organizations suggested consideration for Appropriate Use Criteria (AUC). As a result, we have incorporated AUC into some of the activities. We encourage MIPS eligible clinicians or groups who are already required to use AUC (for example, for advanced imaging) to report an improvement activity other than one related to appropriate use. Another example, under Patient Safety and Practice Assessment, is the implementation of an antibiotic stewardship program that measures the appropriate use of antibiotics for several different conditions (Upper Respiratory Infection (URI) treatment in children, diagnosis of pharyngitis, and bronchitis treatment in adults) according to clinical guidelines for diagnostics and therapeutics. In addition, we requested comments on what activities would be appropriate for non-patient facing MIPS eligible clinicians or groups to add to the improvement activities inventory in the future. We requested comments on this proposal.

    The following is a summary of the comments we received regarding the application of improvement activities to non-patient facing MIPS eligible clinicians and groups.

    Comment: Some commenters expressed their support for the general approach of reducing the improvement activities performance category requirements for non-patient facing MIPS eligible clinicians and groups, as well as MIPS eligible clinicians practicing in rural areas or health professional shortage areas. Other commenters disagreed with that approach, stating that non-patient facing MIPS eligible clinicians should be able to obtain a full score of 60 points without any special modifications to improvement activities scoring while another commenter did not support reducing the improvement activities performance category requirements for these MIPS eligible clinicians and recommended that we hold all clinicians to the same standard. Other commenters suggested increasing the number of MIPS eligible clinicians in a practice required to meet the definition of a small practice from 15 to 25 for purposes of the improvement activities performance category. The commenters were also concerned that there are several subcategories such as Beneficiary Engagement and Expanded Practice Access that may limit non-patient facing MIPS eligible clinicians from having access to a broader list of activities than other types of practices and suggested that CMS limit the number of activities in the transition year to two for non-patient facing MIPS eligible clinicians.

    Response: We believe there are several subcategories, such as Beneficiary Engagement and Expanded Practice Access, that may limit a non-patient facing MIPS eligible clinician's access to the broader list of activities available to other types of practices, and we believe it is reasonable to limit the number of activities in the transition year for non-patient facing MIPS eligible clinicians. We refer readers to § 414.1305 for the definition of small practice for the purposes of MIPS.

    After consideration of the comments regarding the application of improvement activities to non-patient facing MIPS eligible clinicians and groups we are not finalizing the policies as proposed. Rather, based on commenters' feedback, we believe that it is appropriate to reduce the number of activities that a non-patient facing MIPS eligible clinician must select to achieve credit to meet the improvement activities data submission criteria. Specifically, we are finalizing at § 414.1380 that for non-patient facing MIPS eligible clinicians or groups, to achieve the highest score one high-weighted or two medium-weighted improvement activities are required. For these MIPS eligible clinicians and groups, in order to achieve one-half of the highest score, one medium-weighted improvement activity is required.

    (5) Special Consideration for Small, Rural, or Health Professional Shortage Areas Practices

    Section 1848(q)(2)(B)(iii) of the Act requires the Secretary, in establishing improvement activities, to give consideration to small practices and practices located in rural areas as defined at § 414.1305 and in geographic based HPSAs as designated under section 332(a)(1)(A) of the Public Health Service Act. In the MIPS and APMs RFI, we requested comments on how improvement activities should be applied to MIPS eligible clinicians or groups in small practices, in rural areas, and in geographic HPSAs: whether a lower performance requirement threshold or different measures should be established that would better allow those MIPS eligible clinicians or groups to perform well in this performance category, what methods should be leveraged to appropriately identify these practices, and what best practices should be considered to develop flexible and adaptable improvement activities based on the needs of the community and its population.

    We engaged high performing organizations, including several rural health clinics with 15 or fewer clinicians that are designated as geographic HPSAs, to provide feedback on relevant activities based on their specific circumstances. Some examples provided include participation in the implementation of self-management programs, such as for diabetes, and early use of telemedicine, as in the case of a top performing multi-specialty rural practice that covers 20,000 people over a 25,000-mile radius in a rural area of North Dakota. Comments on activities appropriate for MIPS eligible clinicians or groups located in rural areas or practices that are designated as geographic HPSAs are reflected in the proposed improvement activities inventory across multiple subcategories.

    After consideration of comments and listening sessions, we proposed at § 414.1360 to accommodate small practices and practices located in rural areas or geographic HPSAs for the improvement activities performance category by allowing MIPS eligible clinicians or groups to submit a minimum of one activity to achieve partial credit or two activities to achieve full credit. These MIPS eligible clinicians or groups receive partial or full credit for submitting two activities of any type of weighting (for example, two medium activities will qualify for full credit). We anticipate the requirement on the number of activities for small practices and practices located in rural areas or geographic HPSAs will increase in future years as we gather more data on the feasibility of these practices performing improvement activities. Therefore, we requested comments on what activities would be appropriate for these practices for the improvement activities inventory in future years.

    The following is a summary of the comments we received regarding special consideration for MIPS small practices, or practices located in rural areas or geographic HPSAs.

    Comment: Some commenters requested that, to facilitate rapid learning in the improvement activities performance category, CMS provide targeted, practical technical assistance to solo and small practices that is focused on improvement activities tailored to their level of quality improvement activity.

    Response: We intend to provide targeted, practical technical assistance to MIPS eligible clinicians. Specifically, we intend to make MACRA technical assistance available to solo and small practices. In addition, MIPS eligible clinicians may contact the Quality Payment Program Service Center with specific questions.

    Comment: Some commenters proposed that CMS recognize improvement efforts for clinicians in small practices by awarding them “full credit” in the improvement activities for participation in a Practice Transformation Network.

    Response: Please note that Transforming Clinical Practice Initiative (TCPI) credit, which includes activities such as participation in a Practice Transformation Network, is provided as a high-weighted activity for the transition year of MIPS.

    After consideration of the comments regarding special consideration for small practices and practices located in rural areas or geographic HPSAs, we are not finalizing the policies as proposed. Rather, based on stakeholders' feedback, we believe that it is appropriate to reduce the number of activities required to achieve full credit in this performance category for small practices and practices located in rural areas or geographic HPSAs. Specifically, we are finalizing at § 414.1380 that for MIPS eligible clinicians and groups that are small practices or located in rural areas or geographic HPSAs, to achieve full credit, one high-weighted or two medium-weighted improvement activities are required. In addition, we are modifying our proposed definition of rural area and finalizing at § 414.1305 that a rural area means clinicians in zip codes designated as rural, using the most recent HRSA Area Health Resource File data set available. We proposed using HRSA's 2014-2015 Area Resource File but decided a non-specific reference would be more broadly applicable. In addition, we are finalizing the following definitions, as proposed, at § 414.1305: (1) small practices means practices consisting of 15 or fewer clinicians and solo practitioners; and (2) Health Professional Shortage Areas (HPSA) means areas as designated under section 332(a)(1)(A) of the Public Health Service Act.

    We refer readers to section II.E.6.a.(4) of this final rule with comment period for a more detailed explanation of the number of points and scoring for the improvement activities performance category.

    (6) Improvement Activities Subcategories

    Section 1848(q)(2)(B)(iii) of the Act provides that the improvement activities performance category must include at least the subcategories listed below. The statute also provides the Secretary discretion to specify additional subcategories for the improvement activities performance category, which have also been included below.

    • Expanded practice access, such as same day appointments for urgent needs and after-hours access to clinician advice.

    • Population management, such as monitoring health conditions of individuals to provide timely health care interventions or participation in a QCDR.

    • Care coordination, such as timely communication of test results, timely exchange of clinical information to patients and other MIPS eligible clinicians or groups, and use of remote monitoring or telehealth.

    • Beneficiary engagement, such as the establishment of care plans for individuals with complex care needs, beneficiary self-management assessment and training, and using shared decision-making mechanisms.

    • Patient safety and practice assessment, such as through the use of clinical or surgical checklists and practice assessments related to maintaining certification.

    • Participation in an APM, as defined in section 1833(z)(3)(C) of the Act.

    In the MIPS and APMs RFI, we requested recommendations on the inclusion of the following five potential new subcategories:

    • Promoting Health Equity and Continuity, including (a) serving Medicaid beneficiaries, including individuals dually eligible for Medicaid and Medicare, (b) accepting new Medicaid beneficiaries, (c) participating in the network of plans in the Federally Facilitated Marketplace or state exchanges, and (d) maintaining adequate equipment and other accommodations (for example, wheelchair access, accessible exam tables, lifts, scales, etc.) to provide comprehensive care for patients with disabilities.

    • Social and Community Involvement, such as measuring completed referrals to community and social services or evidence of partnerships and collaboration with the community and social services.

    • Achieving Health Equity, such as for MIPS eligible clinicians or groups that achieve high quality for underserved populations, including persons with behavioral health conditions, racial and ethnic minorities, sexual and gender minorities, people with disabilities, people living in rural areas, and people in geographic HPSAs.

    • Emergency preparedness and response, such as measuring MIPS eligible clinician or group participation in the Medical Reserve Corps, measuring registration in the Emergency System for Advance Registration of Volunteer Health Professionals, measuring relevant reserve and active duty uniformed services MIPS eligible clinician or group activities, and measuring MIPS eligible clinician or group volunteer participation in domestic or international humanitarian medical relief work.

    • Integration of primary care and behavioral health, such as measuring or evaluating such practices as: Co-location of behavioral health and primary care services; shared/integrated behavioral health and primary care records; or cross-training of MIPS eligible clinicians or groups participating in integrated care. This subcategory also includes integrating behavioral health with primary care to address substance use disorders or other behavioral health conditions, as well as integrating mental health with primary care.

    We recognize that quality improvement is a critical aspect of improving the health of individuals and the health care delivery system overall. We also recognize that this will be the first time MIPS eligible clinicians or groups will be measured on the quality improvement work on a national scale. We have approached the improvement activities performance category with these principles in mind along with the overarching principle for the MIPS program that we are building a process that will have increasingly more stringent requirements over time.

    Therefore, for the transition year of MIPS, we proposed at § 414.1365 that the improvement activities performance category include the subcategories of activities provided at section 1848(q)(2)(B)(iii) of the Act. In addition, we proposed at § 414.1365 adding the following subcategories: “Achieving Health Equity,” “Integrated Behavioral and Mental Health,” and “Emergency Preparedness and Response.” In response to multiple MIPS and APMs RFI comments requesting the inclusion of “Achieving Health Equity,” we proposed to include this subcategory because: (1) It is important and may require targeted effort to achieve and so should be recognized when accomplished; (2) it supports our national priorities and programs, such as Reducing Health Disparities; and (3) it encourages “use of plans, strategies, and practices that consider the social determinants that may contribute to poor health outcomes.” (CMS, Quality Innovation Network Quality Improvement Organization Scope of Work: Excellence in Operations and Quality Improvement, 2014).

    Similarly, MIPS and APMs RFI comments supported the inclusion of the subcategory of “Integrated Behavioral and Mental Health,” citing that “statistics show 50 percent of all behavioral health disorders are being treated by primary care and behavioral health integration.” Additionally, according to MIPS and APMs RFI comments, behavioral health integration with primary care is already being implemented in numerous locations throughout the country. The third additional subcategory we proposed to include is “Emergency Preparedness and Response,” based on MIPS and APMs RFI comments that encouraged us to consider this subcategory to help ensure that practices remain open during disaster and emergency situations and support emergency response teams as needed. Additionally, commenters were able to provide a sufficient number of recommended activities (that is, more than one) that could be included in the improvement activities inventory in all of these proposed subcategories and the subcategories included under section 1848(q)(2)(B)(iii) of the Act.

    We also solicited public comments on two additional subcategories for future consideration:

    • Promoting Health Equity and Continuity, including (a) serving Medicaid beneficiaries, including individuals dually eligible for Medicaid and Medicare, (b) accepting new Medicaid beneficiaries, (c) participating in the network of plans in the Federally Facilitated Marketplace or state exchanges, and (d) maintaining adequate equipment and other accommodations (for example, wheelchair access, accessible exam tables, lifts, scales, etc.) to provide comprehensive care for patients with disabilities; and

    • Social and Community Involvement, such as measuring completed referrals to community and social services or evidence of partnerships and collaboration with community and social services.

    For these two subcategories, we requested activities that can demonstrate some improvement over time and go beyond current practice expectations. For example, maintaining existing medical equipment would not qualify as an improvement activity, but implementing improved clinical workflow processes that reduce wait times for patients with disabilities, or that improve coordination of care (including activities that regularly provide additional assistance in finding other needed care for patients with disabilities), would be examples of activities that could show improvement in clinical practice over time.

    We requested comments on these proposals.

    The following is a summary of the comments we received regarding improvement activities subcategories.

    Comment: Some commenters recommended inclusion of activities under the two additional subcategories: Promoting Health Equity and Continuity and Social and Community Involvement. One commenter suggested we include the ASCO/CNS Chemotherapy Safety Administration Standards, potentially under the achieving health equity subcategory, with the highest weight. Other commenters recommended we include the following activities in this subcategory: Adhering to the U.S. Access Board standards for medical diagnostic equipment; reduced wait time for patients with disabilities for whom long wait times are a barrier to care; replacing inaccessible equipment; remodeling or redesigning an office to meet accessibility standards in areas other than medical diagnostic equipment; and training staff on best practices in serving people with disabilities, including appropriate appointment lengths, person-centered care, and disability etiquette. The commenters also suggested that CMS include people with disabilities in the subcategory of expanded practice access, stating that despite the Americans with Disabilities Act (ADA), many clinician offices remain inaccessible to people with disabilities.

    One commenter recommended that for this subcategory, CMS require both MIPS eligible clinicians and community service clinicians to demonstrate improvement in their respective functions, processes, or outcomes and consider developing metrics to evaluate the quality of health and well-being services that community-based organizations provide. Another commenter recommended that activities in the Social and Community Involvement subcategory include employing community health workers (CHWs) or integrating CHWs employed by community-based organizations into care teams, establishing a community advisory council, and creating formal linkages with social services clinicians and community-based organizations.

    Response: We will proceed with the current proposed list of subcategories included in Table H in the Appendix to this final rule with comment period, as well as the subcategory for participation in an APM, for the transition year of MIPS. We will consider these recommendations in future years as part of the annual call for measures and activities in future rulemaking.

    Comment: A few commenters recommended that in order to encourage and allow MIPS eligible clinicians to proactively incorporate and test new technologies into their practice, while closely sharing the decision making process with patients, CMS should develop an additional improvement activities subcategory to encourage MIPS eligible clinicians and groups to engage patients to consider new technologies that may be an option for their care.

    Response: These recommendations will be considered during the call for activities and addressed in future rulemaking as necessary.

    Comment: Some commenters stated general support for the improvement activities performance category, including efforts to benefit long-term care, and the inclusion of the subcategories of Achieving Health Equity and Integration of Behavioral and Mental Health.

    Response: We have included the Achieving Health Equity and Integration of Behavioral and Mental Health subcategories.

    Comment: Other commenters recommended that CMS group similar activities together to reduce complexity and confusion, and offered the example of moving all QCDR activities under the Population Health Management subcategory so that MIPS eligible clinicians can easily determine which capabilities they already have or may adopt through use of a QCDR.

    Response: We believe that we have appropriately placed activities within their subcategories as proposed. However, we would like to note that we are committed to ease of reporting and we allow MIPS eligible clinicians to report across all subcategories. We will provide technical assistance through the Quality Payment Program Service Center and other resources.

    Comment: One commenter requested the ability to select an activity across any subcategory.

    Response: We are finalizing our proposed policy that MIPS eligible clinicians may select any activity across any improvement activities subcategory, as our intention is to provide as much flexibility for MIPS eligible clinicians as possible. We believe that where possible, MIPS eligible clinicians should choose activities that are most important or most appropriate for their practice across any subcategory.

    Comment: Many commenters supported CMS's flexibility in recognizing a broad range of improvement activities within the performance category, including Care Coordination, Beneficiary Engagement, and Patient Safety, and recommended that CMS include a fourth subcategory that allows practices to focus on office efficiency/operations in order to promote long-term success. Some commenters also requested that CMS include two additional subcategories: Promoting Health Equity and Continuity, and Social and Community Involvement.

    Response: We will proceed with the current proposed list of subcategories for the transition year of MIPS, included in Table H in the Appendix to this final rule with comment period, as well as the subcategory for participation in an APM. Further determinations of improvement activities and subcategories will be addressed in future rulemaking and as part of the annual call for the subcategory and activities process that will occur simultaneously with the annual call for measures.

    After consideration of the comments regarding improvement activities subcategories we are finalizing at § 414.1365 that the improvement activities performance category will include the subcategories of activities provided at section 1848(q)(2)(B)(iii) of the Act. In addition, we are finalizing at § 414.1365 the following additional subcategories: “Achieving Health Equity,” “Integrated Behavioral and Mental Health,” and “Emergency Preparedness and Response.”

    (7) Improvement Activities Inventory

    To implement the MIPS program, we are required to create an inventory of improvement activities. Consistent with our MIPS strategic goals, we believe it is important to create a broad list of activities that can be used by multiple practice types to demonstrate improvement activities and activities that may lend themselves to being measured for improvement in future years.

    We took several steps to ensure the initial improvement activities inventory is inclusive of activities in line with the statutory language. We conducted numerous interviews with highly performing organizations of all sizes and an environmental scan to identify existing models, activities, or measures that met all or part of the improvement activities performance category, including patient-centered medical homes, the Transforming Clinical Practice Initiative (TCPI), CAHPS surveys, and AHRQ's Patient Safety Organizations. In addition, we reviewed the CY 2016 PFS final rule with comment period (80 FR 70886) and the comments received in response to the MIPS and APMs RFI regarding the improvement activities performance category. The improvement activities inventory was compiled as a result of this stakeholder input, the environmental scan, the MIPS and APMs RFI comments, and subsequent working sessions with AHRQ and ONC and additional communications with CDC, SAMHSA, and HRSA.

    Based on the above discussions, we established guidelines for inclusion of improvement activities, using one or more of the following criteria (in any order):

    • Relevance to an existing improvement activities subcategory (or a proposed new subcategory);

    • Importance of an activity toward achieving improved beneficiary health outcomes;

    • Importance of an activity that could lead to improvement in practice to reduce health care disparities;

    • Aligned with patient-centered medical homes;

    • Representative of activities that multiple MIPS eligible clinicians or groups could perform (for example, primary care, specialty care);

    • Feasible to implement, recognizing importance in minimizing burden, especially for small practices, practices in rural areas, or in areas designated as geographic HPSAs by HRSA;

    • CMS is able to validate the activity; or

    • Evidence supports that an activity has a high probability of contributing to improved beneficiary health outcomes.

    Activities that overlap with other performance categories were included if there was a strong policy rationale to include them in the improvement activities inventory. We proposed to use the improvement activities inventory for the transition year of MIPS, as provided in Table H in the Appendix to this final rule with comment period. For further description of how MIPS eligible clinicians or groups would be designated to submit to MIPS for improvement activities, we refer readers to the proposed rule (81 FR 28177). For all other MIPS eligible clinicians or groups participating in APMs that would report to MIPS, this section applies, and we also refer readers to the scoring requirements for these MIPS eligible clinicians or groups in the proposed rule (81 FR 28234).

    We requested comments on the improvement activities inventory and suggestions for improvement activities for future years as well.

    The following is a summary of the comments we received regarding the statutory requirements for improvement activities related to the activities that must be specified under the improvement activities performance category. We refer readers to Table H in the Appendix to this final rule with comment period.

    General Comments Related to Activities Across More Than One Subcategory

    Comment: We received several comments supporting the broad descriptions provided for activities in the MIPS transition year to enable MIPS eligible clinicians to effectively and appropriately implement and report in a manner that best represents their performance. Other commenters requested more detail about the methodology used to assign weights to the activities, and questioned whether CMS intends to develop specifications for activities as it does for quality measures.

    Response: We appreciate the requests to provide further details around the methodology and specifications for improvement activities. Under the statute, we may contract with various entities to assist in identifying activities and specifying criteria for the activities. Accordingly, the methodology we used to assign weights to the activities was to engage multiple stakeholder groups, including the Centers for Disease Control, Health Resources and Services Administration, Office of the National Coordinator for Health Information Technology, SAMHSA, Agency for Healthcare Research and Quality, Food and Drug Administration, the Department of Veterans Affairs, and several clinical specialty groups, small and rural practices and non-patient facing clinicians to define the criteria and establish weighting for each activity. Activities were proposed to be weighted as high based on the extent to which they align with activities that support the patient-centered medical home, since that is the standard under section 1848(q)(5)(C)(i) of the Act for achieving the highest potential score for the improvement activities performance category, as well as with our priorities for transforming clinical practice. Activities that require performance of multiple actions, such as participation in the Transforming Clinical Practice Initiative, participation in a MIPS eligible clinician's state Medicaid program, or an activity identified as a public health priority (such as emphasis on anticoagulation management or utilization of prescription drug monitoring programs) were also proposed to be weighted as high. Future revisions and specifications to the activities may be provided through future rulemaking, consistent with the needs and maturation process of the MIPS program in future years.

    Comment: Several commenters supported the proposed list of activities but recommended that the number of required activities be reduced and that more activities be highly weighted.

    Response: As discussed in section II.E.5.f.(2) of this final rule with comment period, we have reduced the number of activities that MIPS eligible clinicians are required to report on to no more than four medium-weighted activities or two high-weighted activities, or any combination of high- and medium-weighted activities totaling fewer than four activities. We are reducing the number of activities for small practices, practices located in rural areas and geographic HPSAs, and non-patient facing clinicians to no more than one high-weighted activity or two medium-weighted activities to achieve the highest score.
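
    To illustrate the arithmetic behind these reporting requirements, the following sketch (a hypothetical illustration, not part of this final rule with comment period) assumes the point values discussed in the scoring sections of this rule: medium-weighted activities worth 10 points, high-weighted activities worth 20 points, a 40-point maximum for the improvement activities performance category, and doubled point values for small practices, practices in rural areas or geographic HPSAs, and non-patient facing clinicians, which is consistent with the reduced requirements described above.

        # Hypothetical sketch; the point values and doubling are assumptions stated above.
        MEDIUM_POINTS = 10
        HIGH_POINTS = 20
        CATEGORY_MAX = 40

        def improvement_activities_score(num_medium, num_high, special_status=False):
            """Capped improvement activities score for a clinician or group.

            special_status covers small practices, practices in rural areas or
            geographic HPSAs, and non-patient facing clinicians (assumed doubling).
            """
            multiplier = 2 if special_status else 1
            raw = multiplier * (num_medium * MEDIUM_POINTS + num_high * HIGH_POINTS)
            return min(raw, CATEGORY_MAX)

        # Four medium-weighted or two high-weighted activities reach the 40-point cap.
        assert improvement_activities_score(4, 0) == 40
        assert improvement_activities_score(0, 2) == 40
        # With doubled values, one high-weighted or two medium-weighted activities suffice.
        assert improvement_activities_score(0, 1, special_status=True) == 40
        assert improvement_activities_score(2, 0, special_status=True) == 40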

    Comment: Some comments recommended assigning a higher weight to QCDR-related improvement activities and QCDR functions, and one commenter recommended that use of a QCDR count for several activities.

    Response: Participating in a QCDR is not sufficient for demonstrating performance of multiple improvement activities, and we do not believe at this time it warrants a higher weighting. In addition, QCDR participation was not proposed as a high-weighted activity because, while useful for data collection, it is neither critical for supporting certified patient-centered medical homes nor does it require multiple actions, which are criteria we considered for high weighting. We also note that while QCDR participation may not automatically confer improvement activities performance category credit, it may put MIPS eligible clinicians in a position to report multiple improvement activities, since several activities specifically reference QCDR participation. We ask that each MIPS eligible clinician or group select from the broad list of activities included in Table H in the Appendix to this final rule with comment period.

    Comment: One commenter suggested that we list ID numbers for activities listed in the improvement activities inventory.

    Response: We will include IDs in the online portal, as well as a short title for each activity.

    Comment: Many commenters suggested that we adopt more specialty-specific activities, citing their belief that many improvement activities are focused on primary care. The commenters made many suggestions for specialty-specific activities, including care coordination, patient safety, and other activities.

    Response: There are many future activities that we would like to develop and consider for inclusion in MIPS, including those specific to specialties. We intend to take these comments into account in future rulemaking and as part of the annual call for the subcategory and activities process that will occur simultaneously with the annual call for measures. We note that the current improvement activities inventory does offer activities that can benefit all practice types and we believe specialists will be able to successfully report under this performance category.

    Comment: One commenter requested that CMS clarify and distinguish between activities under the direction and ability of a user, as opposed to activities under the clinical supervision and control of MIPS eligible clinicians or groups. Another commenter stated that activities under the improvement activities performance category needed to reward active participation in an activity rather than rewarding the MIPS eligible clinicians for being part of an entity that pays for the activity. For example, the commenter stated that a teaching hospital might be the awardee in a BPCI contract, but the faculty practice clinicians are leading the effort to redesign care.

    Response: We believe the requirement that the MIPS eligible clinician or group actually perform the activity for a continuous 90-day period addresses this concern, because credit is earned through active performance of the activity rather than through affiliation with an entity that pays for the activity. In the example the commenter provided, the practices reporting at the TIN/NPI level would receive the credit for the improvement activities.

    Comment: Some commenters believe that the activities in this performance category would not lead to improvement.

    Response: For the transition year of MIPS, we intend for MIPS eligible clinicians to focus on achievement of these activities; they do not need to show that the activity led to improvement. We believe these activities are important for all MIPS eligible clinicians because their purpose is to encourage movement toward clinical practice improvement.

    Comment: Another commenter noted the proposed statement that MIPS eligible clinicians who are required to consult clinical decision support (CDS) under an existing mandate "are encouraged" to select improvement activities other than those related to the use of CDS. The commenter suggested that CMS maintain this statement as a recommendation and not require that a MIPS eligible clinician or group report another improvement activity if they are participating under the mandate and report an improvement activity related to CDS.

    Response: We would like to note that we encourage MIPS eligible clinicians or groups who are already required to use AUC (for example, for advanced imaging) to report an improvement activity other than one related to appropriate use. We do not mandate any particular activity that must be reported. Further, we do not require MIPS eligible clinicians to consult with CDS. We also do not require that a MIPS eligible clinician or group report another improvement activity if they are already participating in and reporting on an existing activity related to CDS.

    Comment: One commenter suggested that CMS consider the existing reporting burdens on hospital-based MIPS eligible clinicians, and encouraged CMS to work closely with third party recognition programs to ensure that information on recognized MIPS eligible clinicians can be accurately reported directly to CMS and linked to MIPS eligible clinicians accordingly. Another commenter suggested that CMS ensure that specifications for improvement activities undergo proper stakeholder comment, including a public comment period prior to finalization. A few commenters also requested that CMS allow additional stakeholder comment on the improvement activities specifications.

    Response: We intend to continue assessing hospital-based MIPS eligible clinicians' reporting burden under the MIPS program. While the current activity list is expansive, there remain opportunities to expand the list further in future years. The current list, however, does offer activities that can benefit all practice types, and we believe hospital-based specialists will be able to successfully report improvement activities. Additionally, we provided earlier opportunities for public input and comment on activities as part of both the 2015 MIPS and APMs RFI and the 2016 proposed rule.

    Comment: Another commenter recommended that CMS change language regarding the definition of medical homes to those that are “nationally recognized accredited or certified” as the commenter regularly uses certified and accredited interchangeably.

    Response: We refer readers to section II.E.5.f. of this final rule with comment period for discussions on the definition of recognized certifying or accrediting bodies for patient-centered medical homes.

    Comment: One commenter recommended a flexible approach to quality assessment that emphasizes outcomes of care and that favors continuous quality improvement methodologies rather than rigid, process-oriented patient-centered medical home certification models. The commenter believed that relying on patient-centered medical home certification as a means of quality assessment runs the risk of practices not actually realigning efforts to produce higher quality and more cost effective care.

    Response: We refer readers to section II.E.6.a.(4)(c) of this final rule with comment period where we discuss patient-centered medical home certification models.

    Activities Related to the Patient Safety and Practice Assessment Subcategory

    Comment: We received more than 25 comments requesting changes or additions to activities under the Patient Safety and Practice Assessment subcategory. Under this subcategory, several commenters suggested that CMS consider Maintenance of Certification (MOC) Part IV participation as an improvement activity in all improvement activities subcategories, not just the Patient Safety and Practice Assessment subcategory. Other commenters suggested that participation in Maintenance of Certification Part IV should be re-designated as a high priority. A few commenters also pointed out inconsistencies between the reference to PDMP as a high-weighted activity in this section and what is included in the improvement activities inventory, and requested that this activity be changed to a high weight in the inventory list.

    Response: We recognize that some activities may align with more than one subcategory but have assigned each activity to one and only one subcategory to minimize confusion and avoid an unwieldy list of too many or duplicative activities that may be difficult to select from for the transition year of MIPS. MIPS eligible clinicians may select any activity across any subcategories to meet the criteria for the improvement activities performance category. We look forward to working with stakeholders on activity alignments with subcategories in future years. We also believe that high weighting should be used for activities that directly address practice areas with the greatest impact on beneficiary care, safety, health, and well-being. We have focused high weighting under the subcategories on those activities. We do not believe there is an inconsistency as PDMP Consultation is listed as a high-weighted activity and annual registration in a PDMP is listed as a medium-weighted activity. We have made a revision in the Consultation of PDMP activity to further elaborate and explain the requirements.

    Comment: Many commenters suggested that CMS recognize continuing medical education (CME) activities provided by nationally recognized accreditors, completion of other state/local licensing requirements, and providing free care to those in need as improvement activities, particularly those CME activities that involve assessment and improvement of patient outcomes or care quality, best practice dissemination, and aid in the application of the "three aims" (better care; healthier people and communities; smarter spending), the National Quality Standards, and the CMS Quality Strategy. The commenters also recommended that surveys or interviews of clinicians to determine whether they have applied lessons learned to their practice for at least 90 days following an activity should meet compliance requirements.

    Response: We appreciate the suggestions that we grant improvement activities credit for activities already certified as CME activities; however, for the transition year of the MIPS program we do not have sufficient data to identify which CME activities could be included. We will consider these recommendations for additional activities in future years as part of the nomination process.

    Comment: One commenter recommended that the improvement activities performance category be used to evaluate which activities, and in what quantity, contribute to increased value and improved quality, and that CMS avoid using overly prescriptive thresholds or requirements for quantities of activities, such as those used in CPC, that show no correlation to outcomes, quality, or costs. The commenter suggested that CMS align its criteria for improvement activities with activities that are included as components of the patient-centered medical home model. Another commenter advised significantly reducing process-oriented measures in the improvement activities performance category and building on activities that clinicians were already completing, because process-oriented measures could be perceived as busy work. This commenter also stated that when relevant improvement activities were not otherwise available, CMS could reduce the burden by allowing certified improvement activities as partial or complete satisfaction of improvement activities requirements.

    Response: We believe that MIPS eligible clinicians are dedicated to the care of beneficiaries and will only attest to activities that they have undertaken in their practice and that follow the specific guidelines of each improvement activity. We note that we have not proposed prescriptive thresholds for activities beyond an attestation that a certain percentage of patients were impacted by a given activity, and that in establishing the improvement activities performance category we included activities that align with those that patient-centered medical homes typically perform. We are not reducing process-oriented improvement activities in this performance category because these are activities that multiple practices recommended as contributing to practice improvements. We are also not allowing partial completion of an activity to count toward the improvement activities score. We refer readers to section II.E.5.f.(3)(c) of this final rule with comment period for discussions of how we have reduced the number of activities required for the improvement activities performance category, which we believe also addresses burden. In addition, we would like to explain that the activities in the improvement activities inventory were identified by different types of practices, such as rural and small practices as well as large practices, which indicated that these are improvement activities clinicians are already performing and believed they should be included in the improvement activities inventory.

    Activities Related to the Population Management Subcategory

    Comment: We received more than 10 comments related to the Population Management subcategory. One commenter expressed support for the 2014 AHA/ACC/HRS Guideline for the Management of Patients with Atrial Fibrillation, noting that comprehensive patient education, care coordination, and appropriate dosing decisions are important for managing patients on anticoagulants, including warfarin and novel oral anticoagulants. The commenter also indicated that the use of validated electronic decision support and clinical management tools, particularly those that support shared decision making, may benefit all patients treated with anticoagulants. The commenter recommended that improvement activities be inclusive of patients treated with all anticoagulants while recognizing differences in management requirements.

    Response: We agree that comprehensive patient education, care coordination, and appropriate dosing decisions are important for managing patients on anticoagulants. We acknowledge that the use of validated electronic decision support and clinical management tools, particularly those that support shared decision making, may benefit all patients treated with anticoagulants. We refer readers to section II.E.5.g. of this final rule with comment period for more information on electronic decision support. We also acknowledge that improvement activities should be inclusive of patients treated with all anticoagulants while recognizing differences in management requirements.

    We note that because anticoagulants have been consistently identified as the most common causes of adverse drug events across health care settings, the Population Management activity starting with "Participation in a systematic anticoagulation program (coagulation clinic, patient self-reporting program, patient self-management program)" highlights the importance of close monitoring of Vitamin K antagonist therapy (warfarin) and the use of other coagulation cascade inhibitors.

    Comment: One commenter suggested adding the NCQA Heart/Stroke Recognition Program as an activity for the Population Management subcategory. The commenter expressed their belief that attending an educational seminar on new treatments that covers medication management and side effects for cancer treatments such as neutropenia or immune reactions would improve safety and result in better care for beneficiaries.

    Response: We appreciate this additional recommendation and will consider it in future years.

    Activities Related to the Behavioral Health Subcategory

    Comment: We received more than 20 comments related to activities under the Behavioral Health subcategory. One commenter agreed with our proposed activity: “Tobacco use: Regular engagement of MIPS eligible clinicians or groups in integrated prevention and treatment interventions, including tobacco use screening and cessation interventions (refer to NQF #0028) for patients with co-occurring conditions of behavioral or mental health and at risk factors for tobacco dependence,” and in addition, requested that CMS consider adding features from a successful model such as the Million Hearts Multidisciplinary Approach to Increase Smoking Cessation Interventions that was demonstrated in New York City.

    Response: We will consider the best way to incorporate additional smoking cessation efforts in MIPS and our other quality programs in the future.

    Comment: Several commenters requested that CMS expand various descriptions in the improvement activities inventory list, such as for the activity “Participation in research that identifies interventions, tools or processes that can improve a targeted patient population,” to include reference to engagement in federally funded clinical research.

    Response: We will take this suggestion into consideration for future rulemaking.

    Activities Related to the Expanded Practice Access Subcategory

    Comment: We received only a few unique comments related to expanded practice access, most of them related to telehealth. These commenters suggested that we consider additional activities under the improvement activities performance category, potentially including telehealth services or other activities nominated by MIPS eligible clinicians or groups. The commenters made specific suggestions, including follow-up inpatient telehealth consultations furnished to beneficiaries in hospitals or SNFs, office or other outpatient visits, transitional care management services with high medical decision complexity, psychoanalysis, and family psychotherapy.

    Response: In developing improvement activities, developers' considerations should include whether the activity is evidence-based, is applicable across service settings, and aligns with the National Quality Strategy and the CMS Quality Strategy. We will take the commenters' suggestions into account for future rulemaking.

    Activities Related to the Beneficiary Engagement Subcategory

    Comment: Commenters suggested numerous nomenclature changes within the Beneficiary Engagement subcategory. For example, one commenter suggested that we refer to "clinical registries" in general rather than QCDRs, since many MIPS eligible clinicians may participate in clinical registries without using them for MIPS participation. Other commenters suggested that we revise the wording of the proposed activity "Participation in CMMI models such as Million Hearts Campaign" to reflect that this is a model, not a "campaign," and suggested that we include the wording "standardized treatment protocols" in the proposed activity "Use decision support and protocols to manage workflow in the team to meet patient needs." Other commenters suggested changes to the activity labels in Table H in the Appendix to this final rule with comment period.

    Response: We have revised the wording of the Million Hearts activity to read “Participation in CMMI models such as the Million Hearts Cardiovascular Risk Reduction Model.” In addition, we have revised the decision support activity to read “Use decision support and standardized, evidence-based treatment protocols to enhance effective workflow in the team to meet patient needs.”

    Comment: Another commenter expressed concern that the proposed activity "Use tools to assist patients in assessing their need for support for self-management (for example, the Patient Activation Measure or How's My Health)" mentioned the Patient Activation Measure, which the commenter stated is proprietary and expensive if widely used. The commenter recommended that we consider the variety of psychometric tools that can be used to measure not only patient motivation, but also confidence and intent to act. The commenter stated that, for example, specifically calling out activation inhibits innovation in health behavior change. The commenter stated that it is possible to measure the burden of patient symptoms by using instruments like impact index assessments. The commenter further stated that asking patients how much they are bothered by their symptoms can help healthcare professionals assess the quality of life a patient is experiencing.

    Response: We recognize that the Patient Activation Measure (PAM) survey is proprietary and does require an investment on the practices' part if they choose to utilize it. However, in the activity noted above related to PAM, we explain that this is an example of a tool that could be used. Other tools to assist patients in assessing their need for support for self-management would be acceptable for this activity.

    Comment: Some commenters questioned whether a Million Hearts award received in prior years can count for improvement activities credit as prior awardees are not allowed to compete again. The commenters suggested that prior year awards should count for improvement activities credit and bonus points as well.

    Response: We recognize the importance of the Million Hearts Cardiovascular Risk Reduction Model and have included that activity in the improvement activities inventory. All activities within the improvement activities inventory, however, must be performed for a continuous 90-day period that must occur within the performance period.

    Activities Related to the Emergency Response and Preparedness Subcategory

    Comment: Some commenters noted that the Emergency Response and Preparedness subcategory was the only subcategory with no high-weighted activities and several asked for more high-weighted activities.

    Response: We are changing one existing activity in the Emergency Response and Preparedness subcategory, "Participation in domestic or international humanitarian volunteer work. MIPS eligible clinicians and groups must be registered for a minimum of 6 months as a volunteer for domestic or international humanitarian volunteer work," to a high-weighted activity that reads: "Participation in domestic or international humanitarian volunteer work. Activities that simply involve registration are not sufficient. MIPS eligible clinicians must attest to domestic or international humanitarian volunteer work for a period of a continuous 60 days or greater." We have changed this activity from requiring registration for 6 months to requiring participation for 60 days to align with our overall new performance period policy, which requires only a 90-day period; the 60-day participation would fall within that new 90-day window. We are also changing this to a high-weighted activity because such volunteer work is intensive and often involves travel and working under challenging physical and clinical circumstances. Table H in the Appendix to this final rule with comment period reflects this revised description of the existing activity and the revised weighting.

    Comment: One commenter recommended the exclusion of “Participation in domestic or international humanitarian volunteer work” activity, stating that it is unlikely to lead to improvements in the quality or experience of care for a MIPS eligible clinician's patients. Another commenter expressed concern that their patient satisfaction ratings will suffer because they are actively attempting to reduce prescription drug overdoses. The commenter suggested removing the patient satisfaction component.

    Response: We disagree that this activity is unlikely to improve quality of care. Generations of clinicians who have volunteered for these efforts widely describe caring for injured and medically unwell patients during disasters as an excellent learning experience and report that their volunteer work improved the clinical skills they bring to their patients in routine practice. We believe that "Participation in domestic or international humanitarian volunteer work" will have a similar positive impact for MIPS eligible clinicians and their patients.

    Comment: A few commenters believed that the Congress expressly defined remote monitoring and telehealth as a component of care coordination in improvement activities and understood the vital role of personal connected health in delivery of high quality clinical practice. The commenters suggested that CMS modify improvement activities in a manner that would reflect statutory language and provide incentive for the conduct of improvement activities using digital, interoperable communications.

    Response: We have provided appropriate incentives through other performance categories aligned with the policy goals for interoperability of EHRs and for achieving widespread exchange of health information. We also note that the statutory example of "use of remote monitoring or telehealth" is reflected in several activities, including, under the Care Coordination subcategory, "Ensuring that there is bilateral exchange of necessary patient information to guide patient care that could include participating in a Health Information Exchange," which would require interoperable communications. Under the Population Management subcategory, we provide an incentive for using remote monitoring or telehealth through the activity related to oral Vitamin K antagonist therapy (warfarin), which provides that rural or remote patients can be managed using remote monitoring or telehealth options.

    Comment: Other commenters supported the inclusion of improvement activities as a new performance category for clinician performance under the MIPS program, particularly incentivizing the use of health IT, telehealth, and connection of patients to community-based services. In addition, specifically for the improvement activities regarding connections to community-based services and the use of health IT and telehealth, the commenters supported CMS increasing their weight by rating them as "high" in the final rule with comment period.

    Response: We believe that high weighting should be used for activities that directly address areas with the greatest impact on beneficiary care, safety, health, and well-being. We have focused high weighting under the subcategories on those activities.

    Comment: Another commenter recommended that we enhance the clarity of the improvement activities definitions in the final rule with comment period and in subregulatory guidance so that MIPS eligible clinicians know what they must do to qualify for a given improvement activity. For example, where a general and non-specific definition is intentional to permit clinicians flexibility, the commenter requested that CMS define expectations on how MIPS eligible clinicians can meet and substantiate such an improvement activity requirement and specify the evidence that MIPS eligible clinicians would be expected to retain as documentation for a potential audit, including documentation for non-percentage-based measures. The commenter expressed concern that, given short and ambiguous definitions in Table H in the Appendix to this final rule with comment period, clinicians may avoid a given improvement activity based on varied understandings of what satisfying the activity entails.

    Response: MIPS eligible clinicians may retain any documentation that is consistent with the actions they took to perform each activity. We also note that any MIPS eligible clinician may report on any activity; for example, a cardiologist may choose to select an improvement activity related to emergency response and preparedness, if applicable. We will provide MIPS eligible clinicians more information about documentation expectations for the transition year of MIPS in subregulatory guidance.

    Activities Related to the Health Equity Subcategory

    Comment: We received over 10 comments related to activities under Health Equity. One commenter recommended that we add an activity that encourages referrals to a clinical trial for a minority population. Another commenter requested inclusion of an established health equity council. Another commenter supported a Promoting Health Equity and Continuity subcategory, and recommended including the Braveman et al. definition of health equity and the Tool for Health and Resilience in Vulnerable Environments (THRIVE) framework.

    Response: We will consider these recommendations in future years as part of the nomination process.

    Activities Related to the Care Coordination Subcategory

    Comment: We received at least 10 comments related to Care Coordination activities. One commenter recommended that we expand the subset of activities listed for the Care Coordination subcategory in the improvement activities inventory list to include long-term services and supports. Another commenter supported our proposal to retain the activities related to care management and individualized plans of care in the proposed improvement activities inventory and to refine these activities over time by incorporating principles of person-centered care to coordinate care and by identifying, tracking, and updating individual goals as they relate to the care plan. One commenter recommended that participation in a Rural Health Innovation Collaborative (RHIC) count as an improvement activity since RHICs are recognized by Congress as organizations that can give technical support to small practices, rural practices, and areas experiencing a shortage of clinicians.

    Response: We will work with stakeholders as part of the future nomination process to identify additional activities.

    After consideration of the comments regarding the improvement activities inventory, we are finalizing the improvement activities and weighting provided in Table H in the Appendix to this final rule with comment period as proposed, with the exception of the following: One change for one activity in the Emergency Response and Preparedness subcategory from a medium to a high-weighted activity; one change for one activity in the Population Management subcategory from a medium to a high-weighted activity; and the addition of an asterisk (*) in Table H in the Appendix to this final rule with comment period next to activities that also qualify for the advancing care information bonus, for which we refer readers to section II.E.6.a.(5) of this final rule with comment period. We also included language elaborating on the requirements for the Consultation of PDMP activity. We are correcting the reference to the Million Hearts Cardiovascular Risk Reduction Model instead of describing it as a "campaign"; revising the wording of the proposed activity "Use decision support and protocols to manage workflow in the team to meet patient needs" to read "Use decision support and standardized treatment protocols to manage workflow in the team to meet patient needs"; and removing the State Innovation Model participation activity. Our reasoning for these changes is to alleviate confusion related to the activities based on comments, to correct a previously incorrect term such as the use of the word "campaign," or to reflect a change in another section of this final rule with comment period, specifically the inclusion of qualifying improvement activities for the advancing care information bonus. Our reasoning for changing the CAHPS for MIPS survey weighting to high is that the CAHPS for MIPS survey will be optional for large groups under the quality performance category and we want to encourage use of this survey. Another contributing element was the need to ensure that options beyond the CAHPS for MIPS survey were available to provide credit for surveying and for CAHPS surveys that did not meet thresholds/standards for reporting in the measure category (largely because they did not have enough beneficiaries). Our reasoning for removing the State Innovation Model (SIM) activity is that SIM is a series of different agreements between CMS and states, and clinicians are not direct participants. In addition, we do not collect TIN/NPI combinations, so there is no way to validate participation based on attestation. Our reasoning for changing the weighting on the Emergency Response and Preparedness activity is that this improvement activity requires the clinician to pay out of pocket to travel and perform volunteer work (personal costs/risks), likely contributing some donated medical durables/expendables (practice material resources), and to miss scheduled appointments with patients (foregoing practice revenue). Our reasoning for changing the weighting on the Population Management activity is that this improvement activity is consistent with section 1848(q)(2)(B)(iii) of the Act, which requires the Secretary to give consideration to the circumstances of practices located in rural areas and geographic HPSAs. Rural health clinics would be included in that definition for consideration of practices in rural areas. All of these changes are reflected in Table H in the Appendix to this final rule with comment period.

    (a) CMS Study on Improvement Activities and Measurement

    (1) Study Purpose

    Previous experience with the PQRS, VM, and Medicare EHR Incentive programs has shown that many clinicians have errors within their data sets, as well as problems in understanding and choosing the data that correspond to their selected quality measures. In CMS' quest to create a culture of improvement using evidence-based medicine on a consistent basis, fully understanding the strengths and limitations of the current processes is crucial. To better understand the current processes, we proposed to conduct a study on clinical improvement activities and measurement to examine clinical quality workflows and data capture using a simpler approach to quality measures.

    The lessons learned in this study on practice improvement and measurement may influence changes to future MIPS data submission requirements. The goals of the study are to see whether there will be improved outcomes, reduced reporting burden, and enhancements in clinical care for selected MIPS eligible clinicians desiring:

    • A more data-driven approach to quality measurement.

    • Measure selection unconstrained by a CEHRT program or system.

    • Improved quality of data submitted to CMS.

    • Enabling CMS to get data more frequently and provide feedback more often.

    (2) Study Participation Credit and Requirements: Study Participation Eligibility

    The study will select 10 non-rural individual MIPS eligible clinicians or groups of fewer than three non-rural MIPS eligible clinicians, 10 rural individual MIPS eligible clinicians or groups of fewer than three rural MIPS eligible clinicians, 10 groups of three to eight MIPS eligible clinicians, five groups of nine to 20 MIPS eligible clinicians, three groups of 21 to 100 MIPS eligible clinicians, two groups of greater than 100 MIPS eligible clinicians, and two specialist groups of MIPS eligible clinicians. Participation would be open to a limited number of MIPS eligible clinicians in rural settings and non-rural settings. A rural area is defined at § 414.1305, and a non-rural area would be any area not included as part of the rural definition. MIPS eligible clinicians and groups would need to sign up from January 1, 2017, to January 31, 2017, using a web-based interface. Participants would be approved on a first-come, first-served basis and must meet all the required criteria. Selection will also aim for representation across different states and different clinician settings that fall within the participation eligibility criteria.

    MIPS eligible clinicians and groups in the CMS study on practice improvement and measurement will receive full credit (40 points) for the improvement activities performance category of MIPS after successfully electing, participating and submitting data to the study coordinators at CMS for the full calendar year.

    (3) Procedure

    Based on feedback and surveys from MIPS eligible clinicians, study measurement data will be collected at baseline and every three months (quarterly) thereafter for the duration of the calendar year. Study participants who can submit data on a more frequent basis will be encouraged to do so.

    Participants will be required to attend a monthly focus group to share lessons learned along with providing survey feedback to monitor effectiveness. The focus group would also include providing visual displays of data, workflows, and best practices to be shared amongst the participants to obtain feedback and make further improvements. The monthly focus groups would be used to learn from the practices on how to be more agile as we test new ways of measure recording and workflow.

    For CY 2017, the participating MIPS eligible clinicians or groups would submit their data and workflows for a minimum of three MIPS CQMs that are relevant and prioritized by their practice. One of the measures must be an outcome measure, and one must be a patient experience measure. The participating MIPS eligible clinicians could elect to report on more measures as this would provide more options from which to select in subsequent years for purposes of measuring improvement.

    If MIPS eligible clinicians or groups calculate the measures working with a QCDR, qualified registry, or CMS-approved third party intermediary, we would use the same data validation process described in the proposed rule (81 FR 28279). We would only collect the numerator and denominator for the measures selected for the overall population, all patients/all payers. This would enable the practices to build the measures based on what is important for their area of practice while increasing the quality of care.
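
    As a purely illustrative sketch of the quarterly numerator/denominator collection described above (the record structure, field names, and measure identifier are hypothetical and not specified by this final rule with comment period), a participant's submission for one measure and one quarter might look like the following:

        # Hypothetical illustration only; no submission format is prescribed by the rule.
        from dataclasses import dataclass

        @dataclass
        class QuarterlyMeasureSubmission:
            measure_id: str   # a MIPS CQM selected and prioritized by the practice
            quarter: str      # e.g., "2017-Q1"
            numerator: int    # patients meeting the measure, all patients/all payers
            denominator: int  # eligible patients, all patients/all payers

            def performance_rate(self):
                """Performance rate as a percentage; None if the denominator is zero."""
                if self.denominator == 0:
                    return None
                return 100.0 * self.numerator / self.denominator

        example = QuarterlyMeasureSubmission("EXAMPLE-OUTCOME-MEASURE", "2017-Q1", 45, 60)
        print(example.performance_rate())  # 75.0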

    The first round of the study will last for 1 year, after which new participants will be recruited. Participants electing to continue in future years would be afforded the opportunity to opt in or opt out following the successful submission of data to us. The first opportunity to continue in the study would be at the end of the 2017 performance period. Eligible clinicians who elect to join the study but fail to meet the study requirements and/or fail to successfully submit the required data will be removed from the study. Unsuccessful study participants will then be subject to the full requirements for the improvement activities performance category.

    In future years, participating MIPS eligible clinicians or groups would select three of the measures for which they have baseline data from the 2017 performance period to compare against later performance years.

    We requested comment on the study and welcome suggestions on future study topics.

    The following is a summary of the comments we received regarding the CMS study on improvement activities and measurement.

    Comment: Commenters recommended that CMS monitor performance of the activities by the various MIPS eligible clinicians and groups for trends and consider whether activities result in better outcomes.

    Response: We will consider these issues as we develop the study.

    Comment: Some commenters supported CMS' proposal to conduct a study on improvement activities and measurement, in general, to examine clinical quality workflows and data capture using a simpler approach to quality measures. The commenters believed that CMS proposes an appropriate incentive by allowing a limited number of selected clinicians and groups to receive full credit (60 points) for the improvement activities performance category if they participate in the study. However, the commenters recommended that CMS expand this opportunity so that it is available to a broader and more diverse swath of practices, including emergency medicine practices. Other commenters supported our plans to conduct an annual call for activities to build the improvement activities inventory and our plans to study measurement, workflow, and current challenges for clinical practices. The commenters suggested that we ensure that we study a diverse range of participants when conducting that analysis.

    Response: We plan to expand as we learn from the initial study, which is currently open to all types of practices. We acknowledge that there are many variables affecting measurement and will continue to make sure we look at this diversification as we study different methods of measurement.

    Comment: One commenter was concerned about the study and wanted to know if CMS expects vendors to develop EHR workflows and reports for study measures and if vendors would be expected to support the study's requirements for more frequent data submission.

    Response: We will work with these vendors and others as the study evolves. We note that for this study, we will use measures that already exist in programs, so that no new development is required for technical workflows or documentation requirements for those products included on the ONC certified health IT product list (CHPL).

    Comment: Another commenter agreed that improvement activities study participants should receive full credit for improvement activities and that those participants that do not adhere to the study guidelines should be removed and subject to typical improvement activities requirements. This commenter recommended that CMS provide a final date by which it plans to make these exclusion determinations and that after this date, CMS can work with the ex-participant to help them complete the year. They also recommended that all participants who get excluded from the study not be allowed to participate in the study the following year.

    Response: We will work with stakeholders to further define future participation requirements as this study evolves.

    After consideration of the comments regarding the CMS study on improvement activities and measurement, we are finalizing the policies with the exception that successful participation in the pilot would result in full credit for the improvement activities performance category of 40 points, not 60 points, in accordance with the revised finalized scoring. If participants do not meet the study guidelines, they will be removed from the study and will need to follow the current improvement activities guidelines.

    (8) Improvement Activities Policies for Future Years of the MIPS Program

    (a) Proposed Approach for Identifying New Subcategories

    We proposed, for future years of MIPS, to consider the addition of a new subcategory to the improvement activities performance category only when the following criteria are met:

    • The new subcategory represents an area that could highlight improved beneficiary health outcomes, patient engagement and safety based on evidence.

    • The new subcategory has a designated number of activities that meet the criteria for an improvement activity and cannot be classified under the existing subcategories.

    • Newly identified subcategories would contribute to improvement in patient care practices or improvement in performance on quality measures and cost performance categories.

    In future years, MIPS eligible clinicians or groups would have an opportunity to nominate additional subcategories, along with activities associated with each of those subcategories that are based on criteria specified for these activities, as discussed in the proposed rule. We requested comments on this proposal.

    We did not receive any comments regarding policies for identifying new improvement activities subcategories in future years of the MIPS program. We therefore are finalizing the addition of a new subcategory to the improvement activities performance category only when the following criteria are met:

    • The new subcategory represents an area that could highlight improved beneficiary health outcomes, patient engagement and safety based on evidence.

    • The new subcategory has a designated number of activities that meet the criteria for an improvement activity and cannot be classified under the existing subcategories.

    • Newly identified subcategories would contribute to improvement in patient care practices or improvement in performance on quality measures and cost performance categories.

    (b) Request for Comments on Call for Measures and Activities Process for Adding New Activities

    We plan to develop a call for activities process for future years of MIPS, where MIPS eligible clinicians or groups and other relevant stakeholders may recommend activities for potential inclusion in the improvement activities inventory. As part of the process, MIPS eligible clinicians or groups would be able to nominate additional activities that we could consider adding to the improvement activities inventory. The MIPS eligible clinician or group or relevant stakeholder would be able to provide an explanation of how the activity meets all the criteria we have identified. This nomination and acceptance process would, to the best extent possible, parallel the annual call for measures process already conducted by CMS for quality measures. The final improvement activities inventory for the performance year would be published in accordance with the overall MIPS rulemaking timeline. In addition, in future years we anticipate developing a process and establishing criteria to remove activities from, or add new activities to, the improvement activities performance category.

    Additionally, prospective activities that are submitted through a QCDR could also be included as part of a beta-test process that may be instrumental in determining, for future years, whether an activity should be included in the improvement activities inventory based on the specific criteria noted above. MIPS eligible clinicians or groups that use QCDRs to capture data associated with an activity, for example the frequency of administering depression screening and a follow-up plan, may be requested to voluntarily submit that same data in year 2 to begin identifying a baseline for improvement for subsequent year analysis. This is not intended to require any MIPS eligible clinician or group to submit improvement activities only via QCDR from 1 year to the next or to require the same activity from 1 year to the next. Voluntary participation, however, can help to identify how activities can contribute to improved outcomes. This data submission process will be considered part of a beta-test to: (1) Determine whether the activity is being regularly conducted and effectively executed; and (2) determine whether the activity warrants continued inclusion in the improvement activities inventory. The data would help capture baseline information to begin measuring improvement and inform the Secretary of the likelihood that the activity would result in improved outcomes. If an activity is submitted and reported by a QCDR, it would be reviewed by us for final inclusion in the improvement activities inventory the following year, even if the activity is not submitted through the future call for measures and activities process. We intend, in future performance years, to begin measuring improvement activities data points for all MIPS eligible clinicians and to award scores based on performance and improvement. We solicited comment on how best to collect such improvement activities data and factor it into future scoring under MIPS.

    We requested comments on these approaches and on any other considerations we should take into account when developing these types of approaches for future rulemaking.

    The following is a summary of the comments we received regarding improvement activities policies for identifying new improvement activities in future years of the MIPS program.

    Comment: Some commenters recommended that CMS limit participants from reporting on the same activity over several performance periods in future years.

    Other commenters recommended that CMS allow MIPS eligible clinicians to maintain improvement activities over time and opposed CMS proposals to have more stringent requirements. These commenters were concerned that by imposing limits on frequency of reporting of the same activity over several years, CMS would be encouraging practices to implement temporary instead of permanent improvements and would risk creating short-lived activities that lack consistency across time, which is not beneficial to patients and is confusing and disruptive to MIPS eligible clinicians' workflow.

    A few commenters recommended that CMS permit MIPS eligible clinicians to select from a wide range of improvement activities, allow MIPS eligible clinicians to perform them in a way that is effective and reasonable for both the MIPS eligible clinicians and their patient population, and refrain from imposing restrictive specifications regarding how MIPS eligible clinicians document and report their activities. One commenter suggested that CMS keep the broad list of improvement activities and publish additional detail through non-binding clarification or guidance, rather than in regulatory text, which may limit innovation and flexibility.

    Response: We recognize that some activities may be improved upon over time, which would support reporting on the same activity across multiple performance periods. We also note that other activities, such as providing 24/7 access, may provide limited opportunity to demonstrate improvement over time, which would minimize the value of reporting this same activity over subsequent years. We will consider this for future rulemaking. It is our intention to continue to allow MIPS eligible clinicians to select from a wide range of improvement activities, allow MIPS eligible clinicians to perform them in a way that is effective and reasonable for both the MIPS eligible clinicians and their patient population, and refrain from imposing restrictive specifications regarding how MIPS eligible clinicians document and report their activities. In addition, we intend to keep the broad list of improvement activities and publish additional detail through non-binding clarification or guidance as we are able.

    Comment: Other commenters suggested that in the future, CMS evaluate: (1) Whether improvement activities should be worth more than 15 percent of the final score; (2) whether individual activity weights should be increased; (3) whether the number and type of MIPS eligible clinicians reporting on health equity improvement activities should be changed; (4) how performance on health equity improvement activities correlates with quality performance; (5) whether improvement activities result in better outcomes; and (6) what additional improvement activities should be included in MIPS. Some commenters suggested that some activities in the improvement activities performance category require considerable additional resources and may warrant more than the 20 points proposed as the standard for “high.” Other commenters expressed concern about the proposed scoring for improvement activities, noting that the category is a new one that has not been implemented in previous programs and that activities may favor outpatient primary care.

    Response: We intend to consider these comments in future rulemaking, and will monitor MIPS eligible clinicians' performance in the improvement activities performance category carefully to inform those policy decisions. We welcome commenters' specific suggestions for additional activities or activities that may merit additional points beyond the “high” level we are adopting in the future. We refer readers to section II.E.6. of this final rule with comment period for additional discussion of the public comments that we received on the MIPS program's scoring methodology.

    Comment: A few commenters agreed with the proposal that future scores for improvement activities should be based on outcomes and improvement. The commenters believed that MIPS eligible clinicians engaged in improvement activities should submit quality measures that reflect the focus of their improvement activities and demonstrate the quality improvement achieved by engaging in those improvement activities. Other commenters suggested that we use improvement activities as a test bed for innovation to identify how activities could lead to improved outcomes and readiness for APM participation. The commenters encouraged collaboration with specialty physicians, medical societies, and other stakeholders to evaluate improvement activities continually.

    Response: We will take the commenters' suggestion that we should more closely link measures selected under the quality performance category with activities selected under the improvement activities performance category into consideration in the future. We note that for the transition year of MIPS, we believe we should provide MIPS eligible clinicians with flexibility in selecting measures and activities that are relevant to their practices.

    We intend to monitor MIPS eligible clinicians' participation in improvement activities carefully, and as the commenters suggested, we will continue examining potential relationships to quality measurement, advancing care information measures leveraging CEHRT, and APM participation readiness. We intend to continue collaborating with specialty clinicians, medical societies, and other stakeholders when conducting these evaluations.

    Comment: Some commenters opposed adding additional measurement and reporting requirements for improvement activities in future years and stated that this would increase MIPS eligible clinician burden and would not be in line with CMS's objective to simplify MIPS. The commenters suggested that CMS view the improvement activities inventory as fluid and formalize a standard process to add new activities each year.

    Response: We will take these comments into account as we consider improvement activities policy for future program years. Our intent, however, is to minimize burden on MIPS eligible clinicians. We will consider whether or not we should adopt a standard process for adding activities in the future.

    Comment: Some commenters recommended that CMS allow MIPS eligible clinicians or groups to nominate additional activities that CMS would consider adding to the improvement activities inventory. Specifically, they recommended that CMS draw upon working sessions with groups such as AHRQ, ONC, HRSA, and other federal agencies to create a patient-generated health data framework which would seek to identify best practices, gaps, and opportunities for progress in the collection and use of health data for research and care delivery.

    Response: We intend to follow a process similar to that now employed in the annual Call for Measures for changes to the improvement activities inventory. It is important to keep in mind that in developing activities, some of the developer's considerations should include whether the activity is evidence-based, applicable across service settings, and aligned with the National Quality Strategy and CMS Quality Strategy.

    Comment: Several commenters stated that, as CMS implements new improvement activities in future years, they would support a process similar to the current CMS Call for Quality Measures, and recommended that CMS clearly communicate the timelines and requirements to the public early and often to allow for the preparation of submissions.

    Response: Our intent is to proceed with this process for the transition year of MIPS.

    Comment: A few commenters expressed concern about program requirements for MIPS eligible clinicians reporting as a group and future changes in the program. The commenters also requested more direction regarding documentation to maintain for these activities in the event of an audit.

    Response: We will verify data through the data validation and audit process as necessary. MIPS eligible clinicians may retain any documentation that is consistent with the actions they took to perform each activity.

    Comment: Other commenters proposed that CMS allow, for the improvement activities performance category, that individual activities may be pursued by an individual MIPS eligible clinician for up to 3 years, but that following this period, MIPS eligible clinicians be required to select a different area of focus.

    Response: We will consider this in the future.

    Comment: One commenter supported CMS's proposal to study workflow and data capture to understand the limitations. This commenter encouraged CMS to include MIPS eligible clinicians from specialty behavioral health organizations as part of this study.

    Response: We will work with key stakeholders on workflow and data capture to better understand how to measure improvement in activities.

    Comment: Some commenters expressed support for the approach for identifying new subcategories and activities in the future and one suggested that CMS develop a template designed to ensure that proposed improvement activities are clearly measurable and also that the “value” of the improvement activity can be related to an existing improvement activity.

    Response: We will work with stakeholders to further refine this approach for future consideration.

    Comment: Another commenter suggested that, rather than looking to restrictions on the use of QCDRs for improvement activities in future years, we should include an assessment of how well an improvement activity was accomplished, including demonstration of resulting improvements in outcomes and/or patient experience from the improvement activity. This commenter believed that we should take this more positive approach to ensure improvement activities are effective rather than trying to determine whether the clinician is using a QCDR to achieve “too many” improvement activities.

    Response: We will work with the stakeholder community in future years to determine how this could best be addressed.

    Comment: One commenter was concerned that MIPS did not recognize that practices are likely to develop multi-year improvement strategies and that removal of an approved improvement activity in the annual update would undermine program stability. To address this concern, the commenter recommended that improvement activity topics identified for termination be allowed to continue for the transition year beyond initial notification to allow for sufficient notice to participating practices.

    Response: We will work with the stakeholder community in future years to best determine how to maintain the annual activity list.

    We will take the comments regarding improvement activities policies for identifying new improvement activities in future years of the MIPS program into consideration for future rulemaking.

    (c) Request for Comments on Use of QCDRs for Identification and Tracking of Future Activities

    In future years, we expect to learn more about improvement activities and how the inclusion of additional measures and activities captured by QCDRs could enhance the ability of MIPS eligible clinicians or groups to capture and report on more meaningful activities. This is especially true for specialty groups. In the future, we may propose use of QCDRs for identification and acceptance of additional measures and activities, which is in alignment with section 1848(q)(1)(E) of the Act, which encourages the use of QCDRs, as well as with section 1848(q)(2)(B)(iii)(II) of the Act related to the population management subcategory. We recognize, through the MIPS and APMs RFI comments and interviews with organizations that represent non-patient facing MIPS eligible clinicians or groups and specialty groups, that QCDRs may provide for a more diverse set of measures and activities under improvement activities than are possible to list under the current improvement activities inventory. This diverse set of measures and activities, which we can validate, affords specialty practices additional opportunity to report on more meaningful activities in future years. QCDRs may also provide the opportunity for longer-term data collection processes, which will be needed for future year submission on improvement, in addition to achievement. Use of QCDRs also supports ongoing performance feedback and allows for implementation of continuous process improvements. We believe that for future years, QCDRs would be allowed to define specific improvement activities for specialty and non-patient facing MIPS eligible clinicians or groups through the already-established QCDR approval process for measures and activities. We requested comments on this approach. We did not receive any comments regarding the use of QCDRs for identification and tracking of future activities.

    (d) Request for Comments on Activities That Will Advance the Usage of Health IT

    The use of health IT is an important aspect of care delivery processes described in many improvement activities. In this final rule with comment period we have finalized a policy to allow MIPS eligible clinicians to achieve a bonus in the advancing care information performance category when they use functions included in CEHRT to complete eligible activities from the improvement activities inventory. Please refer to section II.E.5.g. of this final rule with comment period for details on how improvement activities using CEHRT relate to the objectives and measures of the advancing care information and improvement activities performance categories.

    In addition to those functions included under the CEHRT definition, ONC certifies technology for additional emerging health IT capabilities which may also be important for enabling activities included in the improvement activities inventory, such as technology certified to capture social, psychological, and behavioral data according to the criterion at 80 FR 62631, and technology certified to generate and exchange an electronic care plan (as described at 80 FR 62648). In the future, we may consider including these emerging certified health IT capabilities as part of activities within the improvement activities inventory. By referencing these certified health IT capabilities in improvement activities, clinicians would be able to earn credit under the improvement activities performance category while gaining experience with certification criteria that may be reflected as part of the CEHRT definition at a later time. Moreover, health IT developers will be able to innovate around these relevant standards and certification criteria to better serve clinicians' needs.

    We invite comments on this approach to encourage continued innovation in health IT to support improvement activities.

    g. Advancing Care Information Performance Category

    (1) Background and Relationship to Prior Programs

    (a) Background

    The American Recovery and Reinvestment Act of 2009 (ARRA), which included the Health Information Technology for Economic and Clinical Health Act (HITECH Act), amended Titles XVIII and XIX of the Act to authorize incentive payments and Medicare payment adjustments for EPs to promote the adoption and meaningful use of CEHRT. Section 1848(o) of the Act provides the statutory basis for the Medicare incentive payments made to meaningful EHR users. Section 1848(a)(7) of the Act also establishes downward payment adjustments, beginning with CY 2015, for EPs who are not meaningful users of CEHRT for certain associated EHR reporting periods. (For a more detailed explanation of the statutory basis for the Medicare and Medicaid EHR Incentive Programs, see the July 28, 2010 Stage 1 final rule titled, “Medicare and Medicaid Programs; Electronic Health Record Incentive Program; Final Rule” (75 FR 44316 and 44317).)

    A primary policy goal of the EHR Incentive Program is to encourage and promote the adoption and use of CEHRT among Medicare and Medicaid health care providers to help drive the industry as a whole toward the use of CEHRT. As described in the final rule titled “Medicare and Medicaid Programs; Electronic Health Record Incentive Program—Stage 3 and Modifications to Meaningful Use in 2015 Through 2017” (hereinafter referred to as the “2015 EHR Incentive Programs final rule”) (80 FR 62769), the HITECH Act outlined several foundational requirements for meaningful use and for EHR technology. CMS and ONC have subsequently outlined a number of key policy goals which are reflected in the current objectives and measures of the program and the related certification requirements (80 FR 62790). Current Medicare EP performance on these key goals is varied, with EPs demonstrating high performance on some objectives while others represent a greater challenge.

    (b) MACRA Changes

    Section 1848(q)(2)(A) of the Act, as added by section 101(c) of the MACRA, includes the meaningful use of CEHRT as a performance category under the MIPS, referred to in the proposed rule and in this final rule with comment period as the advancing care information performance category, which will be reported by MIPS eligible clinicians as part of the overall MIPS program. As required by sections 1848(q)(2) and (5) of the Act, the four performance categories shall be used in determining the MIPS final score for each MIPS eligible clinician. In general, MIPS eligible clinicians will be evaluated under all four of the MIPS performance categories, including the advancing care information performance category. This includes MIPS eligible clinicians who were not previously eligible for the EHR Incentive Program incentive payments under section 1848(o) of the Act or subject to the EHR Incentive Program payment adjustments under section 1848(a)(7) of the Act, such as physician assistants, nurse practitioners, clinical nurse specialists, certified registered nurse anesthetists, and hospital-based EPs (as defined in section 1848(o)(1)(C)(ii) of the Act). Understanding that these MIPS eligible clinicians may not have prior experience with CEHRT and the objectives and measures under the EHR Incentive Program, we proposed a scoring methodology within the advancing care information performance category that provides flexibility for MIPS eligible clinicians from early adoption of CEHRT through advanced use of health IT. In the proposed rule (81 FR 28230 through 28233), we also proposed to reweight the advancing care information performance category to zero in the MIPS final score for certain hospital-based and other MIPS eligible clinicians where the measures proposed for this performance category may not be available or applicable to these types of MIPS eligible clinicians.

    (c) Considerations in Defining Advancing Care Information Performance Category

    In implementing MIPS, we intend to develop the requirements for the advancing care information performance category to continue supporting the foundational objectives of the HITECH Act, and to encourage continued progress on key uses such as health information exchange and patient engagement. These more challenging objectives are essential to leveraging CEHRT to improve care coordination and they represent the greatest potential for improvement and for significant impact on delivery system reform in the context of MIPS quality reporting.

    In developing the requirements and structure for the advancing care information performance category, we considered several approaches for establishing a framework that would naturally integrate with the other MIPS performance categories. We considered historical performance on the EHR Incentive Program objectives and measures, feedback received through public comment, and the long term goals for delivery system reform and quality improvement strategies.

    One approach we considered would be to maintain the current structure of the Medicare EHR Incentive Program and award full points for the advancing care information performance category for meeting all of the objectives and measures finalized in the 2015 EHR Incentive Programs final rule, and award zero points for failing to meet all of these requirements. This method would be consistent with the current EHR Incentive Program and is based on objectives and measures already established in rulemaking. However, we considered and dismissed this approach as it would not allow flexibility for MIPS eligible clinicians and would not allow us to effectively measure performance for MIPS eligible clinicians in the advancing care information performance category who have taken incremental steps toward the use of CEHRT, or to recognize exceptional performance for MIPS eligible clinicians who have excelled in any one area. This is particularly important as many MIPS eligible clinicians may not have had past experience relevant to the advancing care information performance category and use of EHR technology because they were not previously eligible to participate in the Medicare EHR Incentive Program. This approach also does not allow for differentiation among the objectives and measures that have high adoption and those where there is potential for continued advancement and growth.

    We subsequently considered several methods which would allow for more flexibility and provide CMS the opportunity to recognize partial or exceptional performance among MIPS eligible clinicians for the measures under the advancing care information performance category. We decided to design a framework that would allow for flexibility and multiple paths to achievement under this category while recognizing MIPS eligible clinicians' efforts at all levels. Part of this framework requires moving away from the concept of requiring a single threshold for a measure, and instead incentivizes continuous improvement, and recognizes onboarding efforts among late adopters and MIPS eligible clinicians facing continued challenges in full implementation of CEHRT in their practice.

    Below is a summary of the comments received on our overall approach to the advancing care information performance category under MIPS:

    Comment: A commenter did not support the name change, expressing concern that it is attempting to draw a distinction without a difference and is going to cause confusion. The commenter urged CMS to return to the term “meaningful use”.

    Response: We believe that the name “advancing care information” is appropriate to distinguish the MIPS performance category from meaningful use under the EHR Incentive Programs. We note that the term “meaningful use” still applies for purposes of the Medicare and Medicaid EHR Incentive Programs. The reporting requirements and scoring to demonstrate meaningful use were established in regulation under the EHR Incentive Programs and vary substantially from the requirements and scoring finalized for the advancing care information performance category in the MIPS program.

    (2) Advancing Care Information Performance Category Within MIPS

    In defining the advancing care information performance category for the MIPS, we considered stakeholder feedback and lessons learned from our experience with the Medicare EHR Incentive Program. Specifically, we considered feedback from the Stage 1 (75 FR 44313) and Stage 2 (77 FR 53967) EHR Incentive Program rules, and the 2015 EHR Incentive Programs final rule (80 FR 62769), as well as comments received from the MIPS and APMs RFI (80 FR 59102). We have learned from this feedback that clinicians desire flexibility to focus on health IT implementation that is right for their practice. We have also learned that updating software, training staff and changing practice workflows to accommodate new technology can take time, and that clinicians need time and flexibility to focus on the health IT activities that are most relevant to their patient population. Clinicians also desire consistent timelines and reporting requirements to simplify and streamline the reporting process. Recognizing this, we have worked to align the advancing care information performance category with the other MIPS performance categories, which would streamline reporting requirements, timelines and measures in an effort to reduce burden on MIPS eligible clinicians.

    The implementation of the advancing care information performance category is an important opportunity to increase clinician and patient engagement, improve the use of health IT to achieve better patient outcomes, and continue to meet the vision of enhancing the use of CEHRT as defined under the HITECH Act. In the proposed rule (81 FR 28220), we proposed substantial flexibility in how we would assess MIPS eligible clinician performance for the new advancing care information performance category. We proposed to emphasize performance in the objectives and measures that are the most critical and would lead to the most improvement in the use of health IT to advance health care quality. We intend to promote innovation so that technology can be interconnected easily and securely, and data can be accessed and directed where and when it is needed to support patient care. These objectives include Patient Electronic Access, Coordination of Care Through Patient Engagement and Health Information Exchange, which are essential to leveraging CEHRT to improve care. At the same time, we proposed to eliminate reporting on objectives and measures in which the vast majority of clinicians already achieve high performance—which would reduce burden, encourage greater participation and direct MIPS eligible clinicians' attention to higher-impact measures. Our proposal balances program participation with rewarding performance on high-impact objectives and measures, which we believe would make the overall program stronger and further the goals of the HITECH Act.

    (a) Advancing the Goals of the HITECH Act in MIPS

    Section 1848(o)(2)(A) of the Act requires that the Secretary seek to improve the use of EHRs and health care quality over time by requiring more stringent measures of meaningful use. In implementing MIPS and the advancing care information performance category, we sought to improve and encourage the use of CEHRT over time by adopting a new, more flexible scoring methodology, as discussed in the proposed rule (81 FR 28220) that would more effectively allow MIPS eligible clinicians to reach the goals of the HITECH Act, and would allow MIPS eligible clinicians to use EHR technology in a manner more relevant to their practice. This new, more flexible scoring methodology puts a greater focus on Patient Electronic Access, Coordination of Care Through Patient Engagement, and Health Information Exchange—objectives we believe are essential to leveraging CEHRT to improve care by engaging patients and furthering interoperability. This methodology would also de-emphasize objectives in which clinicians have historically achieved high performance with median performance rates of over 90 percent for the last 2 years. We believe shifting focus away from these objectives would reduce burden, encourage greater participation, and direct attention to other objectives and measures which have significant room for continued improvement. Through this flexibility, MIPS eligible clinicians would be incentivized to focus on those aspects of CEHRT that are most relevant to their practice, which we believe would lead to improvements in health care quality.

    We also sought to increase the adoption and use of CEHRT by incorporating such technology into the other MIPS performance categories. For example, in section II.E.6.a.(2)(f) of the proposed rule (81 FR 28247), we proposed to incentivize electronic reporting by awarding a bonus point for submitting quality measure data using CEHRT. Additionally, in section II.E.5.f. of the proposed rule (81 FR 28209), we aligned some of the activities under the improvement activities performance category such as Care Coordination, Beneficiary Engagement and Achieving Health Equity with a focus on enhancing the use of CEHRT. We believe this approach would strengthen the adoption and use of certified EHR systems and program participation consistent with the provisions of section 1848(o)(2)(A) of the Act.

    Below is a summary of the comments received regarding our overall approach to requirements under the advancing care information performance category:

    Comment: Many commenters noted that what we proposed is even more complicated than Stage 3 of meaningful use. Most commenters appreciated the increased flexibility. One commenter appreciated the proposal but did not believe that it went far enough, noting that there should be widespread health data interoperability throughout the clinical data ecosystem and not just between meaningful users. Many commenters did not support the retention of the all-or-nothing approach to scoring for the advancing care information performance category. Many wanted a less prescriptive approach to allow clinicians to be creative in applying technology to their own unique workflows. Some noted that clinicians should not be penalized for actions that they cannot control, such as patient actions in certain measures. One recommended that CMS focus its efforts on increasing functional interoperability between and among EHR vendors. Another commenter explained that the CMS efforts to date do not go far enough toward the attainment of widespread health data interoperability, and further stated that CMS should provide advancing care information performance category credit for activities that demonstrate a MIPS eligible clinician's use of digital clinical data to inform patient care. Many noted that this category is too similar to the existing meaningful use framework and should be further modified.

    Response: We have carefully considered and will address these comments in more detail in the following sections of this final rule with comment period as we further describe the final policies for the advancing care information performance category. We note that within the proposed requirements for the performance category, we sought to balance the new requirements under MACRA with our goal to allow greater flexibility and provide consistency for clinicians with prior experience in the Medicare and Medicaid EHR Incentive Programs. This consistency includes maintaining the definition of CEHRT (as adapted from the EHR Incentive Program) and specifications for the applicable measures. We believe this consistency will ease the transition to MIPS and allow MIPS eligible clinicians to adapt to the new program requirements quickly and with ease. We also believe this will aid EHR vendors in their development efforts for MIPS as many of the requirements are consistent with prior policy finalized for the EHR Incentive Program in previous years.

    We hope to continue to work with our stakeholders over the coming years so that we can continue to improve the framework and implementation of this performance category in order to improve health outcomes for patients across the country.

    (b) Future Considerations

    The restructuring of program requirements described in this final rule with comment period is geared toward increasing participation and EHR adoption. We believe this is the most effective way to encourage the adoption of CEHRT, and introduce new MIPS eligible clinicians to the use of certified EHR technology and health IT overall.

    We will continue to review and evaluate MIPS eligible clinician performance in the advancing care information performance category, and will consider evolutions in health IT over time as it relates to this performance category. Based on our ongoing evaluation, we expect to adopt changes to the scoring methodology for the advancing care information performance category to ensure the efficacy of the program and to ensure increased value for MIPS eligible clinicians and the Medicare Program, as well as to adopt more stringent measures of meaningful use as required by section 1848(o)(2)(A) of the Act.

    Potential changes may include establishing benchmarks for MIPS eligible clinician performance on the advancing care information performance category measures, and using these benchmarks as a baseline or threshold for future reporting. This may include scoring for performance improvement over time and the potential to reevaluate the efficacy of measures based on these analyses. For example, in future years we may use a MIPS eligible clinician's prior performance on the advancing care information performance category measures as a comparison for the subsequent year's performance category score, or compare a MIPS eligible clinician's performance category score to peer groups to measure their improvement and determine a performance category score based on improvement over those benchmarks or peer group comparisons. This type of approach would drive continuous improvement over time through the adoption of more stringent performance standards for the advancing care information performance category measures.
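    As a purely hypothetical illustration of the kind of improvement-based comparison described above (no such methodology is adopted in this final rule with comment period), the following Python sketch compares a clinician's current advancing care information performance category score against their own prior-year score or a peer-group benchmark; the function name, parameters, and crediting formula are all assumptions, not policy.

        def improvement_adjusted_score(current_score, prior_year_score=None, peer_benchmark=None):
            # Hypothetical sketch: use the clinician's own prior-year score when
            # available, otherwise a peer-group benchmark, as the baseline.
            baseline = prior_year_score if prior_year_score is not None else peer_benchmark
            if baseline is None:
                # No baseline available (for example, a first-year participant):
                # score on achievement alone.
                return current_score
            # Credit positive change over the baseline; how much credit to award
            # would be a future policy decision.
            improvement_credit = max(0.0, current_score - baseline)
            return current_score + improvement_credit

        # Example: a clinician who improves from 60 to 70 would receive credit for
        # the 10-point gain under this illustrative formula.
        print(improvement_adjusted_score(70.0, prior_year_score=60.0))  # 80.0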

    We are committed to continual review, improvement and increased stringency of the advancing care information performance category measures as directed under section 1848(o)(2)(A) of the Act both for the purposes of ensuring program efficacy, as well as ensuring value for the MIPS eligible clinicians reporting the advancing care information performance category measures. We solicited comment on further methods to increase the stringency of the advancing care information performance category measures in the future.

    We additionally solicited comment on the concept of a holistic approach to health IT—one that we believe is similar to the concept of outcome measures in the quality performance category in the sense that MIPS eligible clinicians could potentially be measured more directly on how the use of health IT contributes to the overall health of their patients. Under this concept, MIPS eligible clinicians would be able to track certain use cases or patient outcomes to tie patient health outcomes with the use of health IT.

    We believe this approach would allow us to directly link health IT adoption and use to patient outcomes, moving MIPS beyond the measurement of EHR adoption and process measurement and into a more patient-focused health IT program. From comments and feedback we have received from the health care provider community, we understand that this type of approach would be a welcome enhancement to the measurement of health IT. At this time, we recognize that technology and measurement for this type of program is currently unavailable. We solicited comment on what this type of measurement would look like under MIPS, including the type of measures that would be needed within the advancing care information performance category and the other performance categories to measure this type of outcome, what functionalities with CEHRT would be needed, and how such an approach could be implemented.

    The following is a summary of the comments we received:

    Comment: Several commenters expressed an interest in advancing the use of certified health IT in a clinical setting. Some commenters suggested combining advancing care information performance category measures and improvement activities in the improvement activities performance category, though they cautioned that improvement activities should not require the use of CEHRT; rather, CEHRT should be optional for improvement activities and its use should allow MIPS eligible clinicians to earn credit in the advancing care information performance category. Some commenters recommended that CMS award credit in both the advancing care information performance category and improvement activities performance category for overlapping activities.

    Response: We agree that tying applicable improvement activities under the improvement activities performance category to the objectives and measures under the advancing care information performance category would reduce reporting burden for MIPS eligible clinicians. Our first step toward that goal of reducing reporting burden, and toward a more holistic approach to EHR measurement is to award a bonus score in the advancing care information performance category if a MIPS eligible clinician attests to completing certain improvement activities using CEHRT functionality. We believe tying these performance categories encourages MIPS eligible clinicians to use their CEHRT products not only for documenting patient care, but also for improving their clinical practices by using their CEHRT in a meaningful manner that supports clinical practice improvement. The objectives and measures of the advancing care information performance category measure specific functions of CEHRT which are the building blocks for advanced use of health IT. In the improvement activities performance category, these same functions may be tied to improvement activities which focus on a specific improvement goal or outcome for continuous improvement in patient care.

    In Table 8, we identify a set of improvement activities from the improvement activities performance category that can be tied to the objectives, measures, and CEHRT functions of the advancing care information performance category and would thus qualify for the bonus in the advancing care information performance category. For further explanation of these improvement activities, we refer readers to the discussion in section II.E.5.f. of this final rule with comment period. While we note that these activities can be greatly enhanced through the use of CEHRT, we are not suggesting that these activities require the use of CEHRT for the purposes of reporting in the improvement activities performance category. Rather, we are suggesting that the use of CEHRT in carrying out these activities can further the outcomes of clinical practice improvement, and thus, we are awarding a bonus score in the advancing care information performance category if a MIPS eligible clinician can attest to using the associated CEHRT functions when carrying out the activity. A MIPS eligible clinician attesting to using CEHRT for improvement activities would use the same certification criteria in completing the improvement activity as they would for the measures under advancing care information as listed in Table 8; for the 2017 performance period, this may include 2014 or 2015 Edition CEHRT. For example, for the first improvement activity in Table 8, in which a MIPS eligible clinician would provide 24/7 access for advice about urgent and emergent care, a MIPS eligible clinician may accomplish this through expanded practice hours, use of alternatives to increase access to the care team such as e-visits and phone visits, and/or provision of same-day or next-day access. The Secure Messaging measure under the advancing care information performance category requires that a secure message was sent using the electronic messaging function of CEHRT to the patient (or the patient-authorized representative), or in response to a secure message sent by the patient (or the patient-authorized representative). If secure messaging functionality is used to provide 24/7 access for advice about urgent and emergent care (for example, sending or responding to secure messages outside business hours), this would meet the requirement of using CEHRT to complete the improvement activity and would qualify for the advancing care information bonus score.

    [Table 8—Improvement Activities Eligible for the Advancing Care Information Bonus (published in the Federal Register as graphics ER04NO16.000 through ER04NO16.006)]

    After consideration of the comments, we will award a 10 percent bonus in the advancing care information performance category if a MIPS eligible clinician attests to completing at least one of the improvement activities specified in Table 8 using CEHRT. We note that 10 percent is the maximum bonus a MIPS eligible clinician will receive whether they attest to using CEHRT for one or more of the activities listed in the table. This bonus is intended to support progression toward holistic health IT use and measurement; attesting to even one improvement activity demonstrates that the MIPS eligible clinician is working toward this holistic approach to the use of their CEHRT. We additionally note that the weight of the improvement activity has no bearing on the bonus awarded in the advancing care information performance category.
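    To illustrate how this finalized bonus operates, the following minimal Python sketch computes the advancing care information bonus from a clinician's attestations. It is an illustration only, not part of the regulation; the function and variable names are hypothetical, and the sketch assumes attestations are supplied as a simple list of Table 8 activities completed using CEHRT.

        def advancing_care_information_bonus(table8_activities_attested_with_cehrt):
            # Hypothetical helper: returns the bonus, in percentage points, added to
            # the advancing care information performance category score.
            # The bonus is a flat 10 percent if the clinician attests to completing
            # at least one Table 8 improvement activity using CEHRT; attesting to
            # additional activities does not increase it, and the weight of the
            # improvement activity has no bearing on the bonus.
            return 10 if table8_activities_attested_with_cehrt else 0

        # Example: attesting to one qualifying activity earns the same 10 percent
        # bonus as attesting to several.
        print(advancing_care_information_bonus(["Provide 24/7 access to the care team"]))  # 10
        print(advancing_care_information_bonus([]))                                         # 0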

    We are seeking comment on this integration of the improvement activities with the advancing care information performance category, and other ways to further the advancement of health IT measurement.

    (3) Clinical Quality Measurement

    Section 1848(o)(2)(A)(iii) of the Act requires the reporting of CQMs using CEHRT. Section 1848(q)(5)(B)(ii)(II) of the Act provides that under the methodology for assessing the total performance of each MIPS eligible clinician, the Secretary shall, for a performance period for a year for which a MIPS eligible clinician reports applicable measures under the quality performance category through the use of CEHRT, treat the MIPS eligible clinician as satisfying the CQM reporting requirement under section 1848(o)(2)(A)(iii) of the Act for such year. We note that in the context and overall structure of MIPS, the quality performance category allows for a greater focus on patient-centered measurement, and multiple pathways for MIPS eligible clinicians to report their quality measure data. Therefore, we did not propose separate requirements for CQM reporting within the advancing care information performance category and instead would require submission of quality data for measures specified for the quality performance category, in which we encourage reporting of CQMs with data captured in CEHRT. We refer readers to section II.E.5.a. of the proposed rule (81 FR 28184-28196) for discussion of reporting of CQMs with data captured in CEHRT under the quality performance category.

    Below is a summary of the comments received regarding CQM reporting for the advancing care information category:

    Comment: Many commenters supported our proposal not to include the submission of CQMs in this category. Several noted that this elimination will reduce burden for MIPS eligible clinicians, streamline reporting and reduce overlap. Others supported the elimination of duplicative reporting that existed under PQRS and the EHR Incentive Programs.

    Response: We appreciate commenters' support and note that the submission of CQMs is a requirement for the Medicare EHR Incentive Program. For the advancing care information performance category, we will require submission of quality data for measures specified for the quality performance category, in which we encourage reporting of CQMs with data captured in CEHRT. This approach helps to avoid unnecessary overlap and duplicative reporting. Therefore, we have not included separate requirements for clinical quality measurement in the advancing care information performance category, and direct readers to the quality performance category discussed in section II.E.5.b. of this final rule with comment period for information on clinical quality measurement.

    (4) Performance Period Definition for Advancing Care Information Performance Category

    In the Medicare and Medicaid Programs; Electronic Health Record Incentive Program—Stage 3 proposed rule, we proposed to eliminate the 90-day EHR reporting period beginning in 2017 for EPs who had not previously demonstrated meaningful use, with a limited exception for the Medicaid EHR Incentive Program (80 FR 16739-16740, 16774-16775). We received many comments from respondents stating their preference for maintaining the 90-day EHR reporting period to allow first-time participants to avoid payment adjustments. In addition, commenters indicated that the 90-day time period reduced administrative burden and allowed for needed time to adapt their EHRs to ensure they could achieve program objectives. As a result, we did not finalize our proposal and established a 90-day EHR reporting period for all EPs in 2015 and for new participants in 2016, as well as a 90-day EHR reporting period for new participants in 2015, 2016, and 2017 with regard to the payment adjustments (80 FR 62777-62779; 62904-62906). In addition, we have proposed a 90-day EHR reporting period in 2016 for the EHR Incentive Programs in a recent proposed rule addressing Calendar Year (CY) 2017 changes to the Hospital Outpatient Prospective Payment System (OPPS) and Ambulatory Surgical Center (ASC) payment system (81 FR 45753).

    Moving forward, the implementation of MIPS creates a critical opportunity to align performance periods to ensure that quality, improvement activities, cost, and the advancing care information performance categories are all measured and scored based on the same period of time. We believe this would lower reporting burden, focus clinician quality improvement efforts and align administrative actions so that MIPS eligible clinicians can use common systems and reporting pathways.

    Under MIPS, we proposed to align the performance period for the advancing care information performance category with the proposed MIPS performance period of one full calendar year; the intent of the proposal was to reduce reporting burden and streamline requirements so that MIPS eligible clinicians and third party intermediaries, such as registries and QCDRs, would have a common timeline for data submission for all performance categories (81 FR 28179-28181). Therefore, we noted there would not be a separate 90-day performance period for the advancing care information performance category and MIPS eligible clinicians would need to submit data based on a performance period starting January 1, 2017, and ending December 31, 2017, for the first year of MIPS. We also stated that MIPS eligible clinicians that only have data for a portion of the year can still submit data, be assessed and be scored for the advancing care information performance category (81 FR 28179-28181). Under that proposal, MIPS eligible clinicians would need to possess CEHRT and report on the objectives and measures (without meeting any thresholds) during the calendar year performance period to achieve the advancing care information performance category base score. Finally, we stated that MIPS eligible clinicians would be required to submit all of the data they have available for the performance period, even if the time period they have data for is less than one full calendar year.

    The following is a summary of the comments we received regarding our advancing care information performance period proposal.

    Comment: The majority of commenters did not support our proposal for a performance period of one full calendar year. Instead, they overwhelmingly recommended a 90-day performance period in 2017. Commenters noted the need for time and resources to understand and adjust to the new MIPS program. Others suggested that 90 days would give MIPS eligible clinicians flexibility to acquire and implement health IT products. A commenter noted that a shorter performance period would enable MIPS eligible clinicians to adopt innovative uses of technology as it would permit them to test new health IT solutions. Additionally, with the final rule with comment period not expected until late 2016, commenters noted there would not be sufficient time to review and understand the rule and begin data collection on January 1, 2017.

    Other commenters noted that MIPS eligible clinicians must perform improvement activities for the improvement activities performance category for at least a 90-day performance period, and suggested adopting the same for the advancing care information performance category, as it would create alignment. Some commenters requested a performance period of 90 days for the first several years of the program. A few recommended a 90-day performance period every time a new edition of CEHRT is required. Others suggested partial year reporting or reporting for a quarter. One recommended that solo practitioners report for 60 days. We note that only a few commenters supported our proposal.

    Response: We understand the challenges of a full-year performance period. As discussed in the proposed rule (81 FR 28179 through 28181), MIPS eligible clinicians that only have data for a portion of the year can still submit data, be assessed and be scored for the advancing care information performance category, and thus would not need to report for one full year; rather, they could report whatever data they had available, even if that data represented less than a full-year period.

    Additionally, we understand the commenters' concerns and rationale for requesting a 90-day performance period. As discussed in section II.E.4. of this final rule with comment period, for the first performance period of CY 2017, we will accept a minimum of 90 days of data within CY 2017, although we greatly encourage MIPS eligible clinicians to submit data for the full year performance period. Also in recognition of the switch from CEHRT certified to the 2014 Edition to CEHRT certified to the 2015 Edition, for the 2018 performance period we will also accept a minimum of 90 days of data within CY 2018. We refer readers to section II.E.4. of this final rule with comment period for further discussion about the MIPS performance period and the 90-day minimum.

    Comment: One commenter encouraged CMS to extend the transition timeframe to performance periods under MIPS in 2017 and 2018. They indicated that their vendors struggle to provide budgetary estimates needed to plan staff and financial resources due to the lack of clarity on what would be required for the MIPS program.

    Response: We recognize that vendors will require varying levels of effort to transition their technology to the MIPS reporting requirements. We note that our proposal to adopt substantively the same definition of CEHRT for the 2015 Edition under MIPS that was adopted in the 2015 EHR Incentive Programs final rule was intended to provide consistency for MIPS eligible clinicians, as well as to allow EHR vendors to begin development based on the specifications finalized in October of 2015 and released by ONC for testing beginning in 2016 unimpeded by the timeline related to any rulemaking for the MIPS program. This would allow vendors to work toward certification on a longer timeline and allow MIPS eligible clinicians to adopt and implement the technology in preparation for the performance period in 2018. The MIPS performance period in 2017 will serve as a transition year for MIPS eligible clinicians, vendors, and other parties supporting MIPS eligible clinicians. Further, in section II.E.5.a. of this final rule with comment period, we have established multiple reporting mechanisms to allow MIPS eligible clinicians to report their advancing care information data in the event that their vendor is unable to support new submission requirements. We are adopting for MIPS the 2017 Advancing Care Information Transition objectives and measures (referred to in the proposed rule as Modified Stage 2 objectives and measures) and Advancing Care Information objectives and measures (referred to in the proposed rule as adapted from the Stage 3 objectives and measures) and allowing MIPS eligible clinicians and groups to use technology certified to either the 2014 Edition or the 2015 Edition, or a combination of the two editions, to support their selection of objectives and measures for 2017. We intend this consistency with prior programs to help ease the transition and reduce the development work needed to transition to MIPS. Finally, we will accept a minimum of any consecutive 90 days in the 2018 performance period for the advancing care information performance category to support eligible clinicians and groups as they transition to technology certified to the 2015 Edition for use in 2018. For these reasons, we believe a 1-year transition during the 2017 MIPS performance period is sufficient.

    After consideration of the public comments received, we are finalizing our proposal to align the performance period for the advancing care information performance category with the MIPS performance period of one full calendar year. For the first performance period of MIPS (CY 2017), we will accept a minimum of 90 consecutive days of data in CY 2017; however, we encourage MIPS eligible clinicians to report data for the full year performance period. For the second performance period of MIPS (CY 2018), we will accept a minimum of 90 consecutive days of data in CY 2018; however, we encourage MIPS eligible clinicians to report data for the full year performance period. We refer readers to section II.E.4. of this final rule with comment period for further discussion of the MIPS performance period.
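    As an illustration of the 90-day minimum only (the regulation text, not this sketch, is controlling), the following Python sketch checks whether a continuous span of submitted data covers at least 90 consecutive days within the calendar year performance period; the function and variable names are hypothetical.

        from datetime import date

        def meets_minimum_performance_period(start, end, performance_year):
            # Hypothetical check: the span must fall entirely within the calendar
            # year performance period and cover at least 90 consecutive days,
            # counting both the start and end dates.
            year_start = date(performance_year, 1, 1)
            year_end = date(performance_year, 12, 31)
            within_year = year_start <= start and end <= year_end
            days = (end - start).days + 1
            return within_year and days >= 90

        # Example: October 3 through December 31, 2017 is exactly 90 days within
        # CY 2017, so it meets the minimum; a full-year span also qualifies.
        print(meets_minimum_performance_period(date(2017, 10, 3), date(2017, 12, 31), 2017))  # True
        print(meets_minimum_performance_period(date(2017, 1, 1), date(2017, 12, 31), 2017))   # True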

    (5) Advancing Care Information Performance Category Data Submission and Collection

    (a) Definition of Meaningful EHR User and Certification Requirements

    In the 2015 EHR Incentive Programs final rule (80 FR 62873), we outlined the requirements for EPs using CEHRT in 2017 for the Medicare and Medicaid EHR Incentive Programs as it relates to the objectives and measures they select to report. In the proposed rule, we proposed to adopt a definition of CEHRT at § 414.1305 for MIPS eligible clinicians that is based on the definition that applies in the EHR Incentive Programs under § 495.4.

    We proposed that for 2017, the first MIPS performance period, MIPS eligible clinicians would be able to use EHR technology certified to either the 2014 or 2015 Edition certification criteria as follows (an illustrative sketch of this edition-based logic appears after the 2018 proposal below):

    • A MIPS eligible clinician who only has technology certified to the 2015 Edition may choose to report: (1) On the objectives and measures specified for the advancing care information performance category in section II.E.5.g.(7) of the proposed rule (81 FR 28221 through 28223), which correlate to Stage 3 requirements; or (2) on the alternate objectives and measures specified for the advancing care information performance category in section II.E.5.g.(7) of the proposed rule (81 FR 28223 and 28224), which correlate to modified Stage 2 requirements.

    • A MIPS eligible clinician who has technology certified to a combination of 2015 Edition and 2014 Edition may choose to report: (1) On the objectives and measures specified for the advancing care information performance category in section II.E.5.g.(7) of the proposed rule (81 FR 28221 through 28223), which correlate to Stage 3; or (2) on the alternate objectives and measures specified for the advancing care information performance category as described in section II.E.5.g.(7) of the proposed rule (81 FR 28223 and 28224), which correlate to modified Stage 2, if they have the appropriate mix of technologies to support each measure selected.

    • A MIPS eligible clinician who only has technology certified to the 2014 Edition would not be able to report on any of the measures specified for the advancing care information performance category described in section II.E.5.g.(7) of the proposed rule (81 FR 28221 through 28223) that correlate to a Stage 3 measure that requires the support of technology certified to the 2015 Edition. These MIPS eligible clinicians would be required to report on the alternate objectives and measures specified for the advancing care information performance category as described in section II.E.5.g.(7) of the proposed rule (81 FR 28223 and 28224), which correlate to modified Stage 2 objectives and measures.

    We proposed that, beginning with the performance period in 2018, MIPS eligible clinicians:

    • Must only use technology certified to the 2015 Edition to meet the objectives and measures specified for the advancing care information performance category in section II.E.5.g.(7) of the proposed rule (81 FR 28222 and 28223), which correlate to Stage 3.
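    The following Python sketch summarizes, for illustration only, the edition-based reporting options proposed above. The function signature and set names are assumptions: “ACI” stands for the advancing care information objectives and measures that correlate to Stage 3, and “ACI transition” for the alternate objectives and measures that correlate to modified Stage 2.

        def reportable_measure_sets(performance_year, has_2015_edition, has_2014_edition):
            # Hypothetical summary of the proposal: which objective and measure sets
            # a MIPS eligible clinician could report, based on the edition(s) of
            # certified EHR technology possessed.
            if performance_year >= 2018:
                # Proposed for 2018 and later: 2015 Edition only, reporting the
                # objectives and measures that correlate to Stage 3.
                return {"ACI"} if has_2015_edition else set()
            if has_2015_edition:
                # For 2017, 2015 Edition alone, or combined with the 2014 Edition,
                # supports either set (for a combination, only if the mix of
                # technologies supports each measure selected).
                return {"ACI", "ACI transition"}
            if has_2014_edition:
                # 2014 Edition alone cannot support Stage 3-correlated measures.
                return {"ACI transition"}
            return set()

        # Example: in 2017, a clinician with only 2014 Edition technology reports the
        # transition set, while one with 2015 Edition technology may choose either.
        print(reportable_measure_sets(2017, has_2015_edition=False, has_2014_edition=True))
        print(reportable_measure_sets(2017, has_2015_edition=True, has_2014_edition=False))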

    We welcomed comments on the proposals, which were intended to maintain consistency across MIPS, the Medicare EHR Incentive Program and the Medicaid EHR Incentive Program.

    Finally, we proposed to define at § 414.1305 a meaningful EHR user under MIPS as a MIPS eligible clinician who possesses CEHRT, uses the functionality of CEHRT, and reports on applicable objectives and measures specified for the advancing care information performance category for a performance period in the form and manner specified by CMS.

    The following is a summary of the comments we received regarding our proposal for EHR certification requirements.

    Comment: Most commenters supported the proposal to allow MIPS eligible clinicians to use technology certified to either the 2014 or 2015 Edition for the performance period in 2017. Many commenters urged CMS to allow MIPS eligible clinicians to continue to use EHR technology certified to either the 2014 or 2015 Edition for the performance period in 2018 and beyond, citing the time required for health IT development and certification and concerns about MIPS eligible clinician readiness should technology certified to the 2015 Edition not be available in time for the performance period or reporting timeframe. A few commenters suggested that flexibility in the form of a hardship exception to MIPS reporting be offered to accommodate MIPS eligible clinicians who are unable to implement EHR technology certified to the 2015 Edition in time for the 2018 performance period. Other commenters found the requirement to use EHR technology certified to the 2015 Edition in 2018 unacceptable. Commenters noted that, as of the comment due date, no products were certified to the 2015 Edition and recommended that we allow the use of products certified to the 2014 Edition through 2020. Some commenters were also concerned that the small number of products certified to the 2015 Edition would require MIPS eligible clinicians to find alternatives to meeting the advancing care information requirements and possibly prevent those in APMs from utilizing the benefits of the new technology.

    Response: We appreciate the comments and feedback we received, and the support of the proposal for performance periods in 2017 to allow the use of technology certified to the 2014 or 2015 Edition or a combination of the two. We believe this will allow MIPS eligible clinicians the flexibility to transition to EHR technology certified to the 2015 Edition for use for performance periods in 2018 in a manner that works best for their systems, workflows, and clinical needs. We additionally understand the concerns raised by commenters regarding the timeline to implement the 2015 Edition in time for use for performance periods in 2018. We note the requirements for technology certified to the 2015 Edition were established in October 2015 in ONC's final rule titled 2015 Edition Health Information Technology (Health IT) Certification Criteria, 2015 Edition Base Electronic Health Record (EHR) Definition, and ONC Health IT Certification Program Modifications (80 FR 62602-62759). The 2015 EHR Incentive Programs final rule adopted the requirement that EPs, eligible hospitals, and CAHs use technology certified to the 2015 Edition beginning in 2018. We intend to maintain continuity for MIPS eligible clinicians and health IT vendors who may already have CEHRT or who have begun planning for a transition to technology certified to the 2015 Edition based on the definition of CEHRT finalized for the EHR Incentive Programs in the 2015 EHR Incentive Programs final rule (80 FR 62871 through 62889). Therefore, there are no new certification requirements in the definition we are finalizing for MIPS eligible clinicians participating in the advancing care information performance category of MIPS at § 414.1305 in order to maintain consistency with the EHR Incentive Programs CEHRT definition at 42 CFR 495.4. Our proposal to adopt a definition of CEHRT substantively similar to the one finalized in the 2015 EHR Incentive Programs final rule was intended to provide consistency for MIPS eligible clinicians and also to allow EHR vendors to begin development based on the specifications finalized in October of 2015 and released by ONC for testing beginning in 2016, unimpeded by the timeline related to any rulemaking for the MIPS program. This allows vendors to work toward certification on a longer timeline and allows MIPS eligible clinicians to adopt and implement the technology in preparation for the performance period in 2018. In addition, in order to allow eligible clinicians and groups adequate time to transition to EHR technology certified to the 2015 Edition for use in CY 2018, we will accept a minimum of 90 consecutive days of data within the CY 2018 performance period for the advancing care information performance category. In partnership with ONC, we are monitoring the development and certification process for health IT products certified to the 2015 Edition and will continue to gauge MIPS eligible clinician readiness for the 2018 performance period. At this time, we believe it is appropriate to require the use of EHR technology certified to the 2015 Edition for the performance period in 2018 and encourage MIPS eligible clinicians to work with their EHR vendors in the coming months to prepare for the transition to the 2015 Edition for the performance period in CY 2018.

    Comment: One commenter suggested that the CEHRT definition be expanded to include requirements beyond those finalized for meeting the advancing care information performance category, and commenters noted that vendors other than EHR vendors, including Health Information Exchanges (HIEs) and Health Information Service Providers (HISPs), could support the criteria listed in the proposed rule.

    Response: The definition of CEHRT does contain elements that are not included in the advancing care information performance category. As noted in the proposed rule (81 FR 28218-28219), and consistent with prior EHR Incentive Program policy, removing a measure from the reporting requirements does not remove the functions supporting that measure from the definition of CEHRT unless we make corresponding changes to that definition. Therefore, a MIPS eligible clinician must implement that function in their practice in order to have their system meet the technological specifications required for participation in the program. For example, in the 2015 EHR Incentive Programs final rule (80 FR 62786), we noted that the Stage 1 “Record Demographics” measure was designated as topped out and no longer required for reporting, but CEHRT must still capture and record demographics as structured data using the appropriate standards. For MIPS, we did not propose to include the CPOE and CDS objectives and measures in the advancing care information performance category although the technology functions supporting these measures were included in our proposed definition of CEHRT for MIPS.

    Comment: Some commenters were encouraged by CMS' commitment to collaborate with ONC on the 2015 Edition CEHRT requirements for MIPS to align with the evolving standards that support health IT capabilities.

    Response: We appreciate these comments and will continue to collaborate with ONC on the alignment of MIPS requirements and CEHRT in future rulemaking.

    Comment: A few commenters requested that the definition of CEHRT incorporate the roles of non-physician practitioners, including Nurse Practitioners (NPs), Physician Assistants (PAs), Certified Registered Nurse Anesthetists (CRNAs) and Clinical Nurse Specialists (CNSs). They noted that current EHR vendor software usually does not allow non-physician practitioners to make entries or be identified. The commenters suggested that CEHRT vendors should be required to include provisions so that non-physician practitioners can also utilize the CEHRT and meet MIPS requirements.

    Response: The requirements for the use of CEHRT do not specify the type of provider or clinician that can enter data, nor do ONC's certification criteria in any way limit the entry of data by non-physician practitioners. In some states, the MIPS eligible clinicians mentioned by the commenters may already be participating in the Medicaid EHR Incentive Programs as EPs and using CEHRT to support their clinical practice. In addition, many practices across a wide range of settings where EPs have participated in the Medicare EHR Incentive Programs have developed workflows that meet their practice needs and accommodate the various staff beyond the eligible clinician who enter data. We encourage MIPS eligible clinicians and groups to work with their vendor, and within their own practice and clinical workflows, to identify and establish best practices for data capture and data mapping to support their unique practice needs.

    Comment: Some commenters recommended that CMS consider ways to measure possible clinical workflow disruptions caused by health IT (EHRs). The commenters suggested that CMS use Medicare beneficiary surveys, focus groups, patient reported outcome measures, and the CAHPS for MIPS survey, and incorporate those results when designing health IT specifications and regulations to be used across settings.

    Response: We appreciate the feedback and will take this suggestion into consideration in the future. We encourage MIPS eligible clinicians to work with their EHR vendor to improve the clinical workflow in a way that best suits their individual practice needs.

    Comment: Other commenters noted that while patient access to data is important, MIPS eligible clinicians also need interoperable data from a variety of sources to integrate seamlessly into their workflow. The commenters believe that third party applications will play a major role in satisfying this need to ensure data “quality” so that physicians get the most relevant data in a usable format, when and where they need it.

    Response: CMS and ONC agree with the comments that interoperability and the seamless integration of data and systems into clinical workflows is essential to improving health care quality. For this reason, the 2015 Edition certification criteria include testing and certification for API functionality as a certified health IT module (80 FR 62601-62759), as well as criteria related to ensuring the ability to receive and consume electronic summary of care records from external sources into the provider's EHR and to developing a path for bi-directional exchange of immunization data with public health registries.

    After consideration of the comments we received, we are finalizing our proposal regarding EHR certification requirements at § 414.1305 as proposed and encourage MIPS eligible clinicians to prepare for the migration to the 2015 Edition of CEHRT in 2018. In 2017, MIPS eligible clinicians may use EHR technology certified to the 2014 Edition or the 2015 Edition or a combination of the two. We note that a MIPS eligible clinician who only has technology certified to the 2014 Edition would not be able to report certain measures specified for the advancing care information performance category that correlate to a Stage 3 measure for which there was no Stage 2 equivalent. These MIPS eligible clinicians may instead report the objectives and measures specified for the advancing care information performance category which correlate to Modified Stage 2 objectives and measures. In 2018, MIPS eligible clinicians must use EHR technology certified to the 2015 Edition.

    The following is a summary of the comments we received regarding our proposal for defining a meaningful EHR user under MIPS.

    Comment: Many commenters expressed an overall desire to maintain a moderate to high standard and category weight for the distinction of meaningful EHR user. These commenters noted that the definition of meaningful EHR user will have an important impact on health IT adoption and that reducing the stringency or lowering the advancing care information performance category weight in the MIPS final score could hinder progress toward robust, person-centered use of health IT across the health care industry.

    Response: We agree that defining a meaningful EHR user is critical for all of the reasons that the commenters raise; it is an important piece of health IT adoption and promoting interoperability. We seek to balance this critical aspect of EHR reporting with our desire to increase widespread adoption of health IT and clinical standards among MIPS eligible clinicians. We believe our final policies will encourage more widespread adoption and use of health IT in a practice setting. We are also dedicated to increasing the stringency of the measures specified for the advancing care information performance category in future years of the MIPS program to further the advancement of health IT use.

    After consideration of the public comments we received, we are finalizing our proposal to define a meaningful EHR user for MIPS under § 414.1305 as a MIPS eligible clinician who possesses CEHRT, uses the functionality of CEHRT, and reports on applicable objectives and measures specified for the advancing care information performance category for a performance period in the form and manner specified by CMS.

    (b) Method of Data Submission

    Under the Medicare EHR Incentive Program, EPs attest to the numerators and denominators for certain objectives and measures through a CMS Web site. For the purpose of reporting advancing care information performance category objectives and measures under the MIPS, we proposed at § 414.1325 to allow MIPS eligible clinicians to submit advancing care information performance category data through qualified registry, EHR, QCDR, attestation and CMS Web Interface submission methods. Regardless of data submission method, all MIPS eligible clinicians must follow the reporting requirements for the objectives and measures to meet the requirements of the advancing care information performance category.

    We note that under this proposal, 2017 would be the first year that EHRs (through the QRDA submission method), QCDRs and qualified registries would be able to submit EHR Incentive Program objectives and measures (as adopted for the advancing care information performance category) to us, and the first time these data would be reported through the CMS Web Interface. We recognize that some health IT vendors, QCDRs and qualified registries may not be able to conduct this type of data submission for the 2017 performance period given the development efforts associated with this data submission capability. However, we are including these data submission mechanisms in 2017 to support early adopters and to signal our longer-term commitment to working with organizations that are agile, effective and can create less burdensome data submission mechanisms for MIPS eligible clinicians. We believe the proposed data submission methods could reduce reporting burden by synchronizing reporting requirements, data submission, and systems, allowing for greater access and ease in submitting data throughout the MIPS program. We note that specific details about the form and manner for data submission will be addressed by CMS in the future.

    The following is a summary of the comments we received regarding our proposal to allow for multiple methods for data submission for the advancing care information performance category.

    Comment: The majority of commenters supported the proposed data submission approach to allow MIPS eligible clinicians to submit data for the advancing care information performance category through multiple submission methods, including, for example, attestation, qualified registries, QCDRs, EHRs and the CMS Web Interface. Many agreed that the proposal alleviates the need for individual MIPS eligible clinicians and groups to use a separate reporting mechanism to report data for different performance categories.

    Response: We appreciate the supportive comments and reiterate that our goals include reducing the reporting burden, aligning reporting requirements across MIPS performance categories, and supporting efficient data submission mechanisms.

    Comment: Some commenters expressed concern that many third party data submission entities do not have the necessary data submission functionality and will not have enough time to develop, distribute and adopt the needed functionality for a performance period in 2017. One commenter requested that CMS provide detailed guidance to vendors and QCDRs as they implement data submission functionality. Another commenter expressed concern about the potential for vendors and developers of QCDRs and registries to fail to fulfill the technical requirements for data submission and advised CMS to finalize a policy indicating that MIPS eligible clinicians would not be penalized for failure of data submission due to vendor issues. One commenter suggested offering bonus points for the use of QCDRs or registry adoption to recognize the investment needed to participate.

    Response: We appreciate the concerns raised by commenters and note that we intend to provide detailed guidance for EHR vendors, as well as third party data intermediaries who submit data on behalf of MIPS eligible clinicians, to help them be successful in data submission. However, we acknowledge that some EHR, QCDR and registry vendors may not be able to support data submission for the advancing care information performance category for 2017 due to the time needed to develop the technology and functionality to collect and submit these data. For this reason, as discussed in section II.E.5.a. of this final rule with comment period, we offer MIPS eligible clinicians several reporting mechanisms from which to choose. While we believe that, in the long term, it is more convenient for MIPS eligible clinicians to submit data one time for all performance categories, we acknowledge that this may not be possible in the transition year for the aforementioned reasons. Therefore, we offer the option of attestation for those MIPS eligible clinicians whose CEHRT, QCDR or registry is not prepared to support advancing care information performance category data submission in 2017. For further discussion of MIPS submission methods, we refer readers to section II.E.5.a. of this final rule with comment period.

    Comment: One commenter requested that CMS provide greater flexibility in the submission standards set forth for health IT vendors, particularly in the transition year of MIPS, including the ability to submit data via QCDR XML. The commenter stated that QCDR vendors often experience issues submitting data using the uniform standards in QRDA implementation guides and that many QRDA variables that are clinical in nature do not easily map to the variables in CEHRT.

    Response: We note that our proposal does allow for submission of the advancing care information performance category data via QCDR, as well as registry, CEHRT, CMS Web Interface and attestation. We believe this flexibility allows MIPS eligible clinicians the ability to submit through their chosen submission mechanism that is most appropriate for their practice.

    Comment: One commenter believed the attestation process is cumbersome and expensive for large groups and suggested that CMS develop a process that will allow larger groups to attest as a group.

    Response: Because EPs reporting under the EHR Incentive Program reported using their individual NPIs, attestation and data submission were completed at the NPI level, which was not conducive to groups combining their data and attesting for all of their NPIs together. We agree that this same approach under the MIPS would be cumbersome for group submission. Under the MIPS, groups will have the ability to attest or submit their advancing care information data through a qualified registry, QCDR, EHR, attestation, or CMS Web Interface as a group, meaning the data would be aggregated to the group level and submitted once on behalf of all MIPS eligible clinicians within the group. MIPS eligible clinicians will also have the ability to submit as individuals, if their group is not submitting using the group method. In these cases, the attestation or data submission would be done at the individual (TIN/NPI) level.

    Comment: One commenter recommended the mandatory publication of EHR source code in order to reduce bias and errors.

    Response: We appreciate the suggestion; however, we note that this is outside our authority under section 1848(q) of the Act and outside the scope of this rule.

    We note that there were several other comments related to data submission for MIPS, and we direct readers to section II.E.5.a. of this final rule with comment period for discussion of those comments. After consideration of the comments we received, we are finalizing our policy as proposed.

    (c) Group Reporting

    Under the Medicare EHR Incentive Program, we adopted a reporting mechanism for EPs that are part of a group to attest using one common form, or a batch reporting process. To determine whether those EPs meaningfully used CEHRT under that batch reporting process, we assessed the individual performance of the EPs that made up the group, not the group as a whole.

    The structure of the MIPS and our desire to achieve alignment across the MIPS performance categories necessitate the ability to assess the performance of MIPS eligible clinicians at the group level for all MIPS performance categories. We believe MIPS eligible clinicians should be able to submit data as a group, and be assessed at the group level, for all of the MIPS performance categories, including the advancing care information performance category. For this reason, we proposed a group reporting mechanism for individual MIPS eligible clinicians to have their performance assessed as a group for all performance categories in section II.E.1.e. of the proposed rule (81 FR 28178 and 28179), consistent with sections 1848(q)(1)(D)(i)(I) and (II) of the Act.

    Under this option, we proposed that performance on advancing care information performance category objectives and measures would be assessed and reported at the group level, as opposed to the individual MIPS eligible clinician level. We note that the data submission criteria would be the same when data are submitted at the group level as when submitted at the individual level, but the data submitted would be aggregated for all MIPS eligible clinicians within the group practice. We believe this approach to data submission better reflects the team dynamics of the group, and would reduce the overall reporting burden for MIPS eligible clinicians that practice in groups, incentivize practice-wide approaches to data submission, and provide enterprise-level continuous improvement strategies for submitting data to the advancing care information performance category. Please see section II.E.1.e. of the proposed rule (81 FR 28178 and 28179) for more discussion of how to participate as a group under MIPS.

    The following is a summary of the comments we received regarding our proposal to allow for group reporting starting in 2017.

    Comment: The majority of commenters strongly support the allowance of group reporting in the advancing care information performance category. Reasons for support include the reduction in reporting burden, as well as alignment with other MIPS performance categories.

    Response: We appreciate the supportive comments.

    Comment: Many commenters expressed concern about allowing group reporting for the advancing care information performance category in 2017 given the short timeframe between the publication of this final rule with comment period and the start of the 2017 performance period. Commenters believe that this would offer too little time to implement group reporting capabilities in CEHRT, stating that report logic will require clear specifications and time for development and distribution of report updates.

    Response: We recognize that the implementation of group reporting may require varying levels of effort for different practices and therefore may not be the best choice for all MIPS eligible clinicians for the 2017 performance period. However, we believe that making group reporting available for performance periods in CY 2017 offers a significant reduction in reporting burden for many group practices that have a large number of MIPS eligible clinicians, all of whom would otherwise have to report the MIPS requirements individually. We additionally note that groups and MIPS eligible clinicians have the ability to report through multiple reporting mechanisms providing flexibility should their CEHRT be unable to support group reporting in 2017.

    Comment: Some commenters requested clarification on how the base and performance scores will be calculated under group reporting if one or more individual MIPS eligible clinicians within a group practice do not report on an objective or can claim an exclusion from reporting on an objective. In addition, a few commenters asked how to avoid counting more than once the unique patients seen by multiple MIPS eligible clinicians within the group practice. They also asked for detailed instructions for calculating the numerators and denominators of the measures reported.

    Response: We understand that additional explanation is needed in order for groups to determine whether the group reporting option is best for their practice.

    As with group reporting for the other MIPS performance categories, to report as a group, the group will need to aggregate data for all the individual MIPS eligible clinicians within the group for whom they have data in CEHRT. For those who choose to report as a group, performance on the advancing care information performance category objectives and measures would be reported and evaluated at the group level, as opposed to the individual MIPS eligible clinician level. For example, the group calculation of the numerators and denominators for each measure must reflect all of the data from all individual MIPS eligible clinicians that have been captured in CEHRT for the given advancing care information measure. If the group practice has CEHRT that is capable of supporting group reporting, they would submit the aggregated data produced by the CEHRT. If the group practice does not have CEHRT that is capable of or updated to support group reporting, the group would aggregate the data by adding together the numerators and denominators for each MIPS eligible clinician within the group for whom the group has data captured in their CEHRT. If an individual MIPS eligible clinician meets the criteria to exclude a measure, their data can be excluded from the calculation of that particular measure only.

    We understand and agree that it can be difficult to identify unique patients across a group for the purposes of aggregating performance on the advancing care information measures, particularly when that group is using multiple CEHRT systems. We further recognize that for 2017, groups may be using systems which are certified to different CEHRT editions further adding to this challenge. We consider “unique patients” to be individual patients treated by the group who would typically be counted as one patient in the denominator of an advancing care information measure. This patient may see multiple MIPS eligible clinicians within the group, or may see MIPS eligible clinicians at multiple group locations. When aggregating performance on advancing care information measures for group reporting, we do not require that the group determine that a patient seen by one MIPS eligible clinician (or at one location in the case of groups working with multiple CEHRT systems) is not also seen by another MIPS eligible clinician in the group or captured in a different CEHRT system. While this could result in the same patient appearing more than once in the denominator, we believe that the burden to the group of identifying these patients is greater than any gain in measurement accuracy. Accordingly, this final policy will allow groups some flexibility as to the method for counting unique patients in the denominators to accommodate these scenarios where aggregation may be hindered by systems capabilities across multiple CEHRT platforms. We note that this is consistent with our data aggregation policy for providers practicing in multiple locations under the EHR Incentive Program (77 FR 53982).
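
    To make the aggregation described in this response concrete, the following is a minimal sketch (not part of the regulatory text) of how a group whose CEHRT cannot produce group-level reports might add together the numerators and denominators captured for each MIPS eligible clinician. The measure identifiers and data layout are assumptions made for illustration, and no deduplication of unique patients across clinicians or CEHRT systems is attempted, consistent with the flexibility described above.

        # Illustrative sketch only; measure identifiers and the data layout are hypothetical.
        # Each clinician's record maps a measure to (numerator, denominator) as captured in
        # CEHRT, or to None if that clinician meets the criteria to exclude the measure.
        def aggregate_group_measures(clinician_data):
            """Add numerators and denominators for each measure across all clinicians in
            the group, omitting a clinician's data only for measures they exclude. Patients
            seen by more than one clinician are not deduplicated across denominators."""
            totals = {}
            for measures in clinician_data.values():
                for measure, counts in measures.items():
                    if counts is None:                 # exclusion claimed for this measure only
                        continue
                    numerator, denominator = counts
                    agg_num, agg_den = totals.get(measure, (0, 0))
                    totals[measure] = (agg_num + numerator, agg_den + denominator)
            return totals

        # Hypothetical group of two clinicians, one of whom excludes e-prescribing:
        group = {
            "NPI-1": {"e_prescribing": (40, 50), "provide_patient_access": (30, 60)},
            "NPI-2": {"e_prescribing": None, "provide_patient_access": (20, 40)},
        }
        print(aggregate_group_measures(group))
        # {'e_prescribing': (40, 50), 'provide_patient_access': (50, 100)}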

    Comment: A few commenters voiced concerns that group reporting and many EHR systems, particularly hospital EHRs, mask who actually performs the service and may not recognize the ability of MIPS eligible clinicians who are not physicians to provide and document care. For example, non-physicians who are not considered MIPS eligible clinicians, such as nurse-midwives, physical or occupational therapists and psychologists, often perform services and complete their actions using CEHRT. However, the commenters noted that CEHRT functionality usually does not offer the ability to distinguish which clinician actually performed the action, thus making it difficult to calculate an accurate numerator and denominator for measures in the advancing care information performance category. One commenter requested that CMS require that CEHRT be able to identify which clinician is using the CEHRT, ensuring that clinicians other than physicians are able to make entries and that actions are attributed to MIPS eligible clinicians.

    Response: We appreciate the feedback and agree that there are issues related to group reporting that we will continue to monitor as the program develops. We note that the vast majority of commenters supported the group reporting option as it represents a reduction in reporting burden for MIPS eligible clinicians who choose to report as groups rather than as individuals. As we move forward with the advancing care information performance category we will be working with ONC to refine capabilities in CEHRT that could further support group reporting.

    Comment: One commenter urged CMS to avoid issuing guidance that assigns nurses the role of scribe or data entry for physicians because this would adversely affect the quality of care delivered to patients.

    Response: We do not intend to issue guidance that defines or redefines the roles of non-physician practitioners, such as nurse practitioners or nurse specialists.

    After consideration of the comments, we are finalizing our proposal to allow group reporting for the advancing care information performance category with the additional explanation of data aggregation requirements for group reporting provided in our response above, particularly as it relates to aggregating unique patients seen by the group.

    For our final policy, we considered and rejected imposing a threshold for group reporting. For example, in future years we may require that groups can only submit their advancing care information performance category data as a group if 50 percent or more of their eligible patient encounters are captured in CEHRT. While we considered this as an option for 2017, the transition year of MIPS, we chose not to institute such a policy at this time and will instead consider it for future years. We are seeking comment in this final rule with comment period on what would be an appropriate threshold for group reporting in future years.

    We note that group reporting policies for the MIPS program, including the other performance categories, are discussed in section II.E.5.a. of this final rule with comment period, and we refer readers to that section for additional discussion of group reporting.

    (6) Reporting Requirements & Scoring Methodology

    (a) Scoring Method

    Section 1848(q)(5)(E)(i)(IV) of the Act, as added by section 101(c) of the MACRA, states that 25 percent of the MIPS final score shall be based on performance for the advancing care information performance category. Therefore, we proposed at § 414.1375 that performance in the advancing care information performance category will comprise 25 percent of a MIPS eligible clinician's MIPS final score for payment year 2019 and each year thereafter. We received many comments in the MIPS and APMs RFI from stakeholders regarding the importance of flexible scoring for the advancing care information performance category and provisions for multiple performance pathways. We agree that this is the best approach moving forward with the adoption and use of CEHRT as it becomes part of a single coordinated program under the MIPS. For the reasons described here and previously in this preamble, we proposed a methodology that balances the goals of incentivizing participation and reporting while recognizing exceptional performance by awarding points through a performance score. In this methodology, we proposed at § 414.1380(b)(4) that the score for the advancing care information performance category would be composed of a score for participation and reporting, hereinafter referred to as the “base score,” and a score for performance at varying levels above the base score requirements, hereinafter referred to as the “performance score”.

    The following is a summary of the comments we received regarding overall scoring for the advancing care information performance category.

    Comment: Overall, most commenters found the scoring to be cumbersome and overly complex and recommended that it be simplified. Suggestions included removing the distinction between the base score and performance score. Others suggested removing objectives and measures or moving them to other MIPS performance categories, such as moving Public Health and Clinical Data Registry Reporting to the improvement activities performance category. One commenter suggested simplifying the assignment of points for each measure. For example, they suggested that 10 percent per measure be awarded for the following: 1. Patient Access; 2. Electronic Prescribing; 3. Computerized Provider Order Entry (CPOE); 4. Patient-Specific Education; 5. View, Download, Transmit; 6. Secure Messaging; 7. Patient-Generated Health Data; 8. Patient Care Record Exchange; 9. Request/Accept Patient Care Record; 10. Clinical Information Reconciliation.

    Response: We appreciate the constructive feedback from commenters. Our priority is to finalize reporting requirements for the advancing care information performance category that incentivize performance and reporting with minimal complexity and reporting burden. We have addressed many of these comments and concerns in our final scoring methodology outlined in section II.E.5.g.(6)(a) of this final rule with comment period.

    Comment: Some commenters appreciated the split between base and performance scores in the advancing care information performance category, citing the flexibility offered compared to the EHR Incentive programs. Many commenters also praised the elimination of the requirement to meet measure thresholds.

    Response: We appreciate commenters' support for our proposal. Our priority is to finalize a scoring methodology for the advancing care information performance category that promotes the use of CEHRT reporting requirements in an efficient, effective and flexible manner.

    Comment: Some commenters did not support the elimination of measure thresholds. They believed that incorporating measure thresholds enables MIPS eligible clinicians to earn a higher score in the advancing care information performance category and would encourage a higher level of success using CEHRT. Another commenter suggested replacing the base score requirement of at least one in the numerator with a requirement to meet a 5 percent threshold for each measure reported beginning with the performance period of CY 2019.

    Response: We believe the scoring approach, as proposed and then as finalized in this final rule with comment period, promotes performance on the advancing care information performance category measures by rewarding high performance rather than requiring MIPS eligible clinicians to meet one threshold across the board. We agree that in future years of the program we may consider higher minimum thresholds for reporting; however, we also seek to allow flexibility for MIPS eligible clinicians to report on the measures that are most meaningful to their practice.

    Comment: Most commenters supported the proposal to move away from the overall all-or-nothing scoring approach previously used in the EHR Incentive Programs. However, many commenters do not support the all-or-nothing approach proposed to earn the base score and subsequent points in the performance score, for the advancing care information performance category. More than one commenter recommended offering partial credit for each objective in the base score rather than an all-or-nothing approach. Other comments include removing the base score and only awarding points toward a performance score, as well as adding more measure exclusions. Some suggested awarding points toward the performance score even if the MIPS eligible clinician fails to meet a base score.

    Response: In order to provide more flexibility for MIPS eligible clinicians, we have moved away from the all-or-nothing approach in our final policy. We note that certain measures under our final policy remain required measures in the base score. For example, section 1848(o)(2)(A) of the Act includes certain requirements that we have chosen to implement through measures such as e-Prescribing, Send Summary of Care (formerly Patient Care Record Exchange) and Request/Accept Patient Care Record. In addition to those measures, there are other measures, such as Security Risk Analysis, that are essential to protecting patient privacy, which we believe should be mandatory for reporting. We have addressed these comments further with our final scoring methodology outlined in section II.E.5.g.(6)(a) of this final rule with comment period. We have reduced the total number of required measures in the base score from 11 as proposed to only five in the final policy, which addresses some of the concerns raised by commenters while meeting our statutory requirements, as well as our commitment to patient privacy and access.

    Comment: Many commenters requested that the distribution of points for the base score and performance score of the advancing care information performance category be reweighted. More than one commenter suggested reducing the weight of the base score and increasing the weight of the performance score over time. For example, some commenters requested that the base be worth 40 percent and the performance be 60 percent of the points. Another commenter believed the base score should initially be more heavily weighted, with the base score at 60 points, Protect Patient Health Information score at 10 points, and performance score at 80 points.

    Response: Based on the overwhelming number of comments received, and our goal to simplify the scoring methodology wherever possible, we agree with commenters that the base and performance scores should be reconsidered for the final policy. We have outlined the final scoring methodology in section II.E.5.g.(6)(a) of this final rule with comment period, in which the performance score is reweighted and the total possible score for the advancing care information performance category is increased to 155 percent, which is capped at 100 percent when applied to the 25 possible points for the advancing care information performance category in the MIPS final score.
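
    To illustrate the arithmetic of the cap described in the preceding response, the following is a minimal sketch (not part of the regulatory text); the function name and the example percentages are hypothetical, chosen only so that the base, performance, and bonus scores sum to 155 percent.

        # Illustrative arithmetic only; the function name and input percentages are
        # hypothetical and are not drawn from the regulatory text.
        def advancing_care_information_points(base_pct, performance_pct, bonus_pct,
                                               category_points=25):
            """Cap the combined category score at 100 percent, then apply it to the
            25 points the category contributes to the MIPS final score."""
            category_pct = min(base_pct + performance_pct + bonus_pct, 100)
            return category_points * category_pct / 100

        # A hypothetical clinician earning 50 + 90 + 15 = 155 percent:
        print(advancing_care_information_points(base_pct=50, performance_pct=90, bonus_pct=15))
        # 25.0 -- the 155 percent total is capped at 100 percent of the 25 available points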

    Comment: Many commenters disliked that no credit is awarded if the numerator for any measure is not at least one or the response is not “yes” for yes/no measures. Some commenters propose changing the policy to allow MIPS eligible clinicians to earn a performance score and bonus score even if they fail the base score. Others suggest reducing the number of objectives to report to earn the base score. For example, one commenter suggested requiring only the measures within the following objectives to achieve the base score: Protect Patient Health Information, Patient Electronic Access and Health Information Exchange.

    Response: We appreciate the suggestions raised by commenters and have taken these comments into account for our final policy discussed in section II.E.5.g.(6)(a) of this final rule with comment period. We note that for required measures in the base score, we would still require a one in the numerator or a “yes” response to yes/no measures. Section 1848(o)(2)(A) of the Act includes certain requirements that we have chosen to implement through three of the measures in the base score (e-Prescribing, Send a Summary of Care (formerly Patient Care Record Exchange) and Request/Accept Summary of Care (formerly Patient Care Record)), and thus, we believe these measures should be required in order for a MIPS eligible clinician to earn any score in the advancing care information performance category. The other two required measures, Security Risk Analysis and Provide Patient Access (formerly Patient Access), are of paramount importance to CMS, and thus, we have maintained them as required measures in the base score.

    Comment: Many commenters support the emphasis on health information exchange and patient engagement in both the base score and performance score. Some commenters recommended that even more weight be given to these areas in the performance score.

    Response: We appreciate this feedback. We agree that health information exchange and coordination of care through patient engagement are essential to improving the quality of care.

    (b) Base Score

    To earn points toward the base score, a MIPS eligible clinician must report the numerator and denominator of certain measures specified for the advancing care information performance category (see measure specifications in section II.E.5.g.(7) (81 FR 28226 through 28228)), which are based on the measures adopted by the EHR Incentive Programs for Stage 3 in the 2015 EHR Incentive Programs final rule, to account for 50 percent (out of a total 100 percent) of the advancing care information performance category score. For measures that include a percentage-based threshold for Stage 3 of the EHR Incentive Program, we would not require those thresholds to be met for purposes of the advancing care information performance category under MIPS, but would instead require MIPS eligible clinicians to report the numerator (of at least one) and denominator (or a yes/no statement for applicable measures, which would be submitted together with data for the other measures) for each measure being reported. We note that for any measure requiring a yes/no statement, only a yes statement would qualify for credit under the base score. Under the proposal, the base score of the advancing care information performance category would incorporate the objectives and measures adopted by the EHR Incentive Programs with an emphasis on privacy and security. We proposed two variations of a scoring methodology for the base score, a primary and an alternate proposal, which are outlined below. Both proposals would require the MIPS eligible clinician to meet the requirement to protect patient health information created or maintained by CEHRT to earn any score within the advancing care information performance category; failure to do so would result in a base score of zero, a performance score of zero (discussed in section II.E.5.g. of the proposed rule (81 FR 28221)), and an advancing care information performance category score of zero.

    The primary proposal at section II.E.5.g.(6)(b)(ii) of the proposed rule (81 FR 28221) would require a MIPS eligible clinician to report the numerator (of at least one) and denominator or yes/no statement (only a yes statement would qualify for credit under the base score) for a subset of measures adopted by the EHR Incentive Program for EPs in the 2015 EHR Incentive Programs final rule. In an effort to streamline and simplify the reporting requirements under the MIPS, and reduce reporting burden on MIPS eligible clinicians, we proposed that two objectives (Clinical Decision Support and Computerized Provider Order Entry) and their associated measures would not be required for reporting under the advancing care information performance category. Given the consistently high performance on these two objectives in the EHR Incentive Program, with EPs achieving a median score of over 90 percent for the last 3 years, we stated our belief that these objectives and measures are no longer an effective measure of EHR performance and use. In addition, we do not believe these objectives and associated measures contribute to the goals of patient engagement and interoperability, and thus, we believe these objectives can be removed in an effort to reduce reporting burden without negatively impacting the goals of the advancing care information performance category. We note that the removed objectives and associated measures would still be required as part of ONC's functionality standards for CEHRT; however, MIPS eligible clinicians would not be required to report the numerator and denominator or yes/no statement for those measures. In the 2015 EHR Incentive Programs final rule we also established that, for measures that were removed, the technology requirements would still be a part of the definition of CEHRT. For example, in that final rule, the Stage 1 Objective to Record Demographics was removed, but the technology and standard for this function in the EHR were still required (80 FR 62784). This means that the MIPS eligible clinician would still be required to have these functions as a part of their CEHRT.

    The alternate proposal at section II.E.5.g.(6)(b)(iii) of the proposed rule (81 FR 28222) would require a MIPS eligible clinician to report the numerator (of at least one) and denominator or yes/no statement (only a yes statement would qualify for credit under the base score) for all objectives and measures adopted for Stage 3 in the 2015 EHR Incentive Programs final rule to earn the base score portion of the advancing care information performance category, which would include reporting a yes/no statement for the CDS objective and a numerator and denominator for the CPOE objective. We included these objectives in the alternate proposal because MIPS eligible clinicians may believe the continued measurement of these objectives is valuable to the continued use of CEHRT, as this approach would maintain the previously established objectives under the EHR Incentive Program.

    We stated our belief that both proposed approaches to the base score are consistent with the statutory requirements under HITECH and previously established CEHRT requirements as we transition to MIPS. We also believe both approaches, in conjunction with the advancing care information performance score, recognize the need for greater flexibility in scoring CEHRT use across different clinician types and practice settings by allowing MIPS eligible clinicians to focus on the objectives and measures most applicable to their practice.

    Comment: Several commenters were disappointed that our proposals for the base score are so similar to the current meaningful use requirements. They requested a more streamlined approach as they believe the statute intended. Another commenter believed that advancing care information performance category should reflect a MIPS eligible clinician's use of digital clinical data to inform patient care and encourage bi-directional data interoperability.

    Response: While we did draw on the meaningful use foundation in drafting the requirements for the advancing care information performance category, our proposals have lessened those requirements and provided additional flexibility as compared with all stages of the EHR Incentive Programs. We note that we have made significant revisions to the scoring methodology and reporting requirements in our final policy discussed in section II.E.5.g.(6)(a) of this final rule with comment period in response to these comments. We are eager to improve interoperability and would welcome concrete proposals for new measures as we move forward with EHR reporting requirements under the MIPS.

    Comment: We received many comments on the allocation of points in the base score. Some commenters asked CMS to simplify the base score calculation and weight the base score higher. Alternatively, commenters recommended that CMS reweight the base score to 75 percent of the total advancing care information performance category score. Other commenters recommended that increasing the weight of the base score only occur if CMS also moves away from the pass-fail approach to scoring this section. Others suggested removing the base component of the scoring methodology and instead just having a set number of points that it is possible to achieve for each measure.

    In regard to the base score calculation, most commenters requested that we remove the all-or-nothing scoring of the base score. Some asked that CMS give clinicians the option to report on a subset of measures to satisfy the base score. Many requested partial credit. Some commenters expressed concern that not reporting at least a numerator of one for the base measures will result in a score of zero for the entire category. A commenter proposed that reporting a zero numerator or denominator on a measure should satisfy the requirement for successfully submitting data, and thus, the clinician should achieve full points for the base score. Another recommended that CMS grant credit for each reported measure under the base score and make clear that a physician will not fail the entire advancing care information performance category if they fail to report all base score measures.

    Commenters also suggested giving full credit in the advancing care information performance category if a MIPS eligible clinician attests to using technology certified to the 2014 or 2015 Edition for MIPS year 1, and 75 percent credit toward the advancing care information performance category for subsequent years. Another asked that 50 percent in the base score be awarded to clinicians that implemented CEHRT for at least 90 days of the performance period to ease newer users into EHR use. While most requested less stringent requirements, some thought that it is too easy to achieve the 50 percent base score. Others believed the “one patient threshold” for advancing care information performance category reporting for all measures in the base score is far too low.

    Response: We have taken commenters' feedback into consideration as we have constructed our final policy as outlined in section II.E.5.g.(6)(a) of this final rule with comment period. While we appreciate commenters' concerns about low thresholds, we believe that the reporting requirements we set (a one in the numerator for numerator/denominator measures, and a “yes” for yes/no measures) are appropriate as we transition to the MIPS. We note the definition of MIPS eligible clinician includes many practitioners that were not eligible under the EHR Incentive Programs and thus have little to no experience with the objectives and measures. While the reporting requirements are lower than the thresholds established for Modified Stage 2 and Stage 3 of the EHR Incentive Programs, we believe they are appropriate for the first performance period of MIPS. Further, we have tried to limit the composition of the base score so that MIPS eligible clinicians can distinguish themselves through reporting on the performance score measures. We are finalizing additional flexibilities to address the concern about an all-or-nothing approach and have reduced the number of required measures from 11 in the proposed base score to five in our final policy. We note that certain measures which implement statutory requirements or that we consider high priority to protect patient privacy and access are required for reporting. MIPS eligible clinicians are required to report on all five of the required measures in the base score in order to earn any points in the advancing care information performance category. Considering this significant reduction in the number of required measures for the base score, we do not believe it is appropriate to increase the weight of the base score as some commenters suggested and will keep it at 50 percent in our final scoring methodology.

    We are finalizing our policy that a MIPS eligible clinician must report either a one in the numerator for numerator/denominator measures, or a “yes” response for yes/no measures in order to earn points in the base score, and a MIPS eligible clinician must report all required measures in the base score in order to earn a score in the advancing care information performance category. We note that the remainder of a MIPS eligible clinician's score will be based on performance and/or meeting the requirements to earn a bonus score for Public Health and Clinical Data Registry Reporting or improvement activities as described in section II.E.5.g.(7)(b) and II.E.5.g.(2)(b) of this final rule with comment period.
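
    As a hedged illustration of the finalized base score reporting requirement, the following sketch (not regulatory text) checks whether a submission reports each of the five required measures identified above with either a “yes” or a numerator of at least one; the measure identifiers and the submission layout are assumptions made for illustration only.

        # Illustrative sketch only; measure identifiers and the submission layout are
        # hypothetical. The five required base score measures are those identified in
        # the discussion above.
        REQUIRED_BASE_MEASURES = [
            "security_risk_analysis",          # yes/no measure
            "e_prescribing",
            "provide_patient_access",
            "send_summary_of_care",
            "request_accept_summary_of_care",
        ]

        def earns_base_score(submission):
            """Return True only if every required measure is reported with a 'yes'
            (for yes/no measures) or a numerator of at least one."""
            for measure in REQUIRED_BASE_MEASURES:
                value = submission.get(measure)
                if value == "yes":
                    continue
                if isinstance(value, tuple) and value[0] >= 1:   # (numerator, denominator)
                    continue
                return False   # missing, "no", or a zero numerator: no category score
            return True

        # Hypothetical submission meeting the reporting requirement:
        example = {
            "security_risk_analysis": "yes",
            "e_prescribing": (1, 20),
            "provide_patient_access": (15, 20),
            "send_summary_of_care": (5, 10),
            "request_accept_summary_of_care": (2, 10),
        }
        print(earns_base_score(example))   # True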

    (i) Privacy and Security; Protect Patient Health Information

    In the 2015 EHR Incentive Programs final rule (80 FR 62832), we finalized the Protect Patient Health Information objective and its associated measure for Stage 3, which requires EPs to protect electronic protected health information (ePHI, as defined in 45 CFR 160.103) created or maintained by the CEHRT through the implementation of appropriate technical, administrative, and physical safeguards. As privacy and security are of paramount importance and applicable across all objectives, the Protect Patient Health Information objective and measure would be an overarching requirement for the base score under both the primary proposal and the alternate proposal, and therefore would be an overarching requirement for the advancing care information performance category. We proposed that a MIPS eligible clinician must meet this objective and measure to earn any score within the advancing care information performance category. Failure to do so would result in a base score of zero under either the primary or alternate proposal, as well as a performance score of zero (discussed in section II.E.5.g. of the proposed rule (81 FR 28215)) and an advancing care information performance category score of zero.

    The following is a summary of the comments we received regarding our proposal to require that a MIPS eligible clinician must meet the Protect Patient Health Information objective and measure to earn any score within the advancing care information performance category.

    Comment: Many commenters supported the proposal requiring the Protect Patient Health Information objective and measure in order to receive the full base score and any performance score in the advancing care information performance category.

    Response: We agree, as we continue to believe that there are many benefits to safeguarding ePHI. Unintended and/or unlawful disclosures of ePHI put EHRs, interoperability and health information exchange at risk. It is paramount that ePHI is properly protected and secured, and we believe that requiring this objective and measure remains fundamental to this goal.

    Comment: A few commenters expressed uncertainty about the effectiveness of the Protect Patient Health Information objective and measure in ensuring the security and privacy of patient health information, as well as maintaining doctor-patient confidentiality.

    Response: We understand that in some cases this measure may not be enough to protect data as data breaches become more sophisticated. However we continue to believe that widespread performance of security risk analyses on a regular basis remains an important component of protecting ePHI. The measure is a foundation of protection and we expect that individuals and entities subject to HIPAA will also be meeting the requirements of HIPAA.

    Comment: Some commenters believed that reporting the Protect Patient Health Information objective and measure is redundant and burdensome, as the security risk analysis and other privacy and security areas are already included under HIPAA requirements.

    Response: Yes, we agree that a security risk analysis is included in the HIPAA rules. However, it is our experience that some EPs are not fulfilling this requirement under the EHR Incentive Programs. To reinforce its importance, we are including it as a requirement for MIPS eligible clinicians.

    Comment: Some commenters expressed concern that meeting the Protect Patient Health Information objective and measure requirements presents a burden to small group practices, practices in rural settings, new adopters of CEHRT and some MIPS eligible clinicians who experience varying hardships.

    Response: We disagree. The HIPAA Privacy and Security Rules, which are more comprehensive than the Advancing Care Information measure and with which certain entities must also comply, have been effective for over 10 years. In addition, the Department of Health and Human Services has produced a security risk assessment tool designed for use by small and medium sized providers and clinicians available at https://www.healthit.gov/providers-professionals/security-risk-assessment and also http://www.hhs.gov/hipaa/for-professionals/security/index.html. This tool should help providers and clinicians with compliance and additional resources are also available at http://www.hhs.gov/hipaa/for-professionals/security/guidance/index.html. We understand that there are many sources of education available in the commercial market regarding HIPAA compliance.

    Comment: Many commenters stated that EHR use could jeopardize patient confidentiality because personal information can be stolen. Some stated that EHRs are a violation of privacy. Others do not want their medical information accessible to the government or third party vendors. Several stated that the proposed rule is contrary to the HIPAA regulations.

    Response: We agree that it is important to address the unique risks and challenges that EHRs may present. We maintain that a focus on the protection of ePHI is necessary for all clinicians. We also note that a security risk analysis is required under the HIPAA regulations (45 CFR 164.308(a)(1)).

    Comment: A few commenters offered suggestions to modify the Protect Patient Health Information objective and measure, such as aligning the architecture of CEHRT with the Hippocratic Oath or working with the Office for Civil Rights (OCR) or the Office of the Inspector General (OIG) to develop additional guidance for physicians regarding privacy practices.

    Response: We appreciate this feedback. We will continue to work with the OCR and ONC to develop and refine guidance.

    We are finalizing the requirement that a MIPS eligible clinician must meet the Protect Patient Health Information objective and measure in order to earn any score within the advancing care information performance category.

    (ii) Advancing Care Information Performance Category Base Score Primary Proposal

    In the 2015 EHR Incentive Programs final rule (80 FR 62829-62871), we finalized certain objectives and measures EPs would report to demonstrate meaningful use of CEHRT for Stage 3. Under our proposal for the base score of the advancing care information performance category, MIPS eligible clinicians would be required to submit the numerator (of at least one) and denominator, or yes/no statement as appropriate (only a yes statement would qualify for credit under the base score), for each measure within a subset of objectives (Electronic Prescribing, Patient Electronic Access to Health Information, Coordination of Care Through Patient Engagement, Health Information Exchange, and Public Health and Clinical Data Registry Reporting) adopted in the 2015 EHR Incentive Programs final rule for Stage 3 to account for the base score of 50 percent of the advancing care information performance category score. Successfully submitting a numerator and denominator or yes/no statement for each measure of each objective would earn a base score of 50 percent for the advancing care information performance category. As proposed in the proposed rule, failure to meet the submission criteria (numerator/denominator or yes/no statement as applicable) and measure specifications (81 FR 28226 through 28230) for any measure in any of the objectives would result in a score of zero for the advancing care information performance category base score, a performance score of zero (discussed in section II.E.5.g. of the proposed rule (81 FR 28215)), and an advancing care information performance category score of zero.

    For the Public Health and Clinical Data Registry Reporting objective, there is no numerator and denominator to measure; rather, the measure is a “yes/no” statement of whether the MIPS eligible clinician has completed the measure, noting that only a yes statement would qualify for credit under the base score. Therefore, we proposed that MIPS eligible clinicians would include a yes/no statement in lieu of the numerator/denominator statement within their submission for the advancing care information performance category for the Public Health and Clinical Data Registry Reporting objective. We further proposed that, to earn points in the base score, a MIPS eligible clinician would only need to complete submission on the Immunization Registry Reporting measure of this objective. Completing any additional measures under this objective would earn one additional bonus point in the advancing care information performance category score. For further information on this proposed objective, we direct readers to 81 FR 28230.

    (iii) Advancing Care Information Performance Category Base Score Alternate Proposal

    Under our alternate proposal for the base score of the advancing care information performance category, a MIPS eligible clinician would be required to submit the numerator (of at least one) and denominator, or yes/no statement as appropriate, for each measure, for all objectives and measures for Stage 3 in the 2015 EHR Incentive Programs final rule (80 FR 62829-62871) as outlined in Table 7 of the proposed rule (81 FR 28223). Successfully submitting a numerator and denominator (or yes/no statement, as applicable) for each measure of each objective would earn a base score of 50 percent for the advancing care information performance category. Failure to meet the submission requirements or measure specifications for any measure in any of the objectives would result in a score of zero for the advancing care information performance category base score, a performance score of 0 (discussed in section II.E.5.g. of the proposed rule), and an advancing care information performance category score of 0.

    We proposed the same approach in the alternate proposal for the Public Health and Clinical Data Registry Reporting objective as for the primary proposal outlined above. We direct readers to 81 FR 28226 through 28230 for further details on the individual objectives and measures.

    The following is a summary of the comments we received regarding our base score primary and alternate proposals which differ based on whether reporting the CDS and CPOE objectives would be required.

    Comment: Most commenters supported the adoption of the base score primary proposal, which eliminates the objectives and associated measures for CPOE and CDS, and agreed that most MIPS eligible clinicians already use CPOE and CDS and do very well on those measures. Several noted that these measures require additional data entry and that pop-up alerts interfere with clinical workflow, and thus, removal of these measures could improve clinical workflow in the EHR.

    Response: We agree and appreciate the support of these commenters. As we have done previously under the EHR Incentive Programs, we will continue to monitor performance on objectives and measures and plan to propose refinements to existing measures and the addition of new measures in future years.

    Comment: Since CPOE and CDS continue to be valuable to practices, many commenters supported the alternate proposal to require the CPOE and CDS objectives in the base score for the advancing care information performance category. One commenter stated that maintaining these two objectives offers an opportunity for the development of important measures for specialists, including anesthesia-focused measures. Another commenter suggested including the CPOE objective in the performance score of the advancing care information performance category to give more flexibility and offer an opportunity for MIPS eligible clinicians to earn more points, especially those MIPS eligible clinicians who will be using an EHR technology certified to the 2014 Edition in 2017.

    Response: While we agree that CPOE and CDS are valuable, we continue to believe that it is important to streamline and simplify the reporting requirements under MIPS. We note that the functionality supporting these objectives will continue to be required as part of CEHRT requirements.

    Comment: One commenter urged CMS to clarify that even if the reporting of CPOE and CDS measures is eliminated under the primary proposal base score of the advancing care information performance category, MIPS eligible clinicians who utilize CPOE are still expected to utilize appropriately credentialed clinical staff to enter the orders and those who utilize CDS must have the required functionality turned on to receive credit in the advancing care information performance category base score.

    Response: As for the functionality, even if the CPOE and CDS objectives and measures are not included for reporting under the advancing care information performance category, it is still expected that MIPS eligible clinicians will continue to have the functionality enabled as a part of CEHRT.

    Comment: Some commenters recommended retaining the CPOE and CDS objectives and associated measures, noting that while the two functionalities are widely adopted by those who were already participating in the Medicare and Medicaid EHR Incentive Programs, MIPS eligible clinicians include practitioners who were not eligible for those programs, many of whom have not yet adopted the functionalities and activities required for those objectives. Some commenters asked that, if the CPOE objective and associated measures are retained, CMS include the low volume threshold exclusions.

    Response: While we appreciate these concerns, we continue to believe that it is important to streamline and simplify the reporting requirements under MIPS. Practitioners who are not eligible to participate in the EHR Incentive Programs but are MIPS eligible clinicians will be subject to many new requirements and will have a considerable amount of learning to do in their initial years of the program; thus, we do not believe it is necessary to add to that list of requirements, or to increase the reporting burden for clinicians with more experience using EHRs who have historically had high performance on these measures under the EHR Incentive Programs. We note that the functionality supporting these objectives will continue to be required as part of certification requirements and available to new adopters of EHR technology.

    Comment: One commenter expressed skepticism about the applicability to specialists of the objectives emphasized in the base score. For example, the commenter expressed concern that many anesthesiologists may have difficulty attesting to the Patient Electronic Access, Coordination of Care Through Patient Engagement, and Health Information Exchange objectives. They suggested developing equally valuable substitute measures and objectives that focus on the use of CEHRT by specialists and MIPS eligible clinicians who work in settings that vary from traditional office-based practices.

    Response: We understand that the practice settings of MIPS eligible clinicians vary and that meeting the proposed objectives and measures may require different levels of effort. We will consider the development of objectives and measures for specialists and other clinicians who do not work in office settings in future rulemaking.

    Comment: We received many suggested changes to the measures included in our primary proposal. Some requested that we allow MIPS eligible clinicians to choose which measures are most relevant to their practice. Others recommended that the base score be streamlined and focus on three critical objectives of meaningful use: Protection of personal health information, patient electronic access to his/her health information, and health information exchange. Some commenters recommended including the smallest set of objectives in the base score required by statute and including any additional objectives in the performance score category.

    Response: We appreciate the many suggested changes to measures and measure reporting requirements and will take them into consideration in this and future rules. We are also conscious of the need to balance the complexity of reporting requirements with reporting goals. In our final policy, we have restructured our base score to reduce reporting burden and limited the required measures, keeping only those measures that implement certain requirements under section 1848(o)(2)(A) of the Act, which include e-Prescribing and two of the measures under the Health Information Exchange objective; as well as Security Risk Analysis, which we have previously stated is of paramount importance to protecting patient privacy; and Provide Patient Access, which is critical to increasing patient engagement and allowing patients access to their personal health data. We note that this reduction of measures is responsive to the comments we received requesting that we move away from the all-or-nothing scoring methodology in the proposed base score. While we believe all measures under the advancing care information performance category are of utmost importance, we acknowledge that we must balance the need for these data with data collection and reporting burden. We refer readers to section II.E.5.g.(6)(a) for more discussion of our final scoring policy.

    After consideration of the comments, we are finalizing our primary proposal with modifications described in section II.E.5.g.(6)(a) for the base score. This proposal does not require the reporting of the objectives and measures for CDS and CPOE. We note that the functionalities required for these objectives and associated measures are still required as part of ONC's certification criteria for CEHRT.

    The following is a summary of the comments we received related to the bonus for Public Health and Clinical Data Registry Reporting.

    Comment: The majority of commenters recommended that more bonus credit should be awarded to MIPS eligible clinicians for reporting to additional registries by either increasing the bonus to 5 or 10 percent or by offering a bonus for each additional registry to which the MIPS eligible clinician reports. One commenter specifically expressed concern that only awarding 1 percent downplays the importance and benefit of submitting data to multiple registries. Many commenters supported the proposal that Immunization Registry Reporting should be the only registry required for the base score, but encouraged CMS to provide more than 1 percent as a bonus for additional registry reporting. Another suggested that for CY 2017, CMS require two public health reporting measures in the Public Health and Clinical Data Registry Reporting objective for the base score, including mandatory reporting to immunization registries and any of the optional public health measures.

    Response: The Public Health and Clinical Data Registry Reporting objective focuses on the importance of the ongoing lines of communication that should exist between MIPS eligible clinicians and public health agencies and clinical data registries; thus, we agree that a larger bonus should be awarded for reporting to additional registries under the Public Health and Clinical Data Registry Reporting objective. These registries play an important part in monitoring the health status of patients across the country, and some, for example syndromic surveillance registries, help in the early detection of outbreaks, which is critical to public health overall.

    After consideration of the comments we received, and for the reasons mentioned above, we are increasing the bonus score to 5 percent in the advancing care information performance category score for reporting to one or more public health or clinical data registries beyond the Immunization Registry Reporting measure. We note that, in our effort to reduce the number of required measures in the base score and simplify reporting requirements, the Immunization Registry Reporting measure is no longer required as part of the base score; however, MIPS eligible clinicians can earn 10 percent in the performance score for reporting this measure. Additionally, if the MIPS eligible clinician reports to one or more additional registries under the Public Health and Clinical Data Registry Reporting objective, they will earn the 5 percent bonus score. We note that the bonus is only available to MIPS eligible clinicians who earn a base score.

    (iv) 2017 Advancing Care Information Transition Objectives and Measures (Referred to in the Proposed Rule as Modified Stage 2)

    In the 2015 EHR Incentive Programs final rule (80 FR 62772), we streamlined reporting for EPs by adopting a single set of objectives and measures for EPs regardless of their prior stage of participation. This was the first step in synchronizing the objectives and eliminating the separate stages of meaningful use in the EHR Incentive Program. In doing so, we also sought to provide some flexibility and to allow adequate time for EPs to move toward the more advanced use of EHR technology. This flexibility included alternate exclusions and specifications for EPs scheduled to demonstrate Stage 1 in 2015 and 2016 (80 FR 62788) and allowed clinicians to select either the Modified Stage 2 Objectives or the Stage 3 Objectives in 2017 (80 FR 62772) with all EPs moving to the Stage 3 Objectives in 2018. We note that in section II.E.5.g (81 FR 28218 and 28219) of the proposed rule, we proposed the requirements for MIPS eligible clinicians using various editions of CEHRT in 2017 as it relates to the objectives and measures they select to report.

    In connection with that proposal, and in an effort not to unfairly burden MIPS eligible clinicians who are still utilizing EHR technology certified to the 2014 Edition certification criteria in 2017, we proposed at § 414.1380(b)(4) modified primary and alternate proposals for the base score for those MIPS eligible clinicians utilizing EHR technology certified to the 2014 Edition. We note that these modified proposals are the same as the primary and alternate proposals outlined above in regard to scoring and data submission, but vary in the number of measures required under the Coordination of Care Through Patient Engagement and Health Information Exchange objectives as demonstrated in Table 8 of the proposed rule (81 FR 28224).

    This approach allows MIPS eligible clinicians to continue moving toward advanced use of CEHRT in 2018, but allows for flexibility in the implementation of upgraded technology and in the selection of measures for reporting in 2017.

    The following is a summary of the comments we received regarding the proposals for reporting on the Modified Stage 2 objectives and measures for the advancing care information performance category in 2017. We note that in this final rule with comment period we will refer to these measures as the 2017 Advancing Care Information Transition objectives and measures instead of Modified Stage 2, which is a term specific to the EHR Incentive Program.

    Comment: Many commenters supported the proposal to allow MIPS eligible clinicians to report on the 2017 Advancing Care Information Transition objectives and measures in the 2017 performance period to meet the requirements of the advancing care information performance category. They stated that this approach offers flexibility to MIPS eligible clinicians who do not yet use a 2015 Edition CEHRT.

    Response: We agree. We are aware that in 2017 many MIPS eligible clinicians might not yet have access to EHR technology certified to the 2015 Edition. Therefore, to accommodate these MIPS eligible clinicians we will allow the option for them to report for the 2017 performance period using EHR technology certified to the 2014 Edition or a combination of both 2014 and 2015 Editions.

    Comment: A majority of commenters suggested retaining the 2017 Advancing Care Information Transition objectives and measures beyond performance periods in 2017, citing vendor, as well as clinician, readiness to implement and use EHR technology certified to the 2015 Edition in time for the 2018 performance period. Additionally, some commenters believed that the 2017 Advancing Care Information Transition reporting requirements are less stringent, and therefore, more feasible for MIPS eligible clinicians to achieve, resulting in more MIPS eligible clinician success in the advancing care information performance category. One commenter suggested continuing to allow the reporting of 2017 Advancing Care Information Transition objectives and not requiring the reporting of Advancing Care Information objectives until a performance period in 2019.

    Response: For the majority of measures in the EHR Incentive Programs, the difference between the Modified Stage 2 measures and the Stage 3 measures is the threshold required to successfully demonstrate meaningful use. For the advancing care information performance category, there are no thresholds, and MIPS eligible clinicians are allowed to select the objectives and measures most applicable to their practice for reporting purposes. For this reason, we disagree that either set of measures for the advancing care information performance category is more stringent than the other. While we understand the commenters' concerns about readiness for subsequent years as it relates to adopting new technologies, we continue to believe that it is important to move forward with a single set of objectives and measures focused on the top priorities of clinical effectiveness, patient engagement, and health information exchange. We further maintain our belief that it reduces complexity and burden to have all MIPS eligible clinicians reporting on the same set of objectives and measures and the same specifications for those measures. We note that we will accept a minimum of 90 consecutive days of data within the CY 2018 performance period for the advancing care information performance category in order to support MIPS eligible clinicians and groups transitioning to technology certified to the 2015 Edition for use in 2018. At this time, we believe it is appropriate to require the use of EHR technology certified to the 2015 Edition for the CY 2018 performance period and encourage MIPS eligible clinicians to work with their EHR vendors in the coming months to prepare for the transition to 2015 Edition CEHRT.

    Comment: A few commenters requested clarification of the objectives and measures to use for performance periods in CY 2017 if the MIPS eligible clinician uses a combination of technologies certified to the 2014 and 2015 Editions during the performance period. The commenters anticipate that many practices could begin the performance period using 2014 Edition and upgrade during the performance period to begin use of 2015 Edition. Others expect that MIPS eligible clinicians may use a combination of 2014 and 2015 Editions during the performance period. Commenters also requested clarification on how MIPS eligible clinicians will be scored if the objectives and measures to which they report only apply to part of the performance period and not the full calendar year.

    Response: In 2017, a MIPS eligible clinician who has technology certified to a combination of 2015 Edition and 2014 Edition may choose to report on either the Advancing Care Information objectives and measures specified for the advancing care information performance category in section II.E.5.g.(7) of this final rule or the 2017 Advancing Care Information Transition objectives and measures specified for the advancing care information performance category as described in section II.E.5.g.(7) of this final rule if they have the appropriate mix of technologies to support each measure selected. If a MIPS eligible clinician switches from 2014 Edition to 2015 Edition CEHRT during the performance period, the data collected for the base and performance score measures should be combined from both the 2014 and 2015 Edition of CEHRT.

    After consideration of the comments we received, we are finalizing our proposal as proposed. We note that because we will accept a minimum of 90 consecutive days of data from the CY 2017 performance period, MIPS eligible clinicians who have EHR technology certified to the 2014 Edition and then transition to EHR technology certified to the 2015 Edition in 2017 have flexibility and may select which measures they want to report on for the 2017 performance period.

    (c) Performance Score

    In addition to the base score, which includes submitting each of the objectives and measures to achieve 50 percent of the possible points within the advancing care information performance category, we proposed to allow multiple paths to achieve a score greater than the 50 percent base score. The performance score is based on the priority goals established by us to focus on leveraging CEHRT to support the coordination of care. A MIPS eligible clinician would earn additional points above the base score for performance in the objectives and measures for Patient Electronic Access, Coordination of Care Through Patient Engagement, and Health Information Exchange. These measures have a focus on patient engagement, electronic access, and information exchange, which promote healthy behaviors by patients and lay the groundwork for interoperability. These measures also have significant opportunity for improvement among MIPS eligible clinicians and the industry as a whole based on adoption and performance data. We believe this approach for achievement above a base score in the advancing care information performance category would provide MIPS eligible clinicians a flexible and realistic incentive towards the adoption and use of CEHRT.

    We proposed at § 414.1380(b)(4) that, for the performance score, the eight associated measures under these three objectives would each be assigned a total of 10 possible points. For each measure, a MIPS eligible clinician may earn up to 10 percentage points toward their performance score based on their performance rate for the given measure. For example, a performance rate of 95 percent on a given measure would earn 9.5 percentage points of the performance score for the advancing care information performance category. This scoring approach is consistent with the performance score approach outlined for other MIPS categories in the proposed rule. Table 9 of the proposed rule (81 FR 28225) provided an example of the proposed performance score methodology.
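    For illustration only (this sketch is not part of the regulatory text), the proposed per-measure scaling can be expressed as a short Python calculation; the function name and the numerator and denominator values are hypothetical.

        def proposed_measure_points(numerator, denominator):
            # Under the proposal, each of the eight performance score measures is
            # worth up to 10 percentage points, scaled by the performance rate
            # (e.g., a 95 percent rate earns 9.5 percentage points).
            performance_rate = numerator / denominator
            return round(performance_rate * 10, 1)

        # Hypothetical example: 95 of 100 patients -> 9.5 percentage points.
        print(proposed_measure_points(95, 100))  # 9.5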

    We noted that in this methodology, a MIPS eligible clinician has the potential to earn a performance score of up to 80 percent, which, in combination with the base score, would be greater than the total possible 100 percent for the advancing care information performance category. We stated that this methodology would allow flexibility for MIPS eligible clinicians to focus on the measures most relevant to their practice to achieve the maximum performance category score, rather than requiring concentration on measures that are not relevant to their practice.

    This proposed methodology recognizes the importance of promoting health IT adoption and standards and the use of CEHRT to support quality improvement, interoperability, and patient engagement. We invited comments on our proposal.

    The following is a summary of the comments we received regarding our proposal.

    Comment: A few commenters suggested removing the base score and instead scoring MIPS eligible clinicians solely on performance for the following measures: (1) Patient Electronic Access; (2) Electronic Prescribing; (3) Computerized Provider Order Entry; (4) Patient-Specific Education; (5) View, Download, Transmit; (6) Secure Messaging; (7) Patient-Generated Health Data; (8) Patient Care Record Exchange; (9) Request/Accept Patient Care Record; and (10) Clinical Information Reconciliation. Others requested that the patient engagement measures, View, Download or Transmit, Secure Messaging, and Patient-Generated Health Data, be voluntary in order to provide flexibility.

    Response: We appreciate the feedback and have significantly reduced the number of required measures in the base score which adds both flexibility and simplicity to the scoring methodology while addressing statutory requirements. We refer readers to section II.E.5.g.(6)(b) of this final rule with comment period for further discussion of our final policy.

    Comment: A commenter suggested that the performance score measures should reflect the patient population because many MIPS eligible clinicians treat patients who are poor, elderly, or have limited English proficiency, and suggested that these factors strongly disadvantage MIPS eligible clinicians on measures as compared to MIPS eligible clinicians whose patient populations are better educated and better off financially. Another suggested the advancing care information performance category be renamed the Health IT-related activities score and reflect the improvement activities performance category, such that MIPS eligible clinicians select activities from a long list.

    Response: While we understand that the demographics and education-level of patient populations of MIPS eligible clinicians may vary, we disagree that measures in the advancing care information performance category should be adjusted to accommodate for different patient populations. We believe MIPS eligible clinicians who have CEHRT have the ability to adequately use CEHRT to perform the actions required for the measures, regardless of their patient population. We also believe we have offered enough flexibility for MIPS eligible clinicians who are concerned about patient action requirements by not establishing measure thresholds and instead requiring a minimum of one in the numerator for numerator/denominator measures. We direct readers to the discussion of the advancing care information performance category scoring in section II.E.5.g.(6)(a) of this final rule with comment period. We look forward to continuing to refine the advancing care information performance category over time.

    (d) Overall Advancing Care Information Performance Category Score

    To determine the MIPS eligible clinician's overall advancing care information performance category score, we proposed to use the sum of the base score, performance score, and the potential Public Health and Clinical Data Registry Reporting bonus point. We note that if the sum of the MIPS eligible clinician's base score (50 percent) and performance score (out of a possible 80 percent) with the Public Health and Clinical Data Registry Reporting bonus point is greater than 100 percent, we would apply an advancing care information performance category score of 100 percent. For example, if the MIPS eligible clinician earned the base score of 50 percent, a performance score of 60 percent, and the bonus point for Public Health and Clinical Data Registry Reporting for a total of 111 percent, the MIPS eligible clinician's overall advancing care information performance category score would be 100 percent. The total percentage score (out of 100) for the advancing care information performance category would then be multiplied by the weight (25 percent) of the advancing care information performance category and incorporated into the MIPS final score, as described at 81 FR 28220 through 28271 of the proposed rule. Table 10 of the proposed rule (81 FR 28226) provides an example of the calculation of the advancing care information performance category score based on these proposals. For our final policy, we revised the proposed scoring approach by reducing the number of required measures in the base score and adding measures to the performance score in an effort to address commenters' concerns (as described above) and add flexibility wherever possible. The base score and performance score are added together, along with any additional bonus score if applicable, to determine the overall advancing care information performance category score.

    Under the final policy, a MIPS eligible clinician must report all required measures of the base score to earn any base score, and thus to earn any score in the advancing care information performance category. We understand that many commenters preferred that we do away entirely with the all-or-nothing approach to the base score, and we have made adjustments to the base score to be responsive to those commenters' concerns. We note that section 1848(o)(2)(A) of the Act includes certain requirements that we have chosen to implement through certain measures, such as e-Prescribing, Send a Summary of Care, and Request/Accept Summary, and thus, we continue to require these measures in the advancing care information performance category base score. In addition, we have maintained the Security Risk Analysis measure as a required measure because we believe it is essential to protecting patient privacy, as discussed in the proposed rule (81 FR 28221), and thus, we believe it should be mandatory for reporting. We have also maintained Provide Patient Access as the fifth required measure under the base score because we believe it is essential for patients to have access to their health care information in order to improve health, provide transparency, and drive patient engagement. To address commenters' concerns, we have reduced the total number of required measures in the base score to only these five and moved other measures to the performance score, where MIPS eligible clinicians can choose which measures to report based on their individual practice. While we believe all measures under the advancing care information performance category are of utmost importance, we acknowledge that we must balance the need for these data with data collection and reporting burden. Given the considerable reduction in required measures, we do not believe it is appropriate to increase the weight of the base score, and thus, it remains at 50 percent of the advancing care information performance category score.

    The performance score builds upon the base score and is based on a MIPS eligible clinician's performance rate for each measure reported for the performance score (calculated using the numerator/denominator). A performance rate of 1-10 percent would earn 1 percentage point, a performance rate of 11-20 percent would earn 2 percentage points and so on. For example, if the clinician reports a numerator/denominator of 85/100 for the Patient-Specific Education measure, their performance rate would be 85 percent and they would earn 9 percentage points toward their performance score for the advancing care information performance category. With nine measures included in the performance score, a MIPS eligible clinician has the ability to earn up to 90 percentage points if they report all measures in the performance score.
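    For illustration only (not part of the regulatory text), the decile scoring described above can be sketched in Python; the function name and the numerator and denominator values are hypothetical, and the treatment of nonzero rates below 1 percent is an assumption made for the sketch.

        import math

        def final_measure_points(numerator, denominator):
            # Finalized decile scoring for numerator/denominator performance
            # measures: 1-10 percent earns 1 point, 11-20 percent earns 2 points,
            # and so on, up to 10 points for 91-100 percent.
            rate_percent = 100 * numerator / denominator
            if rate_percent <= 0:
                return 0
            # Assumption: a nonzero rate below 1 percent is treated as falling in
            # the 1-10 percent band for purposes of this sketch.
            return min(10, math.ceil(rate_percent / 10))

        # Example from the text: 85/100 -> 85 percent -> 9 percentage points.
        print(final_measure_points(85, 100))  # 9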

    We note that the measures under the Public Health and Clinical Data Registry Reporting objective are yes/no measures and do not have a numerator/denominator to calculate the performance rate. For the Immunization Registry Reporting measure, we will award 0 or 10 percentage points for the performance score (0 percent for a “no” response, 10 percent for a “yes” response). Active engagement with a public health or clinical data registry to meet any other measure associated with the Public Health and Clinical Data Registry Reporting objective will earn the MIPS eligible clinician a bonus of 5 percentage points as outlined in section II.E.5.g.(6)(b) of this final rule with comment period. MIPS eligible clinicians are not required to report the Immunization Registry Reporting measure in order to earn the bonus 5 percent for reporting to one or more additional registries.

    Two of the measures in the base score are not included in the performance score. The Security Risk Analysis and e-Prescribing measures are required under the base score, but a MIPS eligible clinician will not earn additional points under the performance score for reporting these measures. Due to the critical nature of the Security Risk Analysis measure, and as we stated in the proposed rule, we believe this measure is of paramount importance and applicable across all objectives. Therefore, the Protect Patient Health Information objective and Security Risk Analysis measure are foundational requirements for the advancing care information performance category (81 FR 28221). For this reason, we are including it as a required measure in the base score, but are not awarding any additional score for performance. The e-Prescribing measure is one of the measures that fulfills a statutory requirement under section 1848(o)(2)(A) of the Act, and thus, we are requiring it as part of the base score. Given the historically high performance on this measure under the EHR Incentive Program with EPs achieving an average of 87 percent of all permissible prescriptions written and transmitted electronically using CEHRT in 2015, we are not including it in the performance score for the advancing care information performance category.

    Under our final policy, MIPS eligible clinicians have the ability to earn an overall score for the advancing care information performance category of up to 155 percentage points, which will be capped at 100 percent when the base score, performance score and bonus score are all added together. We believe this addresses commenters' requests for additional opportunities to earn credit in all aspects of the advancing care information performance category including the base score, performance score and bonus score. In addition, we believe this scoring approach adds flexibility for MIPS eligible clinicians to choose measures that are most applicable to their practice and best represent their performance. While certain measures are still required for reporting, we have reduced this number from 11 required measures in the proposed base score to only five in this final policy. We have also increased the number of measures for which a MIPS eligible clinician has the ability to earn performance score credit from eight measures in the proposed performance score to nine in this final policy. We note that MIPS eligible clinicians can choose which of these measures to focus on for their performance score allowing clinicians to customize their reporting and score.
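    For illustration only (not part of the regulatory text), the assembly of the overall advancing care information performance category score under the final policy can be sketched in Python; the measure names follow Table 9 below, the function and variable names are illustrative, and the point totals in the example are hypothetical.

        REQUIRED_BASE_MEASURES = {
            "Security Risk Analysis",
            "e-Prescribing",
            "Provide Patient Access",
            "Send a Summary of Care",
            "Request/Accept Summary of Care",
        }

        def category_score(reported_base_measures, performance_points, bonus_points):
            # All five required measures must be reported to earn the 50 percent
            # base score; otherwise the entire category score is zero.
            if not REQUIRED_BASE_MEASURES.issubset(reported_base_measures):
                return 0
            # Base (50) + performance (up to 90) + bonus (up to 15) can reach 155
            # percentage points, capped at 100 percent.
            return min(50 + performance_points + bonus_points, 100)

        # Hypothetical example: base earned, 40 performance points, 5-point
        # registry bonus -> 95 percent. At the 25 percent category weight, this
        # contributes 95% x 25 = 23.75 points toward the MIPS final score.
        score = category_score(REQUIRED_BASE_MEASURES, 40, 5)
        print(score, score * 0.25)  # 95 23.75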

    Table 9—Advancing Care Information Performance Category Scoring Methodology: Advancing Care Information Objectives and Measures

    Advancing care information objective | Advancing care information measure * | Required/not required for base score (50%) | Performance score (up to 90%) | Reporting requirement
    Protect Patient Health Information | Security Risk Analysis | Required | 0 | Yes/No Statement
    Electronic Prescribing | e-Prescribing | Required | 0 | Numerator/Denominator
    Patient Electronic Access | Provide Patient Access | Required | Up to 10 | Numerator/Denominator
    Patient Electronic Access | Patient-Specific Education | Not Required | Up to 10 | Numerator/Denominator
    Coordination of Care Through Patient Engagement | View, Download, or Transmit (VDT) | Not Required | Up to 10 | Numerator/Denominator
    Coordination of Care Through Patient Engagement | Secure Messaging | Not Required | Up to 10 | Numerator/Denominator
    Coordination of Care Through Patient Engagement | Patient-Generated Health Data | Not Required | Up to 10 | Numerator/Denominator
    Health Information Exchange | Send a Summary of Care | Required | Up to 10 | Numerator/Denominator
    Health Information Exchange | Request/Accept Summary of Care | Required | Up to 10 | Numerator/Denominator
    Health Information Exchange | Clinical Information Reconciliation | Not Required | Up to 10 | Numerator/Denominator
    Public Health and Clinical Data Registry Reporting | Immunization Registry Reporting | Not Required | 0 or 10 | Yes/No Statement
    Public Health and Clinical Data Registry Reporting | Syndromic Surveillance Reporting | Not Required | Bonus | Yes/No Statement
    Public Health and Clinical Data Registry Reporting | Electronic Case Reporting | Not Required | Bonus | Yes/No Statement
    Public Health and Clinical Data Registry Reporting | Public Health Registry Reporting | Not Required | Bonus | Yes/No Statement
    Public Health and Clinical Data Registry Reporting | Clinical Data Registry Reporting | Not Required | Bonus | Yes/No Statement
    Bonus (up to 15) | Report to one or more additional public health and clinical data registries beyond the Immunization Registry Reporting measure | — | 5 bonus | Yes/No Statement
    Bonus (up to 15) | Report improvement activities using CEHRT | — | 10 bonus | Yes/No Statement

    * Several measure names have been changed since the proposed rule. This table reflects those changes. We refer readers to section II.E.5.g.(7)(a) of this final rule with comment period for further discussion of measure name changes.

    Comment: In addition to the scoring comments we summarized in the above sections, many commenters expressed concerns related to the difference in scoring for the 2017 Advancing Care Information Transition objectives and measures (referred to in the proposed rule as the Modified Stage 2 Objectives and Measures). Commenters highlighted that for the proposed policy, there are eight available measures in the Advancing Care Information Objectives and Measures while there are only six available measures in the 2017 Advancing Care Information Transition objectives and measures for which MIPS eligible clinicians can earn credit in the performance score of the advancing care information performance category. Commenters believed this would pose a disadvantage to those MIPS eligible clinicians with EHR technology certified to the 2014 Edition who would only be able to report on 2017 Advancing Care Information Transition objectives and measures, and consequently have a lesser opportunity to earn credit in the performance score.

    Response: We appreciate the comments and have outlined our final scoring methodology for the 2017 Advancing Care Information Transition objectives and measures in Table 10 to demonstrate that those MIPS eligible clinicians reporting the 2017 Advancing Care Information Transition objectives and measures will not be disadvantaged. MIPS eligible clinicians will have the ability to earn up to 155 percentage points for the advancing care information performance category, which will be capped at 100 percent, regardless of which set of measures they report. We note that in order to make up the difference in the number of measures included in the performance score for the two measure sets, we have increased the number of percentage points available for the performance weight of the Provide Patient Access and Health Information Exchange measures (up to 20 percent for each measure), as these measures are critical to our goals of patient engagement and interoperability.

    Table 10—Advancing Care Information Performance Category Scoring Methodology for 2017 Advancing Care Information Transition Objectives and Measures

    2017 Advancing care information transition objective (2017 only) | 2017 Advancing care information transition measure * (2017 only) | Required/not required for base score (50%) | Performance score (up to 90%) | Reporting requirement
    Protect Patient Health Information | Security Risk Analysis | Required | 0 | Yes/No Statement
    Electronic Prescribing | E-Prescribing | Required | 0 | Numerator/Denominator
    Patient Electronic Access | Provide Patient Access | Required | Up to 20 | Numerator/Denominator
    Patient Electronic Access | View, Download, or Transmit (VDT) | Not Required | Up to 10 | Numerator/Denominator
    Patient-Specific Education | Patient-Specific Education | Not Required | Up to 10 | Numerator/Denominator
    Secure Messaging | Secure Messaging | Not Required | Up to 10 | Numerator/Denominator
    Health Information Exchange | Health Information Exchange | Required | Up to 20 | Numerator/Denominator
    Medication Reconciliation | Medication Reconciliation | Not Required | Up to 10 | Numerator/Denominator
    Public Health Reporting | Immunization Registry Reporting | Not Required | 0 or 10 | Yes/No Statement
    Public Health Reporting | Syndromic Surveillance Reporting | Not Required | Bonus | Yes/No Statement
    Public Health Reporting | Specialized Registry Reporting | Not Required | Bonus | Yes/No Statement
    Bonus (up to 15%) | Report to one or more additional public health and clinical data registries beyond the Immunization Registry Reporting measure | — | 5% bonus | Yes/No Statement
    Bonus (up to 15%) | Report improvement activities using CEHRT | — | 10% bonus | Yes/No Statement

    * Several measure names have been changed since the proposed rule. This table reflects those changes. We refer readers to section II.E.5.g.(7)(a) of this final rule with comment period for further discussion of measure name changes.

    We are seeking comment on our final scoring methodology policies, and future enhancements to the methodology.

    (e) Scoring Considerations

    Section 1848(q)(5)(E)(ii) of the Act, as added by section 101(c) of the MACRA, provides that in any year in which the Secretary estimates that the proportion of EPs (as defined in section 1848(o)(5) of the Act) who are meaningful EHR users (as determined under section 1848(o)(2) of the Act) is 75 percent or greater, the Secretary may reduce the applicable percentage weight of the advancing care information performance category in the MIPS final score, but not below 15 percent, and increase the weightings of the other performance categories such that the total percentage points of the increase equals the total percentage points of the reduction. We note section 1848(o)(5) of the Act defines an EP as a physician, as defined in section 1861(r) of the Act. For purposes of applying section 1848(q)(5)(E)(ii) of the Act, we proposed to estimate the proportion of physicians as defined in section 1861(r) who are meaningful EHR users as those physician MIPS eligible clinicians who earn an advancing care information performance category score of at least 75 percent under our proposed scoring methodology for the advancing care information performance category for a performance period. This would require the MIPS eligible clinician to earn the advancing care information performance category base score of 50 percent, and an advancing care information performance score of at least 25 percent (or 24 percent plus the Public Health and Clinical Data Registry Reporting bonus point), for an overall performance category score of 75 percent for the advancing care information performance category. We alternatively proposed to estimate the proportion of physicians as defined in section 1861(r) who are meaningful EHR users as those physician MIPS eligible clinicians who earn an advancing care information performance category score of 50 percent (which would only require the MIPS eligible clinician to earn the advancing care information performance category base score) under our proposed scoring methodology for the advancing care information performance category for a performance period, and we solicited comments on both of these proposed thresholds.

    We proposed to base this estimation on data from the relevant performance period, if we have sufficient data available from that period. For example, if feasible, we would consider whether to reduce the applicable percentage weight of the advancing care information performance category in the MIPS final score for the 2019 MIPS payment year based on an estimation using the data from the 2017 performance period. We noted that in section II.E.5.g.(8) of the proposed rule (81 FR 28231-28232) we proposed to reweight the advancing care information performance category to zero for certain hospital-based physicians and other physicians. These physicians meet the definition of MIPS eligible clinicians, but would not be included in the estimation because the advancing care information performance category would be weighted at zero for them. We note that any adjustments of the performance category weights specified in section 1848(q)(5)(E) of the Act based on this policy would be established in future notice and comment rulemaking.
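    For illustration only (not part of the regulatory text), the estimation described above can be sketched in Python; the function name and the scores in the example are hypothetical.

        def meaningful_ehr_user_proportion(physician_category_scores):
            # physician_category_scores: advancing care information performance
            # category scores (0-100) for physician MIPS eligible clinicians,
            # excluding those for whom the category is reweighted to zero.
            meaningful = [s for s in physician_category_scores if s >= 75]
            return len(meaningful) / len(physician_category_scores)

        # If the estimated proportion is 75 percent or greater, the Secretary may
        # reduce the category weight in the MIPS final score, but not below 15
        # percent, through future notice and comment rulemaking.
        print(meaningful_ehr_user_proportion([80, 92, 60, 75]) >= 0.75)  # True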

    The following is a summary of the comments we received regarding our proposed definition of meaningful EHR user.

    Comment: Commenters overwhelmingly supported the proposal to define meaningful EHR users as those MIPS eligible clinicians who earn a score of 75 percent in the advancing care information performance category. They believed that a lower score, such as 50 percent, would not be stringent enough and that the majority of MIPS eligible clinicians would achieve the meaningful EHR user status by simply reporting and attesting to just one patient encounter for each measure. Additionally, many commenters pointed out that this would result in a reduction of the applicable weight of the advancing care information performance category in the MIPS final score and would reduce the focus and emphasis on increased patient engagement and health information exchange.

    Response: We appreciate this feedback and agree that 50 percent would be a very low threshold to be considered a meaningful EHR user in the advancing care information performance category.

    Comment: A few commenters supported the alternate proposal to define meaningful EHR users as those MIPS eligible clinicians who earn a score of 50 percent in the advancing care information performance category. This approach would only require MIPS eligible clinicians to achieve the base score of 50 percent to achieve the meaningful EHR user status. They cited the overall complexity of the reporting requirements, as well as level of difficulty for small practices to score well in the performance category.

    Response: We understand the commenters' concerns regarding the complexity of reporting requirements, and note that we have addressed this through our final scoring policy outlined in section II.E.5.g.(6)(d) of this final rule with comment period. We believe the adjustments made in the scoring methodology address commenters' concerns by reducing the requirements to earn the base score, and thus, there is no need to lower the threshold for being considered a meaningful EHR user.

    Comment: One commenter requested that the definition of a meaningful EHR user and the requirements to achieve this status in the MIPS be further clarified in this rule stating that it is important to clearly define expectations and set a higher standard in order to achieve interoperability and EHR-aided improved health outcomes for Medicare beneficiaries.

    Response: We appreciate this feedback and reiterate that a meaningful EHR user under this policy is a physician, as defined in section 1861(r) of the Act, who earns an advancing care information performance category overall score of at least 75 percent under our primary proposal outlined above. To earn a score of 75 percent in the advancing care information performance category, a physician would need to achieve the base score, plus additional performance and/or bonus score, for a total of 75 percent, or 18.75 performance category points as applied to the MIPS final score.

    After consideration of the comments we received, in combination with our final scoring methodology and its impact on this policy, we are finalizing as proposed our primary proposal for purposes of applying section 1848(q)(5)(E)(ii) of the Act, to estimate the proportion of physicians as defined in section 1861(r) of the Act who are meaningful EHR users as those physician MIPS eligible clinicians who earn an advancing care information performance category score of at least 75 percent for a performance period. We will base this estimation on data from the relevant performance period, if we have sufficient data available from that period. We will not include in this estimation physicians for whom the advancing care information performance category is weighted at zero percent under section 1848(q)(5)(F) of the Act.

    (7) Advancing Care Information Performance Category Objectives and Measures Specifications (a) Advancing Care Information Objectives and Measures Specifications (Referred to in the Proposed Rule as MIPS Objectives and Measures)

    We proposed the objectives and measures for the advancing care information performance category of MIPS as outlined in the proposed rule. We noted that these objectives and measures have been adapted from the Stage 3 objectives and measures as finalized in the 2015 EHR Incentive Programs final rule (80 FR 62829 through 62871); however, we did not propose to maintain the previously established thresholds for MIPS. Any additional changes to the objectives and measures were outlined in the proposed rule. For a more detailed discussion of the Stage 3 objectives and measures, including explanatory material and defined terms, we refer readers to the 2015 EHR Incentive Programs final rule (80 FR 62829 through 62871).

    Objective: Protect Patient Health Information.

    Objective: Protect electronic protected health information (ePHI) created or maintained by the CEHRT through the implementation of appropriate technical, administrative, and physical safeguards.

    Security Risk Analysis Measure: Conduct or review a security risk analysis in accordance with the requirements in 45 CFR 164.308(a)(1), including addressing the security (to include encryption) of ePHI data created or maintained by CEHRT in accordance with requirements in 45 CFR 164.312(a)(2)(iv) and 45 CFR 164.306(d)(3), implement security updates as necessary, and correct identified security deficiencies as part of the MIPS eligible clinician's risk management process.

    Objective: Electronic Prescribing.

    Objective: Generate and transmit permissible prescriptions electronically.

    e-Prescribing Measure: At least one permissible prescription written by the MIPS eligible clinician is queried for a drug formulary and transmitted electronically using CEHRT.

    Denominator: Number of prescriptions written for drugs requiring a prescription in order to be dispensed other than controlled substances during the performance period; or number of prescriptions written for drugs requiring a prescription in order to be dispensed during the performance period.

    Numerator: The number of prescriptions in the denominator generated, queried for a drug formulary, and transmitted electronically using CEHRT.

    For this objective, we note that the 2015 EHR Incentive Program final rule included a discussion of controlled substances in the context of the Stage 3 objective and measure (80 FR 62834), which we understand from stakeholders has caused confusion. We therefore proposed for both MIPS and for the EHR Incentive Programs that health care providers would continue to have the option to include or not include controlled substances that can be electronically prescribed in the denominator. This means that MIPS eligible clinicians may choose to include controlled substances in the definition of “permissible prescriptions” at their discretion where feasible and allowable by law in the jurisdiction where they provide care. The MIPS eligible clinician may also choose not to include controlled substances in the definition of “permissible prescriptions” even if such electronic prescriptions are feasible and allowable by law in the jurisdiction where they provide care.
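    For illustration only (not part of the regulatory text), the denominator option described above can be sketched in Python; the function name and the prescription counts are hypothetical.

        def eprescribing_denominator(non_controlled_rx, controlled_rx, include_controlled):
            # The MIPS eligible clinician may use either denominator: prescriptions
            # written for drugs requiring a prescription other than controlled
            # substances, or all prescriptions written for drugs requiring a
            # prescription, where electronic prescribing of controlled substances
            # is feasible and allowable by law in the clinician's jurisdiction.
            return non_controlled_rx + (controlled_rx if include_controlled else 0)

        # Hypothetical counts: 180 non-controlled and 20 controlled prescriptions.
        print(eprescribing_denominator(180, 20, include_controlled=False))  # 180
        print(eprescribing_denominator(180, 20, include_controlled=True))   # 200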

    Objective: Clinical Decision Support (Alternate Proposal Only).

    Objective: Implement clinical decision support (CDS) interventions focused on improving performance on high-priority health conditions.

    Clinical Decision Support (CDS) Interventions Measure: Implement three clinical decision support interventions related to three CQMs at a relevant point in patient care for the entire performance period. Absent three CQMs related to a MIPS eligible clinician's scope of practice or patient population, the clinical decision support interventions must be related to high-priority health conditions.

    Drug Interaction and Drug-Allergy Checks Measure: The MIPS eligible clinician has enabled and implemented the functionality for drug-drug and drug-allergy interaction checks for the entire performance period.

    Objective: Computerized Provider Order Entry (Alternate Proposal Only).

    Objective: Use computerized provider order entry (CPOE) for medication, laboratory, and diagnostic imaging orders directly entered by any licensed healthcare professional, credentialed medical assistant, or a medical staff member credentialed to and performing the equivalent duties of a credentialed medical assistant, who can enter orders into the medical record per state, local, and professional guidelines.

    Medication Orders Measure: At least one medication order created by the MIPS eligible clinician during the performance period is recorded using CPOE.

    Denominator: Number of medication orders created by the MIPS eligible clinician during the performance period.

    Numerator: The number of orders in the denominator recorded using CPOE.

    Laboratory Orders Measure: At least one laboratory order created by the MIPS eligible clinician during the performance period is recorded using CPOE.

    Denominator: Number of laboratory orders created by the MIPS eligible clinician during the performance period.

    Numerator: The number of orders in the denominator recorded using CPOE.

    Diagnostic Imaging Orders Measure: At least one diagnostic imaging order created by the MIPS eligible clinician during the performance period is recorded using CPOE.

    Denominator: Number of diagnostic imaging orders created by the MIPS eligible clinician during the performance period.

    Numerator: The number of orders in the denominator recorded using CPOE.

    Objective: Patient Electronic Access.

    Objective: The MIPS eligible clinician provides patients (or patient-authorized representative) with timely electronic access to their health information and patient-specific education.

    Patient Access Measure: For at least one unique patient seen by the MIPS eligible clinician: (1) The patient (or the patient-authorized representative) is provided timely access to view online, download, and transmit his or her health information; and (2) The MIPS eligible clinician ensures the patient's health information is available for the patient (or patient-authorized representative) to access using any application of their choice that is configured to meet the technical specifications of the Application Programming Interface (API) in the MIPS eligible clinician's CEHRT.

    Denominator: The number of unique patients seen by the MIPS eligible clinician during the performance period.

    Numerator: The number of patients in the denominator (or patient-authorized representative) who are provided timely access to health information to view online, download, and transmit to a third party and to access using an application of their choice that is configured to meet the technical specifications of the API in the MIPS eligible clinician's CEHRT.

    Patient-Specific Education Measure: The MIPS eligible clinician must use clinically relevant information from CEHRT to identify patient-specific educational resources and provide electronic access to those materials to at least one unique patient seen by the MIPS eligible clinician.

    Denominator: The number of unique patients seen by the MIPS eligible clinician during the performance period.

    Numerator: The number of patients in the denominator who were provided electronic access to patient-specific educational resources using clinically relevant information identified from CEHRT during the performance period.

    Objective: Coordination of Care Through Patient Engagement.

    Objective: Use CEHRT to engage with patients or their authorized representatives about the patient's care.

    View, Download, Transmit (VDT) Measure: During the performance period, at least one unique patient (or patient-authorized representative) seen by the MIPS eligible clinician actively engages with the EHR made accessible by the MIPS eligible clinician. A MIPS eligible clinician may meet the measure by either—(1) view, download or transmit to a third party their health information; or (2) access their health information through the use of an API that can be used by applications chosen by the patient and configured to the API in the MIPS eligible clinician's CEHRT; or (3) a combination of (1) and (2).

    Denominator: Number of unique patients seen by the MIPS eligible clinician during the performance period.

    Numerator: The number of unique patients (or their authorized representatives) in the denominator who have viewed online, downloaded, or transmitted to a third party the patient's health information during the performance period and the number of unique patients (or their authorized representatives) in the denominator who have accessed their health information through the use of an API during the performance period.

    Secure Messaging Measure: For at least one unique patient seen by the MIPS eligible clinician during the performance period, a secure message was sent using the electronic messaging function of CEHRT to the patient (or the patient-authorized representative), or in response to a secure message sent by the patient (or the patient-authorized representative).

    Denominator: Number of unique patients seen by the MIPS eligible clinician during the performance period.

    Numerator: The number of patients in the denominator for whom a secure electronic message is sent to the patient (or patient-authorized representative) or in response to a secure message sent by the patient (or patient-authorized representative), during the performance period.

    Patient-Generated Health Data Measure: Patient-generated health data or data from a non-clinical setting is incorporated into the CEHRT for at least one unique patient seen by the MIPS eligible clinician during the performance period.

    Denominator: Number of unique patients seen by the MIPS eligible clinician during the performance period.

    Numerator: The number of patients in the denominator for whom data from non-clinical settings, which may include patient-generated health data, is captured through the CEHRT into the patient record during the performance period.

    Objective: Health Information Exchange.

    Objective: The MIPS eligible clinician provides a summary of care record when transitioning or referring their patient to another setting of care, receives or retrieves a summary of care record upon the receipt of a transition or referral or upon the first patient encounter with a new patient, and incorporates summary of care information from other health care clinicians into their EHR using the functions of CEHRT.

    Send a Summary of Care (formerly Patient Care Record Exchange) Measure: For at least one transition of care or referral, the MIPS eligible clinician that transitions or refers their patient to another setting of care or health care clinician—(1) creates a summary of care record using CEHRT; and (2) electronically exchanges the summary of care record.

    Denominator: Number of transitions of care and referrals during the performance period for which the MIPS eligible clinician was the transferring or referring clinician.

    Numerator: The number of transitions of care and referrals in the denominator where a summary of care record was created using CEHRT and exchanged electronically.

    Request/Accept Summary of Care (formerly Patient Care Record) Measure: For at least one transition of care or referral received or patient encounter in which the MIPS eligible clinician has never before encountered the patient, the MIPS eligible clinician receives or retrieves and incorporates into the patient's record an electronic summary of care document.

    Denominator: Number of patient encounters during the performance period for which a MIPS eligible clinician was the receiving party of a transition or referral or has never before encountered the patient and for which an electronic summary of care record is available.

    Numerator: Number of patient encounters in the denominator where an electronic summary of care record received is incorporated by the clinician into the CEHRT.

    Clinical Information Reconciliation Measure: For at least one transition of care or referral received or patient encounter in which the MIPS eligible clinician has never before encountered the patient, the MIPS eligible clinician performs clinical information reconciliation. The MIPS eligible clinician must implement clinical information reconciliation for the following three clinical information sets: (1) Medication. Review of the patient's medication, including the name, dosage, frequency, and route of each medication. (2) Medication allergy. Review of the patient's known medication allergies. (3) Current Problem list. Review of the patient's current and active diagnoses.

    Denominator: Number of transitions of care or referrals during the performance period for which the MIPS eligible clinician was the recipient of the transition or referral or has never before encountered the patient.

    Numerator: The number of transitions of care or referrals in the denominator where the following three clinical information reconciliations were performed: Medication list, medication allergy list, and current problem list.
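    As a minimal illustrative sketch of the counting described above (in Python, with hypothetical field names such as medications_reconciled that are not part of the measure specifications), the numerator counts only those transitions or referrals for which all three reconciliations were performed:

        def clinical_information_reconciliation(transitions):
            # Denominator: transitions/referrals received (or first encounters
            # with a new patient) during the performance period.
            denominator = len(transitions)
            # Numerator: those for which medication, medication allergy, and
            # current problem list reconciliations were all performed.
            numerator = sum(
                1 for t in transitions
                if t["medications_reconciled"]
                and t["medication_allergies_reconciled"]
                and t["problem_list_reconciled"]
            )
            return numerator, denominator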

    Objective: Public Health and Clinical Data Registry Reporting.

    Objective: The MIPS eligible clinician is in active engagement with a public health agency or clinical data registry to submit electronic public health data in a meaningful way using CEHRT, except where prohibited, and in accordance with applicable law and practice.

    Immunization Registry Reporting Measure: The MIPS eligible clinician is in active engagement with a public health agency to submit immunization data and receive immunization forecasts and histories from the public health immunization registry/immunization information system (IIS).

    Syndromic Surveillance Reporting Measure: The MIPS eligible clinician is in active engagement with a public health agency to submit syndromic surveillance data from a non-urgent care ambulatory setting where the jurisdiction accepts syndromic data from such settings and the standards are clearly defined.

    Electronic Case Reporting Measure: The MIPS eligible clinician is in active engagement with a public health agency to electronically submit case reporting of reportable conditions.

    Public Health Registry Reporting Measure: The MIPS eligible clinician is in active engagement with a public health agency to submit data to public health registries.

    Clinical Data Registry Reporting Measure: The MIPS eligible clinician is in active engagement to submit data to a clinical data registry.

    (b) 2017 Advancing Care Information Transition Objectives and Measures Specifications (Referred to in the Proposed Rule as Modified Stage 2)

    We proposed the 2017 Advancing Care Information Transition objectives and measures for the advancing care information performance category of MIPS as outlined in this section of the proposed rule. We note that these objectives and measures have been adapted from the Modified Stage 2 objectives and measures as finalized in the 2015 EHR Incentive Programs final rule (80 FR 62793-62825); however, we have not proposed to maintain the previously established thresholds for MIPS. Any additional changes to the objectives and measures are outlined in this section of the proposed rule. For a more detailed discussion of the Modified Stage 2 objectives and measures, including explanatory material and defined terms, we refer readers to the 2015 EHR Incentive Programs final rule (80 FR 62793-62825).

    Objective: Protect Patient Health Information.

    Objective: Protect electronic protected health information (ePHI) created or maintained by the CEHRT through the implementation of appropriate technical, administrative, and physical safeguards.

    Security Risk Analysis Measure: Conduct or review a security risk analysis in accordance with the requirements in 45 CFR 164.308(a)(1), including addressing the security (to include encryption) of ePHI data created or maintained by CEHRT in accordance with requirements in 45 CFR 164.312(a)(2)(iv) and 45 CFR 164.306(d)(3), and implement security updates as necessary and correct identified security deficiencies as part of the MIPS eligible clinician's risk management process.

    Objective: Electronic Prescribing.

    Objective: MIPS eligible clinicians must generate and transmit permissible prescriptions electronically.

    E-Prescribing Measure: At least one permissible prescription written by the MIPS eligible clinician is queried for a drug formulary and transmitted electronically using CEHRT.

    Denominator: Number of prescriptions written for drugs requiring a prescription in order to be dispensed other than controlled substances during the performance period; or number of prescriptions written for drugs requiring a prescription in order to be dispensed during the performance period.

    Numerator: The number of prescriptions in the denominator generated, queried for a drug formulary, and transmitted electronically using CEHRT.
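    As a minimal illustrative sketch of the denominator option described earlier in this section (in Python, with hypothetical field names such as is_controlled_substance that are not part of the measure specifications), the clinician's choice about controlled substances changes only which prescriptions are counted in the denominator:

        def eprescribing_measure(prescriptions, include_controlled_substances):
            # Denominator: prescriptions for drugs requiring a prescription to be
            # dispensed; controlled substances that can be e-prescribed may be
            # included or excluded at the clinician's discretion where feasible
            # and allowable by law.
            denominator_rx = [
                p for p in prescriptions
                if p["requires_prescription"]
                and (include_controlled_substances or not p["is_controlled_substance"])
            ]
            # Numerator: of those, prescriptions generated, queried for a drug
            # formulary, and transmitted electronically using CEHRT.
            numerator = sum(
                1 for p in denominator_rx
                if p["formulary_queried"] and p["transmitted_electronically"]
            )
            return numerator, len(denominator_rx)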

    Objective: Clinical Decision Support (alternate proposal only).

    Objective: Implement clinical decision support (CDS) interventions focused on improving performance on high-priority health conditions.

    Clinical Decision Support (CDS) Interventions Measure: Implement three clinical decision support interventions related to three CQMs at a relevant point in patient care for the entire performance period. Absent three CQMs related to a MIPS eligible clinician's scope of practice or patient population, the clinical decision support interventions must be related to high-priority health conditions.

    Drug Interaction and Drug-Allergy Checks Measure: The MIPS eligible clinician has enabled and implemented the functionality for drug-drug and drug-allergy interaction checks for the entire performance period.

    Objective: Computerized Provider Order Entry (alternate proposal only).

    Objective: Use computerized provider order entry (CPOE) for medication, laboratory, and diagnostic imaging orders directly entered by any licensed healthcare professional, credentialed medical assistant, or a medical staff member credentialed to and performing the equivalent duties of a credentialed medical assistant, who can enter orders into the medical record per state, local, and professional guidelines.

    Medication Orders Measure: At least one medication order created by the MIPS eligible clinician during the performance period is recorded using CPOE.

    Denominator: Number of medication orders created by the MIPS eligible clinician during the performance period.

    Numerator: The number of orders in the denominator recorded using CPOE.

    Laboratory Orders Measure: At least one laboratory order created by the MIPS eligible clinician during the performance period is recorded using CPOE.

    Denominator: Number of laboratory orders created by the MIPS eligible clinician during the performance period.

    Numerator: The number of orders in the denominator recorded using CPOE.

    Diagnostic Imaging Orders Measure: At least one diagnostic imaging order created by the MIPS eligible clinician during the performance period is recorded using CPOE.

    Denominator: Number of diagnostic imaging orders created by the MIPS eligible clinician during the performance period.

    Numerator: The number of orders in the denominator recorded using CPOE.

    Objective: Patient Electronic Access.

    Objective: The MIPS eligible clinician provides patients (or patient-authorized representative) with timely electronic access to their health information and patient-specific education.

    Patient Access Measure: At least one patient seen by the MIPS eligible clinician during the performance period is provided timely access to view online, download, and transmit to a third party their health information subject to the MIPS eligible clinician's discretion to withhold certain information.

    Denominator: The number of unique patients seen by the MIPS eligible clinician during the performance period.

    Numerator: The number of patients in the denominator (or patient authorized representative) who are provided timely access to health information to view online, download, and transmit to a third party.

    View, Download, Transmit (VDT) Measure: At least one patient seen by the MIPS eligible clinician during the performance period (or patient-authorized representative) views, downloads or transmits their health information to a third party during the performance period.

    Denominator: Number of unique patients seen by the MIPS eligible clinician during the performance period.

    Numerator: The number of unique patients (or their authorized representatives) in the denominator who have viewed online, downloaded, or transmitted to a third party the patient's health information during the performance period.

    Objective: Patient-Specific Education.

    Objective: The MIPS eligible clinician provides patients (or patient authorized representative) with timely electronic access to their health information and patient-specific education.

    Patient-Specific Education Measure: The MIPS eligible clinician must use clinically relevant information from CEHRT to identify patient-specific educational resources and provide access to those materials to at least one unique patient seen by the MIPS eligible clinician.

    Denominator: The number of unique patients seen by the MIPS eligible clinician during the performance period.

    Numerator: The number of patients in the denominator who were provided access to patient-specific educational resources using clinically relevant information identified from CEHRT during the performance period.

    Objective: Secure Messaging.

    Objective: Use CEHRT to engage with patients or their authorized representatives about the patient's care.

    Secure Messaging Measure: For at least one patient seen by the MIPS eligible clinician during the performance period, a secure message was sent using the electronic messaging function of CEHRT to the patient (or the patient-authorized representative), or in response to a secure message sent by the patient (or the patient authorized representative) during the performance period.

    Denominator: Number of unique patients seen by the MIPS eligible clinician during the performance period.

    Numerator: The number of patients in the denominator for whom a secure electronic message is sent to the patient (or patient-authorized representative) or in response to a secure message sent by the patient (or patient-authorized representative), during the performance period.

    Objective: Health Information Exchange.

    Objective: The MIPS eligible clinician provides a summary of care record when transitioning or referring their patient to another setting of care, receives or retrieves a summary of care record upon the receipt of a transition or referral or upon the first patient encounter with a new patient, and incorporates summary of care information from other health care clinicians into their EHR using the functions of CEHRT.

    Health Information Exchange Measure: The MIPS eligible clinician that transitions or refers their patient to another setting of care or health care clinician (1) uses CEHRT to create a summary of care record; and (2) electronically transmits such summary to a receiving health care clinician for at least one transition of care or referral.

    Denominator: Number of transitions of care and referrals during the performance period for which the EP was the transferring or referring health care clinician.

    Numerator: The number of transitions of care and referrals in the denominator where a summary of care record was created using CEHRT and exchanged electronically.

    Objective: Medication Reconciliation.

    Medication Reconciliation Measure: The MIPS eligible clinician performs medication reconciliation for at least one transition of care in which the patient is transitioned into the care of the MIPS eligible clinician.

    Denominator: Number of transitions of care or referrals during the performance period for which the MIPS eligible clinician was the recipient of the transition or referral or has never before encountered the patient.

    Numerator: The number of transitions of care or referrals in the denominator where the following three clinical information reconciliations were performed: Medication list, medication allergy list, and current problem list.

    Objective: Public Health Reporting.

    Objective: The MIPS eligible clinician is in active engagement with a public health agency or clinical data registry to submit electronic public health data in a meaningful way using CEHRT, except where prohibited, and in accordance with applicable law and practice.

    Immunization Registry Reporting Measure: The MIPS eligible clinician is in active engagement with a public health agency to submit immunization data.

    Syndromic Surveillance Reporting Measure: The MIPS eligible clinician is in active engagement with a public health agency to submit syndromic surveillance data.

    Specialized Registry Reporting Measure: The MIPS eligible clinician is in active engagement to submit data to a specialized registry.

    We note that the 2017 Advancing Care Information Transition objectives and measures specifications that we proposed are for those MIPS eligible clinicians that are using 2014 Edition CEHRT. We are referring to this as the “2017 Advancing Care Information Transition objectives and measures” in this final rule with comment period, although it was referred to in the proposed rule as the “Modified Stage 2 objectives and measures” set. In addition, in this final rule with comment period, we refer to the measures specified for the advancing care information performance category described in section II.E.5.g.(7) of the proposed rule (81 FR 28221 through 28223) that correlate to Stage 3 as the “Advancing Care Information objectives and measures,” although they were referred to in the proposed rule as the “MIPS objectives and measures” set. We note that these terms are more specific to MIPS and to the advancing care information performance category than the terms used in the proposed rule. We have also decided to rename several of the proposed measures to use titles that we believe are more illustrative of the substance of the measures. We note that we are not changing the names of the objectives associated with these measures. The measures being renamed are as follows:

    Proposed title: Patient Access. Revised title: Provide Patient Access.
    Proposed title: Patient Care Record Exchange. Revised title: Send a Summary of Care.
    Proposed title: Request/Accept Patient Care Record. Revised title: Request/Accept Summary of Care.

    We will be referring to these measures by their revised titles throughout the remainder of this final rule with comment period.

    The following is a summary of the comments we received regarding the proposal to adopt the objectives and measures detailed at 81 FR 28226-28230 for the advancing care information performance category.

    Comment: One commenter suggested the e-Prescribing measure be included in both the base score as well as the performance score of the advancing care information performance category to give more flexibility and offer an opportunity for MIPS eligible clinicians to earn more points, especially for those MIPS eligible clinicians who will be using a 2014 Edition CEHRT in 2017.

    Response: We agree with the commenters who stated that MIPS eligible clinicians should not be disadvantaged by having to report on the 2017 Advancing Care Information Transition objectives and measures in 2017. While we have not added the e-Prescribing measure to the performance score, we have added many other measures to give MIPS eligible clinicians the opportunity to increase their performance score under the advancing care information performance category. We refer readers to section II.E.5.g.(6)(a) of this final rule with comment period for further discussion of the scoring policy to see how we have equalized the opportunities for MIPS eligible clinicians reporting using technology certified to the 2014 Edition and those using technology certified to the 2015 Edition for the advancing care information performance category for 2017.

    Comment: Many commenters supported the inclusion of the e-Prescribing measure in the base score of the advancing care information performance category. Some recommended modifications to the measure such as changing the threshold to yes/no. A commenter supported adoption of the e-Prescribing measure on the condition that it have no minimum threshold and no performance measurement.

    Response: We disagree that the threshold should be yes/no as we continue to believe that reporting a numerator and denominator is more appropriate because it will provide us with the data necessary to monitor performance on this measure. Performance on the measure, under the EHR Incentive Programs, has been consistently much higher than the thresholds set. We believe that through e-Prescribing, errors from paper prescriptions are reduced, and therefore, inclusion in the base score is justified. We also disagree with commenters who recommended adding e-Prescribing to the performance score. Since historical performance on this measure under the EHR Incentive Program has been high, we do not believe that this measure will help MIPS eligible clinicians distinguish themselves from others in regard to performance, and thus we have not included it in the performance score.

    Comment: A commenter urged CMS to take into account that measurement of e-Prescribing is often not a measurement of the physician's diligence or capability, but rather a measurement of factors completely outside the physician's control, such as the ability of nearby pharmacies to accept electronic prescriptions. Another commenter recommended an exception to e-Prescribing for MIPS eligible clinicians in rural areas where most pharmacies do not have capability to accept electronic prescriptions.

    Response: While we understand these concerns, section 1848(o)(2)(A)(i) of the Act requires electronic prescribing as part of using CEHRT in a meaningful manner. We note that we proposed an exclusion for MIPS eligible clinicians who write fewer than 100 permissible prescriptions. Further, we believe the inclusion of the Electronic Prescribing objective in the base score is appropriate because, as noted in the Medicare and Medicaid Programs; Electronic Health Record Incentive Program; Final Rule (75 FR 44338), it is the most widely adopted form of electronic exchange occurring and has been proven to reduce medication errors.

    Comment: For the e-Prescribing measure, a commenter requested clarification that MIPS eligible clinicians are permitted to optionally exclude from the denominator any “standing” or “protocol” orders for medications that are predetermined for a given procedure or a given set of patient characteristics.

    Response: We disagree that the denominator should exclude “standing” prescriptions and continue to believe that the denominator should be the number of prescriptions written for drugs requiring a prescription in order to be dispensed other than controlled substances during the performance period; or number of prescriptions written for drugs requiring a prescription in order to be dispensed during the performance period.

    Comment: One commenter stated that the e-Prescribing measure will be topped out by the time that MIPS is implemented and should be removed.

    Response: While performance on the e-Prescribing measure may be high for EPs participating in the EHR Incentive Programs, the MIPS program includes many other clinicians who may have limited experience with this measure. Furthermore, as we have previously stated, section 1848(o)(2)(A)(i) of the Act requires electronic prescribing as part of using CEHRT in a meaningful manner, and thus, we have chosen to make it a required measure under the advancing care information performance category.

    Comment: A commenter asked how e-Prescribing for the prescription of controlled substances should be measured for MIPS eligible clinicians who have not yet adopted the upgraded technology associated with the 2015 Edition.

    Response: We proposed (81 FR 28227) that MIPS eligible clinicians would continue to have the option to include or not include controlled substances that can be electronically prescribed in the denominator of the e-Prescribing measure. This means that MIPS eligible clinicians may choose to include controlled substances in the definition of “permissible prescriptions” at their discretion where feasible and allowable by law in the jurisdiction where they provide care. The MIPS eligible clinician may also choose not to include controlled substances in the definition of “permissible prescriptions” even if such electronic prescriptions are feasible and allowable by law in the jurisdiction where they provide care. This policy is the same for MIPS eligible clinicians using EHR technology certified to the 2014 and the 2015 Editions.

    Comment: Many commenters supported the inclusion of the Patient Electronic Access objective. Many commenters appreciated the emphasis on patient electronic access throughout the advancing care information performance category and agreed with providing flexibility for MIPS eligible clinicians to provide information to patients.

    Response: We appreciate the support and will require the Provide Patient Access measure of the Patient Electronic Access objective in the base score of the advancing care information performance category. We continue to believe that through providing access to information and increased patient engagement, health care outcomes can be improved.

    Comment: Many commenters claimed that MIPS eligible clinicians will continue to struggle to meet the Patient Electronic Access objective. Some commenters believed the Patient Electronic Access objective holds MIPS eligible clinicians responsible for the actions of patients and other physicians outside of their control. A few noted that internet access issues will suppress small and rural MIPS eligible clinicians' performance scores in the advancing care information performance category, particularly in achieving success with Patient Electronic Access. Another commenter expressed concern regarding the Patient Electronic Access objective due to a lack of computers and electronic access among minority and non-English speaking patients. One commenter recommended that MIPS eligible clinicians be given 4 business days to provide this information, rather than 48 hours, because MIPS eligible clinicians need time to review, correct, and verify the accuracy of the information.

    Response: While we understand these concerns, we believe providing patients access to their health information is a critical step in improving patient care, increasing transparency, and engaging patients. Under the Patient Electronic Access objective, the Provide Patient Access measure only requires that patients are provided timely access to view online, download, and transmit their health information; and that the information is available to access using any application of their choice that is configured to meet the technical specifications of the Application Programming Interface (API) in the MIPS eligible clinician's CEHRT. This measure is required for the base score. The base score requirement is for MIPS eligible clinicians to report a numerator (of at least one) and a denominator, which we believe is reasonable and achievable by most MIPS eligible clinicians regardless of their practice circumstances or the characteristics of their patient population. This measure does not require that the patient take any action. (Note that the View, Download or Transmit measure under the Coordination of Care Through Patient Engagement objective depends on the actions of the patient, but that measure is part of the performance score and is not required.) The other measure under the Patient Electronic Access objective is the Patient-Specific Education measure, which is part of the performance score and is not required.

    We additionally note that we have increased the flexibility of our scoring methodology, allowing MIPS eligible clinicians to focus on measures that best represent their practice in the performance score, and thus this measure is optional for reporting as part of the performance score.
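    As a rough illustration of the base score reporting logic described above, the following minimal sketch (in Python, under the simplifying assumption that each required measure is reported as a numerator and denominator; the measure name shown is an example only) checks that a numerator of at least one and a denominator are reported for each required measure:

        def meets_base_score_reporting(reported):
            # reported maps a required measure name to its (numerator, denominator).
            return all(
                denominator >= 1 and numerator >= 1
                for numerator, denominator in reported.values()
            )

        # Example: reporting a numerator of at least one for the Provide Patient
        # Access measure satisfies its base score reporting requirement.
        print(meets_base_score_reporting({"Provide Patient Access": (1, 250)}))  # True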

    Comment: A few commenters suggested that both measures in the Patient Electronic Access objective be retired. They believed that CMS data shows most clinicians score very well on the Patient-Specific Education and Provide Patient Access measures, and thus clinicians should not have to report on them. One commenter suggested that the Patient-Specific Education measure be considered “topped out” due to historically high performance and expressed concern that the manner in which the Patient-Specific Education measure is currently specified is overly constrained and limiting to providers who may prefer workflows to provide patient education beyond what is permitted by CMS and certification.

    Response: We disagree. As we have indicated previously, we believe these measures are a critical step to improving patient health, increasing transparency, and engaging patients in their care. We additionally note there are certain types of clinicians that were not eligible to participate under the EHR Incentive Programs but are considered MIPS eligible clinicians, and we believe that it is appropriate to include the Patient Electronic Access objective and its associated measures. We note that under Stage 2 of the EHR Incentive Programs, EPs achieved an average of 91 percent on the Provide Patient Access measure. While under the EHR Incentive Programs EPs performed well, we will be gathering data on MIPS eligible clinicians to determine whether the Patient-Specific Education and Patient Electronic Access measures should be included in future MIPS performance periods. We welcome specific examples and suggestions for changes to the existing measures and potential new measures to replace the existing ones.

    Comment: A commenter sought clarification on the Patient Electronic Access objective around the API availability and the use of 2014 Edition CEHRT. Another commenter asked what is meant by the phrase “subject to the MIPS eligible clinician's discretion to withhold certain information” and asked why it was included.

    Response: The specifications of the 2017 Advancing Care Information Transition Provide Patient Access measure do not require use of an API, and thus MIPS eligible clinicians who use EHR technology certified to the 2014 Edition and report this measure would not need to use an API for this measure. We refer readers to section II.E.5.g.(7) of this final rule with comment period for a description of the measure specifications. The Advancing Care Information Provide Patient Access measure is identical to the Patient Electronic Access measure that was finalized in the 2015 EHR Incentive Programs final rule for Stage 3. We maintain that MIPS eligible clinicians who provide electronic access to patient health information should have the ability to withhold any information from disclosure if the disclosure of the information is prohibited by federal, state, or local laws, or if such information, if provided, may result in significant patient harm. We refer readers to the 2015 EHR Incentive Programs final rule (80 FR 62841 through 62852) for a discussion of the Stage 3 Patient Electronic Access measure.

    Comment: A commenter suggested that the View, Download and Transmit and Secure Messaging measures be made optional and noted the previous reductions in thresholds as an indication that there are significant challenges to meeting these measures.

    Response: While we understand that there are challenges with these measures, we continue to believe that the measures in the Coordination of Care Through Patient Engagement objective are an essential component of improving health care. We note that under our revised scoring methodology, these measures will not be required in the base score of the advancing care information category.

    Comment: One commenter believed that, although it is a reasonable policy for CMS to require MIPS eligible clinicians to make information electronically available to their patients within a reasonable time frame, the numerator requirements of the View, Download, or Transmit measure are concerning because they take into account only the actions of patients. Some stated that MIPS eligible clinicians who are diligent in making information securely available to their patients should not be penalized simply because the patient is not interested in accessing the information.

    Response: The View, Download, or Transmit measure is not required in the base score of the advancing care information performance category under our final scoring policy. It is available for MIPS eligible clinicians who choose to report on the measure to increase their performance score.

    Comment: A few commenters recommended removing the Send a Summary of Care measure (formerly named the Patient Care Record Exchange measure) under the Health Information Exchange objective from the base score because some specialists may not have any transitions of care. One suggested that a minimum exclusion be provided for MIPS eligible clinicians that do not transition care or refer patients during the performance period.

    Response: We disagree with the recommendation to remove this measure from the base score. One of the primary focuses of the advancing care information performance category is to encourage the exchange of health information using CEHRT. The Send a Summary of Care measure encourages the incorporation of summary of care information from other health care providers and clinicians into the MIPS eligible clinician's EHR to support better patient outcomes. We believe that MIPS eligible clinicians, particularly specialists, have the opportunity to send or receive a summary of care record from another care setting or clinician at least once during a MIPS performance period. In addition, since meeting the requirements of this measure to earn the base score involves reporting a numerator and denominator of at least one rather than meeting a percentage threshold, we believe this offers enough flexibility for MIPS eligible clinicians who are concerned that they rarely exchange patient health information with other providers.

    Comment: A commenter requested that the Patient-Specific Education measure under the Patient Electronic Access objective not be limited to educational materials identified by CEHRT, as they believe many medical specialty societies have developed patient-facing Web sites and educational materials.

    Response: We appreciate this suggestion and will consider it in future years of MIPS. However, as finalized for the 2017 performance period, the Patient-Specific Education measure is limited to educational materials identified by CEHRT. We note that we have refined our proposal and, in 2017, this measure is not required in the base score of the advancing care information category. MIPS eligible clinicians may choose whether to report this measure as part of the performance score.

    Comment: One commenter asked for clarification about when the patient-specific education was to be provided. The 2017 Advancing Care Information Transition measure in the proposed rule (based on Modified Stage 2 measure of the EHR Incentive Program) requires that patient-specific education be provided during the performance period while the 2015 EHR Incentive Programs final rule allows patient education to be provided any time between the start of the EHR reporting period and the date of attestation to count toward the numerator.

    Response: While the commenter is correct about the policy established for the EHR Incentive Programs, under the MIPS, the patient-specific education must be provided within the performance period. We additionally note for the commenter that we included a proposal for the EHR Incentive Programs related to measure calculations for actions outside the EHR reporting period in the recent hospital Outpatient Prospective Payment System Proposed Rule (81 FR 45745 through 45746) for reporting in CY 2017 for the EHR Incentive Program.

    Comment: A commenter requested that we stay consistent with the Stage 3 measure exclusion for the Patient-Specific Education measure and allow MIPS eligible clinicians with no office visits during the performance period be permitted to report a “null value” and achieve full base and performance score credit.

    Response: In our final scoring methodology for the advancing care information category, the Patient-Specific Education measure is not a required measure for reporting in the base score, and thus we do not believe it is necessary to provide an exclusion for this measure. Instead MIPS eligible clinicians may choose to report the measure to earn credit in the advancing care information performance score. We believe it is appropriate to require the reporting of a numerator and denominator to add to the performance score. We refer readers to section II.E.5.g.(6)(a) for more discussion of our final scoring policy. We additionally note that there are exclusions for MIPS eligible clinicians who are considered non-patient facing, and direct readers to section II.E.3. of this final rule with comment period for further discussion of this policy.

    Comment: A commenter questioned whether the MIPS eligible clinician or the patient is responsible for the View, Download, and Transmit measure under the Coordination of Care Through Patient Engagement objective, as the description states that the MIPS eligible clinician may meet the measure and does not reflect the necessity of a patient viewing, downloading, or transmitting.

    Response: We appreciate that the commenter brought this error to our attention. Our intention was that a MIPS eligible clinician may meet the measure if at least one unique patient viewed, downloaded, or transmitted to a third party their health information. We are revising the Advancing Care Information measure under the Coordination of Care Through Patient Engagement objective to reflect our intended policy.

    Comment: Some commenters supported the inclusion of the Secure Messaging measure. A few recommended that it be converted into a yes/no measure. A commenter supported adoption of the proposed Secure Messaging measure, provided that the finalized measure have no minimum threshold and no performance measurement. A few commenters requested the removal of the requirement for secure messaging between the patient and MIPS eligible clinician for nursing home residents and for patients who receive their primary care at home, since those patients will not sign up. A commenter recommended changing the numerator of the Secure Messaging measure to “responses to secure messages sent by patients,” and the denominator to “all secure messages sent by patients,” to address the misalignment between the numerator and denominator in the proposed measure.

    Response: We appreciate the comments and the support for the Secure Messaging measure. In our revised scoring policy, we are finalizing our scoring methodology such that the Secure Messaging measure is not one of the required measures of the advancing care information performance category. MIPS eligible clinicians may still choose to report the measure to earn credit in the performance score, and thus have the option to determine whether this measure represents their practice. We refer readers to section II.E.5.g.(6)(a) of this final rule with comment period for further discussion of our final scoring policy.

    We disagree with the suggestion to change Secure Messaging to a yes/no measure, or to change the numerator and denominator, as this measure is meant to promote the sending of secure messages by the MIPS eligible clinician and not by patients. We believe that it is more appropriate for the numerator to consist of the number of patients in the denominator to whom a secure electronic message is sent, or to whom a message is sent in response to a secure message sent by the patient (or patient-authorized representative), during the performance period.

    Comment: Some commenters opposed the inclusion of the Health Information Exchange objective and the associated measures: Send a Summary of Care, Request/Accept Summary of Care, and Clinical Information Reconciliation. They noted that it holds MIPS eligible clinicians responsible for information over which they have no control and recommended the objective be removed. The commenters believed that the Health Information Exchange objective holds MIPS eligible clinicians responsible for the actions of patients and other physicians outside of their control. Other commenters opposed the measures included in the Health Information Exchange objective because those measures overestimate the interoperability of EHR technology. Commenters also expressed concern that these measures would emphasize the quantity of information exchanged rather than the sharing of relevant information. A few commenters indicated that past experience with the Health Information Exchange objective in the EHR Incentive Programs has been challenging for EPs. Challenges include costs, a lack of contacts at hospital systems to effectively communicate where an electronic transition of care document should be sent, and inadequate training and understanding of how to use EHR functionality even when it is fully enabled.

    Response: While we appreciate these concerns, we believe the benefits of health information exchange outweigh the challenges. As we stated in the 2015 EHR Incentive Programs final rule (80 FR 62804), we believe that the electronic exchange of health information between providers and clinicians encourages the sharing, from one provider or clinician to another, of the patient care summary and of important information that the patient may not have been able to provide. This can significantly improve the quality and safety of referral care and reduce unnecessary and redundant testing. EHRs and the electronic exchange of health information, either directly or through health information exchanges, can reduce the burden of such communication. Therefore, we believe it is appropriate to include the Health Information Exchange objective and to include the Send a Summary of Care and the Request/Accept Summary of Care measures as required in the base score of the advancing care information performance category.

    Comment: A commenter was concerned about MIPS eligible clinicians who do not have access to a health information exchange and in these cases, recommended a hardship exception option for this objective.

    Response: We note that there is no requirement to have access to a health information exchange for the Health Information Exchange objective. Rather, for the Request/Accept Summary of Care measure (formerly the Patient Care Record measure), the summary of care record must be electronically exchanged. We note that the intent of allowing flexibility around exchange via any electronic means is to promote and facilitate a wide range of options. We refer readers to the discussion of the Health Information Exchange objective at 80 FR 62852 through 62862, as it provides a thorough discussion of transport mechanisms for the summary of care record.

    Comment: Some commenters believe that internet access issues will stifle performance in the advancing care information performance category for MIPS eligible clinicians in small and rural settings, especially those with high staff turnover, in trying to satisfy the Health Information Exchange objective.

    Response: We understand this concern and recognize that nationwide access to broadband is still a challenge for some MIPS eligible clinicians. If a MIPS eligible clinician does not have sufficient internet access, they may qualify for reweighting of the advancing care information performance category score. We refer readers to the discussion of MIPS eligible clinicians facing a significant hardship in section II.E.5.g.(8)(a)(ii) of this final rule with comment period.

    Comment: A commenter stated that the Health Information Exchange objective does not adequately reflect EHR interoperability. They believe the metric is too focused on the quantity of information moved and not the relevance of these exchanges. They urged CMS to re-focus the advancing care information performance category on interoperability by developing specialty-specific interoperability use cases rather than measuring the quantity of data exchanged.

    Response: We are very interested in adopting measures that reflect interoperability. We urge interested parties to participate in our solicitation call for new measures that will be available in the next few months.

    Comment: A commenter urged CMS to clarify whether the denominator of the Request/Accept Summary of Care measure under the Health Information Exchange objective includes the number of transitions of care sent to the MIPS eligible clinicians with CEHRT, and whether MIPS eligible clinicians are able to exclude referrals from this measure if the receiving clinician does not have CEHRT fully implemented.

    Response: The calculation of the denominator for the 2017 Advancing Care Information Transition measure, Health Information Exchange, is different from that of the Advancing Care Information measure, Request/Accept Summary of Care. As we noted in the 2015 EHR Incentive Programs final rule (80 FR 62804-62806) we did not adopt a requirement for the Modified Stage 2 Health Information Exchange measure (which correlates to the 2017 Advancing Care Information Transition measure) that the recipient to whom the EP sends a summary of care document possess CEHRT or even an EHR in order to be the recipient of an electronic summary of care document. However, measure 2 of the Stage 3 Health Information Exchange objective (which correlates to the Advancing Care Information measure, Request/Accept Summary of Care) was finalized such that the EP, as a recipient of a transition or referral, incorporates an electronic summary of care document into CEHRT. Therefore, as we proposed for MIPS, we are finalizing our policy such that transitions and referrals from recipients who do not possess CEHRT could be excluded from the denominator of the 2017 Advancing Care Information Transition measure, Health Information Exchange, but should be included for the denominator of the MIPS measure, Request/Accept Summary of Care.

    We disagree that the Advancing Care Information measure should be limited to only include recipients who possess CEHRT for the Request/Accept Summary of Care measure, as that would limit support for MIPS eligible clinicians exchanging health information with providers and clinicians across a wide range of settings. We further note that, consistent with the policy set forth in the 2015 EHR Incentive Programs final rule (80 FR 62852-62862), MIPS eligible clinicians and groups may send the electronic summary of care document via any electronic means so long as the MIPS eligible clinician sending the summary of care record is using the standards established for the creation of the electronic summary of care document.

    Comment: Many commenters strongly supported the inclusion of the Health Information Exchange objective and associated measures. They noted benefits such as the incorporation and use of both non-clinical and patient-generated health data as well as supplementing medication reconciliation for transitions of care with medication allergies and problems as part of the Health Information Exchange objective. They supported the prioritization of measures that promote the policy objectives of interoperability, care coordination, and patient engagement. They supported measures that incorporate the use of online access to health information and secure email, and the collection and integration of data from non-clinical sources.

    Response: We agree and will continue to require the Health Information Exchange objective in the advancing care information performance category. In addition, section 1848(o)(2)(A)(ii) of the Act requires the electronic exchange of health information.

    Comment: A commenter noted that the definition of patient-generated health data inappropriately focuses on the device generating the data rather than the patient and recommended expanding the definition to include other more relevant data sources such as filling out forms and surveys, and by self-report. One commenter believed there should be a distinction between patient-generated and device-generated data and that MIPS eligible clinicians should have the ability to review data sources as part of the record similar to a track change function.

    Response: For the Patient-Generated Health Data measure, the calculation of the numerator incorporates both health data from non-clinical settings, as well as health data generated by the patient. We will consider the suggestion for expanding the types of health data to include for this measure, such as some patient-reported information, in future rulemaking.

    Comment: For the Clinical Information Reconciliation measure, specifically the medication reconciliation portion, a commenter believed the updated measure for Stage 3 adds further definition to the data that must be reviewed.

    Response: We note that the Clinical Information Reconciliation measure under the Health Information Exchange objective that we are adopting for the advancing care information performance category is the same as the Stage 3 measure under the EHR Incentive Program, with the threshold and exclusion removed.

    Comment: For the Medication Reconciliation measure, the proposed 2017 Advancing Care Information Transition measure adds the medication allergy list and current problem list to the items that must be reconciled. One commenter indicated that this significantly expands the current Modified Stage 2 measure such that a change in workflow is required. In addition, functionality to reconcile medication allergies and problems is not included in the 2014 Edition of CEHRT.

    Response: The 2017 Advancing Care Information Transition Medication Reconciliation measure is still limited to medication reconciliation as it was for the Modified Stage 2 measure. For the Advancing Care Information measure, we proposed to include medication list, medication allergy list and current problem list under the Clinical Information Reconciliation measure which aligns with the third measure under the Health Information Exchange objective for Stage 3 of the EHR Incentive Programs and requires technology certified to the 2015 Edition.

    Comment: A few commenters requested that, in addition to eliminating the requirement to report the CPOE and CDS objectives and associated measures, MIPS eligible clinicians only be required to report on the remaining objectives and measures that are relevant to their practice.

    Response: In developing our final scoring methodology for the advancing care information performance category for a performance period in 2017, we have significantly reduced the number of required measures from 11 to five. We have moved more measures to the performance score so that MIPS eligible clinicians are able to tailor their participation by relevance to their practices. We refer readers to section II.E.5.g.(6)(a) for more discussion of our final scoring policy.

    Comment: The majority of commenters supported the proposal to include the Public Health and Clinical Data Registry Reporting objective in the advancing care information performance category. Many commenters particularly praised the reduction in the requirements of the objective by only requiring the reporting of the Immunization Registry Reporting measure while including the remaining measures as optional to earn a bonus point. However, some commenters expressed concern that, by requiring only one measure to be reported, the importance of public health registry reporting is downplayed. Many commenters suggested MIPS eligible clinicians be encouraged and incentivized to report to registries beyond Immunization Registry Reporting.

    Several commenters indicated that the Public Health Registry Reporting objective would be better suited as an activity in the improvement activities performance category, and that public health registry reporting should be counted for points in that performance category rather than the advancing care information performance category.

    Response: We appreciate the support of our proposal to reduce the reporting burden for the Public Health and Clinical Data Registry Reporting objective. We agree that, given the importance and benefit to MIPS eligible clinicians of submitting data to multiple registries, more points should be awarded for reporting to additional registries under the objective. Although we have amended our proposal and the Immunization Registry Reporting measure is no longer a required measure in the base score, MIPS eligible clinicians may still choose to report the measure to increase their performance score. In addition, we are increasing the bonus to 5 percent for reporting to one or more public health or clinical data registries.

    We disagree that the Public Health and Clinical Data Registry reporting objective should be in the improvement activities performance category. The proposed measures in the Public Health and Clinical Data Registry Reporting objective focus on active, ongoing engagement with registries, as well as electronic submission of data, which we believe are within the scope of effectively using CEHRT to achieve the goals of the advancing care information performance category.

    Comment: A commenter supported the proposal to include the Public Health and Clinical Data Registry reporting but encouraged CMS to require reporting to cancer registries, because accurate and detailed cancer information enables better public policy development.

    Response: We have not created a separate cancer registry reporting measure for MIPS because we believe that such reporting is captured under existing public health registry reporting measures. If a MIPS eligible clinician is reporting under the 2017 Advancing Care Information Transition objectives and measures, they may report cancer registry data under the Specialized Registry Reporting measure. However, if the eligible clinician or group chooses to do so, they must use the 2014 Edition certification criteria specific to cancer case reporting in order to meet the measure. This measure is an exception to the flexible CEHRT requirements for the 2017 Advancing Care Information Transition Specialized Registry Reporting measure, and for this reason we previously finalized a policy that, if a participant has the CEHRT available and chooses to report to meet the measure, they may do so, but they are not required to consider a cancer registry in their specialized registry selection (80 FR 62823). If the MIPS eligible clinician is reporting under the MIPS advancing care information performance category measures, active engagement with a cancer registry would meet the Public Health Registry Reporting measure and would require the use of technology certified to the cancer case reporting criteria of the 2014 or 2015 Edition.

    If a MIPS eligible clinician is reporting under the 2017 Advancing Care Information Transition objectives and measures, they may report cancer registry data under the Specialized Registry measure. If they are reporting under the Advancing Care Information objectives and measures, they would report under the Public Health Registry Reporting measure.

    Comment: One commenter expressed concern that many of the measures under the Public Health and Clinical Registry Reporting objective do not apply to all practices, and that for those practices to which they do apply, the measures should not burden MIPS eligible clinicians by requiring them to join a registry in order to report.

    Response: We appreciate the concern that different registries have different requirements for participation and may not apply to a MIPS eligible clinician's practice. We note that we have amended our proposal and the Immunization Registry Reporting measure is no longer a required measure, but MIPS eligible clinicians may report the measure to earn credit in the performance score. In addition, we are only awarding a bonus score for reporting to additional public health or clinical data registries. We believe this offers enough flexibility for MIPS eligible clinicians who may experience challenges engaging with a public health or clinical registry.

    Comment: A commenter recommended that for performance period 2017, MIPS eligible clinicians be required to be in active engagement with two public health registries and to report on two public health registry reporting measures, for example, Immunization Registry Reporting and one optional public health registry reporting measure. Several commenters recommended that for performance periods in 2018 and beyond, MIPS eligible clinicians be required to be in active engagement with three public health registries and to report on three public health registry reporting measures, for example, Immunization Registry Reporting, Electronic Public Health Registry Reporting, and one specialized public health registry.

    Response: While we appreciate these comments, we are not requiring Public Health and Clinical Data Registry Reporting in the base score of the advancing care information performance category. MIPS eligible clinicians can increase their performance score if they choose to report on the Immunization Registry Reporting measure in 2017. We are also finalizing as part of our scoring policy that MIPS eligible clinicians can earn a bonus score for reporting to additional public health registries.

    Comment: A commenter stated that our proposal to only require the Immunization Registry Reporting measure will likely result in a decrease in public health reporting. They urged CMS to retain the public health reporting requirements from the EHR Incentive Programs. Another commenter noted that, after putting significant effort into meeting the EHR Incentive Program Stage 2 requirement of submitting to two public health registries, they were disappointed that the proposed MACRA rule would only require data submission to an immunization registry.

    Response: While we understand these concerns, we believe that the Public Health and Clinical Registry Reporting measures should not be included in the base score of the advancing care information performance category and have amended our proposal to specify that the Immunization Registry Reporting measure is no longer a required measure in the base score. We agree with the commenter that many EPs have successfully achieved active engagement with more than one clinical data registry over the past few years. However, we also know that many MIPS eligible clinicians are still working diligently toward meeting the requirements of this objective. We believe that an opportunity for growth and improvement continues to exist, especially among a large proportion of MIPS eligible clinicians who did not previously participate in the Medicare and Medicaid EHR incentive programs. Therefore, MIPS eligible clinicians may still choose to report the Immunization Registry Reporting measure to increase their performance score. In addition, MIPS eligible clinicians who choose to report on additional public health and clinical data registry reporting measures may increase their bonus score toward their advancing care information performance category score.

    Comment: Some commenters supported the inclusion of the Immunization Registry Reporting measure. They noted that immunization registries are the most widely available and applicable public health registries and were previously included for EPs under meaningful use. The continuation of the exclusions for MIPS eligible clinicians who do not administer immunizations, or whose local registries do not accept data according to the standards adopted in certification, ensures that MIPS eligible clinicians are not penalized for factors beyond their control.

    Response: While we appreciate these comments, we are not requiring public health reporting in the base score of the advancing care information performance category. However, MIPS eligible clinicians may still choose to report the Immunization Registry Reporting measure to increase their advancing care information performance score.

    Comment: A commenter recommended that there be a resource or listing of all available public health and clinical registries that MIPS eligible clinicians could engage with to meet the measures of the Public Health and Clinical Data Registry Reporting objective.

    Response: We are planning to develop a centralized public health registry repository to assist MIPS eligible clinicians in finding public health registries that are available and clinically relevant to their practice and that are accepting electronic submissions.

    Comment: A commenter questioned why we had modified the Stage 3 measure for syndromic surveillance from an “urgent care setting” to a “non-urgent” care setting under MIPS.

    Response: This was an oversight on our part. As we noted in the 2015 final rule (80 FR 62866), few jurisdictions accept syndromic surveillance data from non-urgent care EPs. We are modifying the measure for MIPS so that it aligns with the Stage 3 measure that we finalized for the EHR Incentive Program and limits the surveillance data to be submitted to data from an urgent care setting.

    After consideration of the comments, we are finalizing our proposal for the Advancing Care Information objectives and measures and the 2017 Advancing Care Information Transition objectives and measures as proposed, with modifications to correct language in certain measures as noted below:

    For the 2017 Advancing Care Information Transition Medication Reconciliation measure: We are maintaining the Modified Stage 2 numerator as follows: “Numerator: The number of transitions of care in the denominator where medication reconciliation was performed.”

    For the Advancing Care Information View, Download, Transmit (VDT) measure: During the performance period, at least one unique patient (or patient-authorized representatives) seen by the MIPS eligible clinician actively engages with the EHR made accessible by the MIPS eligible clinician. A MIPS eligible clinician may meet the measure by a patient either—(1) viewing, downloading, or transmitting to a third party their health information; or (2) accessing their health information through the use of an API that can be used by applications chosen by the patient and configured to the API in the MIPS eligible clinician's CEHRT; or (3) a combination of (1) and (2).

    For the Advancing Care Information Syndromic Surveillance Reporting measure: The MIPS eligible clinician is in active engagement with a public health agency to submit surveillance data from an urgent care ambulatory setting where the jurisdiction accepts syndromic data from such settings and the standards are clearly defined.

    We note that we will consider new measures for future years of the program, and invite comment on what types of EHR measures and measurement should be considered for inclusion in the program. In addition, we invite comments on how to make the measures that we are adopting in this final rule more stringent in the future, especially in light of the statutory requirements.

    (c) Exclusions

    In the 2015 EHR Incentive Programs final rule (80 FR 62829 through 62871), we outlined certain exclusions from the objectives and measures of meaningful use for EPs who perform low numbers of a particular action or activity for a given measure (for example, an EP who writes fewer than 100 permissible prescriptions during the EHR reporting period would be granted an exclusion for the Electronic Prescribing measure) or for EPs who had no office visits during the EHR reporting period. Moving forward, we believe that the MIPS exclusion criteria as proposed (81 FR 28173 through 28176) and as further discussed in section II.E.1. of this final rule with comment period, together with the advancing care information performance category scoring methodology, accomplish the same end as the previously established exclusions for the majority of the advancing care information performance category measures. By excluding from MIPS those clinicians who do not exceed the low-volume threshold (proposed in section II.E.3.c. of the proposed rule, as MIPS eligible clinicians who, during the performance period, have Medicare billing charges less than or equal to $10,000 and provide care for 100 or fewer Part B-enrolled Medicare beneficiaries), we believe exclusions for most of the individual advancing care information performance category measures are no longer necessary. The additional flexibility afforded by the proposed advancing care information performance category scoring methodology eliminates required thresholds for measures and allows MIPS eligible clinicians to focus on, and therefore report higher numbers for, measures that are more relevant to their practice.
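    For illustration only, and not as part of the regulatory text, the proposed low-volume threshold described above reduces to a check of two conditions. The sketch below uses a hypothetical function name and assumes the clinician's Medicare billing charges and count of Part B-enrolled beneficiaries for the performance period are already known.

    def below_low_volume_threshold(medicare_billing_charges, part_b_beneficiaries):
        # Proposed low-volume threshold: billing charges less than or equal to
        # $10,000 and care provided to 100 or fewer Part B-enrolled Medicare
        # beneficiaries during the performance period.
        return medicare_billing_charges <= 10000 and part_b_beneficiaries <= 100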

    We noted that EPs who write fewer than 100 permissible prescriptions during the EHR reporting period are allowed an exclusion for the e-Prescribing measure under the EHR Incentive Program (80 FR 62834), which we did not propose for MIPS. We note that the Electronic Prescribing objective would not be part of the performance score under our proposals, and thus, MIPS eligible clinicians who write very low numbers of permissible prescriptions would not be at a disadvantage in relation to other MIPS eligible clinicians when seeking to achieve a maximum advancing care information performance category score. For the purposes of the base score, we proposed that those MIPS eligible clinicians who write fewer than 100 permissible prescriptions in a performance period may elect to report their numerator and denominator (if they have at least one permissible prescription for the numerator), or they may report a null value. This is consistent with prior policy, which allowed flexibility for clinicians in similar circumstances to choose an alternate exclusion (80 FR 62789).

    In addition, in the 2015 EHR Incentive Programs final rule, we adopted a set of exclusions for the Immunization Registry Reporting measure under the Public Health and Clinical Data Registry Reporting objective (80 FR 62870). We recognize that some types of clinicians do not administer immunizations, and therefore proposed to maintain the previously established exclusions for the Immunization Registry Reporting measure. We therefore proposed that these MIPS eligible clinicians may elect to report their yes/no statement if applicable, or they may report a null value (if the previously established exclusions apply) for purposes of reporting the base score.

    We note that we did not propose to maintain any of the other exclusions established under the EHR Incentive Program; however, we solicited comment on whether other exclusions should be considered under the advancing care information performance category under the MIPS.

    The following is a summary of the comments we received regarding our exclusion proposal.

    Comment: Many commenters supported our proposal to provide an exclusion for the e-Prescribing measure to those MIPS eligible clinicians who write fewer than 100 permissible prescriptions during the performance period, and many commenters requested additional exclusions. Commenters disagreed with the removal of exclusions for other objectives, such as the transitions of care measure under the Health Information Exchange objective that existed under the EHR Incentive Programs. Many suggested continuing all EHR Incentive Programs Modified Stage 2 and Stage 3 exclusions under MIPS. Others suggested that exclusions be added to the Health Information Exchange measure under 2014 Edition CEHRT and the MIPS Clinical Information Reconciliation measure. Some suggested that an exclusion for the Health Information Exchange objective be added if a MIPS eligible clinician has fewer than 100 external referrals. Commenters also requested exclusions for clinicians who do not refer patients and those with insufficient broadband availability. Commenters recommended low-volume exclusions for various measures, including e-Prescribing, Provide Patient Access, and the measures under the Coordination of Care Through Patient Engagement and Health Information Exchange objectives. Commenters also urged the addition of an exclusion for MIPS eligible clinicians practicing in multiple locations because they may encounter specific hardships due to CEHRT availability. Some requested that any meaningful use exclusions for Public Health and Clinical Data Registry Reporting remain in effect for those using 2014 Edition CEHRT. Some requested that an exclusion exist for the Syndromic Surveillance Reporting measure for those physicians who rarely or never directly diagnose or treat conditions related to syndromic surveillance. Another commenter requested that we maintain the meaningful use Stage 3 exclusion for the Patient-Specific Education measure and that MIPS eligible clinicians with no office visits during the performance period be permitted to report a “null value” and achieve full base and performance score credit.

    Response: We note that we are finalizing fewer required measures for the base score of the advancing care information performance category than we had proposed. As there are now fewer required measures, we do not believe that it is necessary to create additional exclusions for measures which are now optional for reporting. In addition, as we have moved the Immunization Registry Reporting measure from “required” in the base score to “not required” in the base score, we are not finalizing our proposal to provide an exclusion for those MIPS eligible clinicians who do not administer immunizations during the performance period. The exclusion is no longer necessary because MIPS eligible clinicians now have the option of whether or not to report on Immunization Registry Reporting to receive credit for this measure under the performance score of the advancing care information performance category.

    Comment: A few commenters supported the elimination of exclusions and noted that the elimination of thresholds enables MIPS eligible clinicians to focus more on quality patient care and less on meeting thresholds.

    Response: We appreciate the support of these commenters and agree that the reduction in required measures and the elimination of thresholds have enabled the removal of many of the exclusions that existed under the EHR Incentive Programs.

    After consideration of the comments, we are finalizing our exclusion policy as proposed with the following modification. We are not finalizing the exclusions for the Immunization Registry Reporting measure under the Public Health and Clinical Data Registry Reporting objective for those MIPS eligible clinicians who do not administer immunizations as part of their practice.

    (8) Additional Considerations

    (a) Reweighting of the Advancing Care Information Performance Category for MIPS Eligible Clinicians Without Sufficient Measures Applicable and Available

    As discussed in the proposed rule, section 101(b)(1)(A) of the MACRA amended section 1848(a)(7)(A) of the Act to sunset the meaningful use payment adjustment at the end of CY 2018. Section 1848(a)(7) of the Act includes certain statutory exceptions to the meaningful use payment adjustment under section 1848(a)(7)(A) of the Act. Specifically, section 1848(a)(7)(D) of the Act exempts hospital-based EPs from the application of the payment adjustment under section 1848(a)(7)(A) of the Act. In addition, section 1848(a)(7)(B) of the Act provides that the Secretary may exempt an EP who is not a meaningful EHR user for the EHR reporting period for the year from the application of the payment adjustment under section 1848(a)(7)(A) of the Act if the Secretary determines that compliance with the requirements for being a meaningful EHR user would result in a significant hardship, such as in the case of an EP who practices in a rural area without sufficient internet access. The MACRA did not maintain these statutory exceptions for the advancing care information performance category of the MIPS. Thus, the exceptions under sections 1848(a)(7)(B) and (D) of the Act are limited to the meaningful use payment adjustment under section 1848(a)(7)(A) of the Act and do not apply in the context of the MIPS.

    Section 1848(q)(5)(F) of the Act provides that, if there are not sufficient measures and activities applicable and available to each type of MIPS eligible clinician, the Secretary shall assign different scoring weights (including a weight of zero) for each performance category based on the extent to which the category is applicable to each type of MIPS eligible clinician, and for each measure and activity specified for each such category based on the extent to which the measure or activity is applicable and available to the type of MIPS eligible clinician.

    We believe that under our proposals for the advancing care information performance category of the MIPS, there may not be sufficient measures that are applicable and available to certain types of MIPS eligible clinicians as outlined in the proposed rule, some of whom may have qualified for a statutory exception to the meaningful use payment adjustment under section 1848(a)(7)(A) of the Act. For the reasons stated in the proposed rule, we proposed to assign a weight of zero to the advancing care information performance category for purposes of calculating a MIPS final score for these MIPS eligible clinicians. We refer readers to section II.E.6. of the proposed rule for more information regarding how the quality, cost and improvement activities performance categories would be reweighted.

    (i) Hospital-Based MIPS Eligible Clinicians

    Section 1848(a)(7)(D) of the Act exempts hospital-based EPs from the application of the meaningful use payment adjustment under section 1848(a)(7)(A) of the Act. We defined a hospital-based EP for the EHR Incentive Program under § 495.4 as an EP who furnishes 90 percent or more of his or her covered professional services in sites of service identified by the codes used in the HIPAA standard transaction as an inpatient hospital or emergency room setting in the year preceding the payment year, or in the case of a payment adjustment year, in either of the 2 years before the year preceding such payment adjustment year. Under this definition, EPs that have 90 percent or more of payments for covered professional services associated with claims with Place of Service Codes 21 (inpatient hospital) or 23 (emergency department) are considered hospital-based (75 FR 44442).

    We believe there may not be sufficient measures applicable and available to hospital-based MIPS eligible clinicians under our proposals for the advancing care information performance category of MIPS.

    Hospital-based MIPS eligible clinicians may not have control over the decisions that the hospital makes regarding the use of health IT and CEHRT. These MIPS eligible clinicians therefore may have no control over the type of CEHRT available, the way that the technology is implemented and used, or whether the hospital continually invests in the technology to ensure it is compliant with ONC certification criteria. In addition, some of the specific advancing care information performance category measures, such as the Provide Patient Access measure under the Patient Electronic Access objective, require that patients have access to view, download, and transmit their health information from the EHR made available by the health care clinician, in this case the hospital. Thus, the measure is more attributable and applicable to the hospital than to the MIPS eligible clinician, as the hospital controls the availability of the EHR technology. Further, the requirement under the Protect Patient Health Information objective to conduct a security risk analysis would rely on the actions of the hospital, rather than the actions of the MIPS eligible clinician, as the hospital controls the access to, availability of, and secure implementation of the EHR technology. In this case, the measure is again more attributable and applicable to the hospital than to the MIPS eligible clinician. Further, certain specialists (such as pathologists, radiologists, and anesthesiologists) who often practice in a hospital setting and may be hospital-based MIPS eligible clinicians often lack face-to-face interaction with patients, and thus may not have sufficient measures applicable and available to them under our proposals. For example, hospital-based MIPS eligible clinicians who lack face-to-face patient interaction may not have patients for whom they could create or transfer an electronic summary of care record.

    In addition, we noted that eligible hospitals and CAHs are subject to meaningful use requirements under sections 1886(b)(3)(B) and (n) and 1814(l) of the Act, respectively, which were not affected by the enactment of the MACRA. Eligible hospitals and CAHs are required to report on objectives and measures of meaningful use under the EHR Incentive Program, as outlined in the 2015 EHR Incentive Programs final rule. We noted the objectives and measures of the EHR Incentive Programs for eligible hospitals and CAHs are specific to these facilities, and are more applicable and better represent the EHR technology available in these settings.

    For these reasons, we proposed to rely on section 1848(q)(5)(F) of the Act to assign a weight of zero to the advancing care information performance category for hospital-based MIPS eligible clinicians. We proposed to define a “hospital-based MIPS eligible clinician” at § 414.1305 as a MIPS eligible clinician who furnishes 90 percent or more of his or her covered professional services in sites of service identified by the codes used in the HIPAA standard transaction as an inpatient hospital or emergency room setting in the year preceding the performance period, otherwise stated as the year 3 years preceding the MIPS payment year. For example, under this proposal, hospital-based determinations would be made for the 2019 MIPS payment year based on covered professional services furnished in 2016. We also proposed, consistent with the EHR Incentive Program, that we would determine which MIPS eligible clinicians qualify as “hospital-based” for a MIPS payment year. We invited comments on these proposals.

    In addition, we sought comment on how the advancing care information performance category could be applied to hospital-based MIPS eligible clinicians in future years of MIPS, and the types of measures that would be applicable and available to these types of MIPS eligible clinicians.

    We also sought comment on whether the previously established 90 percent threshold of payments for covered professional services associated with claims with Place of Service (POS) Codes 21 (inpatient hospital) or 23 (emergency department) is appropriate, or whether we should consider lowering this threshold to account for hospital-based MIPS eligible clinicians who bill more than 10 percent of claims with a POS other than 21 or 23. Although we proposed a threshold of 90 percent, we are considering whether a lower threshold would be more appropriate for hospital-based MIPS eligible clinicians. In particular, we are interested in what factors should be applied to determine the threshold for hospital-based MIPS eligible clinicians. We will continue to evaluate the data to determine whether there are certain thresholds which naturally define a hospital-based MIPS eligible clinician.

    The following is a summary of the comments we received regarding our proposal for defining hospital-based MIPS eligible clinicians.

    Comment: Many commenters supported our proposed definition of a hospital-based MIPS eligible clinician as one who furnishes 90 percent or more of their covered professional services in either Place of Service 21 or 23. Many also supported the proposal to assign a weight of zero to the advancing care information performance category for hospital-based MIPS eligible clinicians, citing that health IT decisions for these MIPS eligible clinicians are often made at the hospital level and are out of their control.

    Response: We thank commenters for their support of our proposal. For the reasons stated in the proposed rule, and based on the measures we are finalizing in this final rule with comment period, we agree that there may not be sufficient measures applicable and available to hospital-based MIPS eligible clinicians to report for the advancing care information performance category.

    Comment: A few commenters disagreed with our proposal and provided alternate hospital-based thresholds. They recommended that the threshold be lowered to a majority (or more than 50 percent). Several commenters recommended a 75 percent threshold, while another suggested reducing the threshold to 60 percent. One commenter recommended that CMS adopt a flexible approach that accommodates eligible clinicians who work in multiple settings.

    Response: Although commenters suggested alternate thresholds, they did not provide specific rationale to support the lowered thresholds or the factors that should be applied to determine the threshold for hospital-based MIPS eligible clinicians. With commenter feedback in mind, we have reevaluated the data and found that historical claims data support a lower threshold as suggested in these comments. With consideration of the comments and data we have reviewed, we are reducing the percentage of covered professional services furnished in certain sites of service used to determine hospital-based MIPS eligible clinicians from 90 percent to 75 percent. The data analyzed supports the comments we received while still allowing MIPS eligible clinicians with 25 percent or more of their services in settings outside of inpatient hospital, on-campus outpatient hospital (as referenced below), or emergency room settings to participate and earn points in the advancing care information performance category.

    Comment: Many commenters proposed that CMS broaden the definition of “hospital-based clinician” to include those MIPS eligible clinicians who are employed by a hospital, but still bill outpatient services, as those MIPS eligible clinicians will not have input into the selection of the EHR, pointing out that facility-based clinicians in both inpatient and outpatient settings experience similar difficulties in meeting the proposed objectives and measures in the advancing care information performance category. Another commenter believed that CMS should include other clinician settings, such as ambulatory surgery centers, with hospital inpatient and ED settings as clinicians in other settings may also lack control over EHR technology. Another urged CMS to revise the criteria to include care provided in hospital outpatient departments and ASCs, excluding evaluation and management services. One commenter supported our proposal for hospital-based MIPS eligible clinicians and recommended that CMS also include POS 22 (on-campus outpatient hospital) because many hospitalists provide care in both the inpatient setting, as well as on-campus outpatient hospital departments. Another commenter suggested that the definition of hospital-based MIPS eligible clinicians include observation services.

    Response: We agree with commenters that there are MIPS eligible clinicians who bill using place of service codes other than POS 21 and POS 23 but who predominantly furnish covered professional services in a hospital setting and have no control over EHR technology. We believe these clinicians should be considered hospital-based for purposes of MIPS, and therefore, we are expanding our hospital-based definition to include POS 22, on-campus outpatient hospital.

    Comment: One commenter recommended using the newly-introduced Medicare specialty billing code for hospitalists in the definition of “hospital-based.”

    Response: The official use of the Medicare specialty billing code for hospitalists does not begin until after the start of the MIPS program, and therefore we have no historical data to support its inclusion in the definition of hospital-based at this time. We will consider this recommendation for future rulemaking.

    Comment: One commenter recommended that CMS describe this group of MIPS eligible clinicians as facility-based rather than hospital-based.

    Response: We appreciate the comment, although we continue to believe that hospital-based is the more appropriate term. We believe facility-based is too broad a term and could be misleading.

    Comment: A commenter requested that CMS be transparent about the time period used for determining whether a MIPS eligible clinician is hospital-based.

    Response: We proposed to use data from the year preceding the performance period, otherwise stated as the year that is 3 years preceding the MIPS payment year. We are adopting a modified final policy and will instead use claims with dates of service from September 1 of the calendar year 2 years preceding the performance period through August 31 of the calendar year preceding the performance period. For example, for the 2017 performance period (2019 MIPS payment year), we will use the data available at the end of October 2016 for Medicare claims with dates of service from September 1, 2015, through August 31, 2016, to determine whether a MIPS eligible clinician is considered hospital-based by our definition. In the event that it is not operationally feasible to use claims from this exact time period, we will use a 12-month period as close as practicable to September 1 of the calendar year 2 years preceding the performance period and August 31 of the calendar year preceding the performance period. We have adopted this change in policy in an effort to provide transparency to MIPS eligible clinicians; this change in timeline will allow us to notify MIPS eligible clinicians of their hospital-based status prior to the start of the performance period. By adopting this policy and notifying MIPS eligible clinicians of their hospital-based determination prior to the performance period, we enable MIPS eligible clinicians to better plan and prepare for reporting.
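    As an illustrative sketch only (not regulatory text), the claims lookback window described in this response can be computed from the performance period year; the function name below is hypothetical.

    from datetime import date

    def hospital_based_claims_window(performance_year):
        # Dates of service from September 1 of the calendar year 2 years
        # preceding the performance period through August 31 of the calendar
        # year preceding the performance period.
        return date(performance_year - 2, 9, 1), date(performance_year - 1, 8, 31)

    # Example: hospital_based_claims_window(2017) returns
    # (date(2015, 9, 1), date(2016, 8, 31)) for the 2019 MIPS payment year.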

    Comment: One commenter noted that specialists who meet the criteria for being considered a hospital-based MIPS eligible clinician may still have access and the ability to effectively use CEHRT, and may sufficiently meet the requirements of the advancing care information performance category, while those MIPS eligible clinicians who do not meet the hospital-based criteria as proposed would not be able to meet those requirements. The commenter suggested taking this into consideration and proposed allowing some MIPS eligible clinicians who are not hospital-based, but who still face the same hardships, to reweight and redistribute their advancing care information performance category score.

    Response: We realize that some MIPS eligible clinicians face similar challenges around the inability to control their access to CEHRT even if they are not determined to be hospital-based. We refer readers to section II.E.5.g.(8)(a)(ii) of this final rule with comment period for further discussion of reweighting applications for those MIPS eligible clinicians who face a significant hardship.

    Comment: Commenters recommended offering MIPS eligible clinicians or groups the option to petition for a change in their hospital-based status when there is a change in their organizational affiliation.

    Response: We agree that circumstances change from year to year and MIPS eligible clinicians' hospital-based determinations should be reevaluated for each MIPS payment year. We note that we are finalizing a policy to determine hospital-based status for each MIPS payment year by looking at a MIPS eligible clinician's covered professional services based on claims with dates of service from September 1 of the calendar year 2 years preceding the performance period through August 31 of the calendar year preceding the performance period. We appreciate the suggestion that MIPS eligible clinicians should have the ability to petition their hospital-based status. However, we believe this annual reevaluation, in combination with our policy that hospital-based MIPS eligible clinicians may choose to report on the advancing care information performance category should they determine that there are applicable and available measures for them to submit, allows sufficient flexibility for hospital-based MIPS eligible clinicians without the need to petition their hospital-based status.

    After consideration of the public comments and the data we have available, we are finalizing our proposal for MIPS under § 414.1305 with the following modifications. Under the MIPS, a hospital-based MIPS eligible clinician is defined as a MIPS eligible clinician who furnishes 75 percent or more of his or her covered professional services in sites of service identified by the Place of Service (POS) codes used in the HIPAA standard transaction as an inpatient hospital (POS 21), on-campus outpatient hospital (POS 22), or emergency room (POS 23) setting, based on claims for a period prior to the performance period as specified by CMS. We intend to use claims with dates of service from September 1 of the calendar year 2 years preceding the performance period through August 31 of the calendar year preceding the performance period, but in the event it is not operationally feasible to use claims from this time period, we will use a 12-month period as close as practicable to this time period.
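    To illustrate the finalized determination, the following sketch (illustrative only; field names are hypothetical) assumes a list of a clinician's covered professional service claim lines from the lookback window, each with a payment amount and a Place of Service code, and measures the hospital-based share as the share of payments, consistent with the EHR Incentive Program definition cited earlier in this section.

    HOSPITAL_POS_CODES = {"21", "22", "23"}  # inpatient, on-campus outpatient, emergency room

    def is_hospital_based(claim_lines, threshold=0.75):
        # Hospital-based if 75 percent or more of payments for covered
        # professional services are associated with claims billed with
        # POS 21, 22, or 23 during the lookback window.
        total = sum(line["payment"] for line in claim_lines)
        if total == 0:
            return False
        hospital = sum(line["payment"] for line in claim_lines
                       if line["pos_code"] in HOSPITAL_POS_CODES)
        return hospital / total >= threshold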

    We note that this expanded definition of hospital-based MIPS eligible clinician will include a greater number of MIPS eligible clinicians than the previously proposed definition. We have expanded this definition because we believe it better represents hospital-based eligible clinicians and acknowledges the challenges they face with regard to EHR reporting as stated above. For the reasons stated in the proposed rule, our assumption remains that MIPS eligible clinicians who are determined hospital-based do not have sufficient advancing care information measures applicable to them, and thus we will reweight the advancing care information performance category to zero percent of the MIPS final score for the MIPS payment year in accordance with section 1848(q)(5)(F) of the Act. If a MIPS eligible clinician disagrees with our assumption and believes there are sufficient advancing care information measures applicable to them, they have the option to report the advancing care information measures for the performance period for the MIPS payment year for which they are determined hospital-based. However, if a MIPS eligible clinician who is determined hospital-based chooses to report on the advancing care information measures, they will be scored on the advancing care information performance category like all other MIPS eligible clinicians, and the performance category will be given the weighting prescribed by section 1848(q)(5)(E) of the Act regardless of their advancing care information performance category score.

    (ii) MIPS Eligible Clinicians Facing a Significant Hardship

    Section 1848(a)(7)(B) of the Act provides that the Secretary may exempt an EP who is not a meaningful EHR user for the EHR reporting period for the year from the application of the payment adjustment under section 1848(a)(7)(A) of the Act if the Secretary determines that compliance with the requirements for being a meaningful EHR user would result in a significant hardship. In the Stage 2 final rule (77 FR 54097-54100), we defined certain categories of significant hardships that may prevent an EP from meeting the requirements of being a meaningful EHR user. These categories include:

    • Insufficient Internet Connectivity (as specified in 42 CFR 495.102(d)(4)(i)).

    • Extreme and Uncontrollable Circumstances (as specified in 42 CFR 495.102(d)(4)(iii)).

    • Lack of Control over the Availability of CEHRT (as specified in 42 CFR 495.102(d)(4)(iv)(A)).

    • Lack of Face-to-Face Patient Interaction (as specified in 42 CFR 495.102(d)(4)(iv)(B)).

    We believe that under our proposals for the advancing care information performance category, there may not be sufficient measures applicable and available to MIPS eligible clinicians within the categories above. For these MIPS eligible clinicians, we proposed to rely on section 1848(q)(5)(F) of the Act to re-weight the advancing care information performance category to zero.

    Sufficient internet access is fundamental to many of the measures proposed for the advancing care information performance category. For example, the e-Prescribing measure requires sufficient access to the Internet to transmit prescriptions electronically, and the Secure Messaging measure requires sufficient Internet access to receive and respond to patient messages. These measures may not be applicable to MIPS eligible clinicians who practice in areas with insufficient internet access. We proposed to require MIPS eligible clinicians to demonstrate insufficient internet access through an application process in order to be considered for a reweighting of the advancing care information performance category. The application would have to demonstrate that the MIPS eligible clinician lacked sufficient internet access during the performance period, and that there were insurmountable barriers to obtaining such infrastructure, such as a high cost of extending the internet infrastructure to their facility.

    Extreme and uncontrollable circumstances, such as a natural disaster in which an EHR or practice building are destroyed, can happen at any time and are outside a MIPS eligible clinician's control. If a MIPS eligible clinician's CEHRT is unavailable as a result of such circumstances, the measures specified for the advancing care information performance category may not be available for the MIPS eligible clinician to report. We proposed that these MIPS eligible clinicians submit an application to include the circumstances by which the EHR technology was unavailable, and for what period of time it was unavailable, to be considered for reweighting of their advancing care information performance category.

    In the Stage 2 final rule (77 FR 54100) we discussed EPs who practice at multiple locations, and may not have the ability to impact their practices' health IT decisions. We noted the case of a surgeon using an ambulatory surgery center or a physician treating patients in a nursing home who does not have any other vested interest in the facility, and may have no influence or control over the health IT decisions of that facility. If MIPS eligible clinicians lack control over the CEHRT in their practice locations, then the measures specified for the advancing care information performance category may not be available to them for reporting. To be considered for a reweighting of the advancing care information performance category, we proposed that these MIPS eligible clinicians would need to submit an application demonstrating that a majority (50 percent or more) of their outpatient encounters occur in locations where they have no control over the health IT decisions of the facility, and request their advancing care information performance category score be reweighted to zero. We noted that in such cases, the MIPS eligible clinician must have no control over the availability of CEHRT. Control does not imply final decision-making authority. For example, we would generally view MIPS eligible clinicians practicing in a large group as having control over the availability of CEHRT, because they can influence the group's purchase of CEHRT, they may reassign their claims to the group, they may have a partnership/ownership stake in the group, or any payment adjustment would affect the group's earnings and the entire impact of the adjustment would not be borne by the individual MIPS eligible clinician. These MIPS eligible clinicians can influence the availability of CEHRT and the group's earnings are directly affected by the payment adjustment. Thus, such MIPS eligible clinicians would not, as a general rule, be viewed as lacking control over the availability of CEHRT and would not be eligible for their advancing care information performance category to be reweighted based on their membership in a group practice that has not adopted CEHRT.

    In the Stage 2 final rule (77 FR 54099), we noted the challenges faced by EPs who lack face-to-face interaction with patients (EPs that are non-patient facing), or lack the need to provide follow-up care with patients. Many of the measures proposed under the advancing care information performance category require face-to-face interaction with patients, including all eight of the measures that make up the three performance score objectives (Patient Electronic Access, Coordination of Care Through Patient Engagement and Health Information Exchange). Because these proposed measures rely so heavily on face-to-face patient interactions, we do not believe there would be sufficient measures applicable to non-patient facing MIPS eligible clinicians under the advancing care information performance category. We proposed to automatically reweight the advancing care information performance category to zero for a MIPS eligible clinician who is classified as a non-patient facing MIPS eligible clinician (based on the number of patient-facing encounters billed during a performance period) without requiring an application to be submitted by the MIPS eligible clinician. We refer readers to section II.E.1.b. of the proposed rule for further discussion of non-patient facing MIPS eligible clinicians. We also sought comment on how the advancing care information performance category could be applied to non-patient facing MIPS eligible clinicians in future years of MIPS, and the types of measures that would be applicable and available to these types of MIPS eligible clinicians.

    We proposed that all applications for reweighting the advancing care information performance category be submitted by the MIPS eligible clinician or designated group representative in the form and manner specified by CMS. We proposed that all applications may be submitted on a rolling basis, but must be received by us no later than the close of the submission period for the relevant performance period, or a later date specified by us. For example, for the 2017 performance period, applications must be submitted no later than March 31, 2018 (or later date as specified by us) to be considered for reweighting the advancing care information performance category for the 2019 MIPS payment year. An application would need to be submitted annually to be considered for reweighting each year.

    The following is a summary of comments received.

    Comment: Most commenters supported the inclusion of something similar to a hardship exception under the EHR Incentive Program for the advancing care information performance category and the reweighting of the advancing care information score to zero. Other commenters expressed appreciation that CMS has moved away from the 5-year limitation on hardship exceptions.

    Response: We appreciate the support of our proposal, and note that we did not propose exceptions from reporting on the advancing care information performance category or from application of the MIPS payment adjustment factor based on hardship. Rather, we are recognizing that there may not be sufficient measures applicable and available under the advancing care information performance category to MIPS eligible clinicians who lack sufficient internet connectivity, face extreme and uncontrollable circumstances, lack control over the availability of CEHRT, or do not have face-to-face interactions with patients. For those MIPS eligible clinicians, we proposed to reweight the advancing care information performance category to zero percent in the MIPS final score.

    Comment: We received many comments suggesting various additions to our proposal. One commenter suggested hardship exceptions under the advancing care information performance category for both 2017 and 2018 for practices that are experiencing transitional, infrastructural changes. One commenter suggested expanding the exceptions for unforeseen circumstances to a minimum of 5 years. Another requested that one of the hardship categories for the 2017 performance period include the lateness of the publication of the final rule with comment period, which will create a short timeline for adjustment to new requirements. A commenter strongly recommended that hospitalists be added to the list because they do the majority of their work in a hospital.

    Response: We note that, in some cases, transitional infrastructure changes might be considered under the extreme and uncontrollable circumstances category, depending upon the particular circumstances of the clinician practice. We believe that it is necessary for MIPS eligible clinicians to submit an application to reweight their advancing care information performance category score to zero for each applicable year. We do not believe it is appropriate to automatically reweight to zero the advancing care information performance category score for a span of multiple years, as circumstances change year to year. We believe that our policy to allow a minimum of 90 days of data for the transition year of MIPS helps to address any issues related to the timing of the release of this final rule with comment period. We refer readers to section II.E.4. of this final rule with comment period for further discussion of the MIPS performance period. Finally, we note that hospital medicine is not a clinician specialty that is identified through the Medicare enrollment process. Those MIPS eligible clinicians that are considered hospital-based by our definition would have their advancing care information performance category weighted at zero percent of the MIPS final score, as was previously discussed in this final rule with comment period.

    Comment: Many commenters suggested additional categories related to CEHRT. One commenter asked CMS to create hardship exceptions to ensure that clinicians are not unfairly punished for the failures of their CEHRT, citing concerns of past failures with technologies in meeting standards imposed by CMS and ONC. Yet another commenter recommended that we consider expanding the criteria for 2017 and 2018 to include specific clinician types that can prove that they would incur major administrative and financial burdens by adopting EHR technology for the first and second performance periods. Another commenter suggested that exceptions be developed to avoid negative payment adjustments in 2019 for EHR migration difficulties. Other commenters suggested an exception for switching CEHRT and a hardship exception when CEHRT is decertified.

    Response: We appreciate this input and understand that there may be many issues related to CEHRT that may result in a MIPS eligible clinician being unable to report on measures under the advancing care information performance category due to circumstances outside of their control. As we do not want to limit potential unforeseen circumstances, we will consider issues with vendors and CEHRT under the “extreme and uncontrollable circumstances” category, but we note that not all issues may qualify as extreme and outside of the clinician's control.

    Comment: One commenter supported continued hardship exceptions for clinicians who practice in settings such as skilled nursing facilities where they do not have control over the availability of CEHRT; however, they also believed this proposal does not go far enough. The commenter explained that without a hardship exception granted, these facilities will be encouraged to limit the number of patients seen by their clinicians so that they can avoid being eligible to participate in MIPS, which would adversely affect the access to care provided to this vulnerable population. They requested that skilled nursing facility visits (POS 31) and nursing facility visits (POS 32) (CPT codes 99304-99318) simply be exempt from meaningful use, and by extension the advancing care information performance category.

    Response: While we acknowledge this issue, we believe that it is adequately addressed by the “lack of control over CEHRT” category and does not warrant the exemption of certain evaluation and management codes. As we have noted previously, this final rule with comment period only addresses policies related to MIPS eligible clinicians and not Medicaid EPs, eligible hospitals or CAHs under the Medicare and Medicaid EHR Incentive Programs.

    Comment: Other commenters believed that CMS should continue a hardship exception for medical centers because medical centers will have to monitor multiple programs that require overlapping, but not identical, data. The commenters stated that the processes are confusing and time-consuming.

    Response: We currently do not allow a hardship exception specific to medical centers under the EHR Incentive Program. Medical centers are not subject to the application of the MIPS payment adjustment factors and are not addressed in this rulemaking.

    Comment: A few commenters requested that, as was included in the Medicare and Medicaid EHR Incentive Programs, an automatic hardship exception be granted to the following PECOS specialties: diagnostic radiology (30), nuclear medicine (36), interventional radiology (94), anesthesiology (05) and pathology (22).

    Response: We disagree that we should reweight to zero the advancing care information performance category score based on specialty code, and note that our proposal and final policy for reweighting the advancing care information performance category is based on the number of patient-facing encounters billed during a performance period, not based on specialty type. In the EHR Incentive Programs, we offered an exception to the Medicare payment adjustments to certain specialties as designated in PECOS because we recognized that EPs within those specialties lack face-to-face interactions and do not follow up with patients with sufficient frequency (77 FR 54099-54100). Under the MIPS, we proposed to automatically reweight the advancing care information performance category to zero for any hospital-based MIPS eligible clinicians and/or non-patient facing MIPS eligible clinicians who may not have sufficient measures applicable and available to them. Some of the MIPS eligible clinicians in specialties referenced by the commenter may have sufficient patient encounters to report the measures under the advancing care information performance category, and thus, the advancing care information performance category measures would be applicable to these MIPS eligible clinicians.

    Comment: A commenter suggested that CMS publish an explanation of what constitutes “limited” internet access and list limited access areas per the Federal Communications Commission (FCC).

    Response: We have stated that MIPS eligible clinicians who are located in an area without sufficient Internet access to comply with objectives requiring Internet connectivity, and who face insurmountable barriers to obtaining such Internet connectivity, may apply for reweighting on the basis of significant hardship. The FCC's National Broadband Map allows MIPS eligible clinicians to search, analyze, and map broadband availability in their area: http://www.broadbandmap.gov/.

    Comment: One commenter recommended a new option to allow applications to reweight advancing care information performance category to zero for MIPS eligible clinicians who did not previously intend to participate in meaningful use in CY 2017, and instead planned to obtain a significant hardship to avoid the Electronic Health Record Incentive Program 2019 payment adjustment.

    Response: We note that under section 101(b)(1) of the MACRA, the payment adjustments under the Medicare EHR incentive program will end after the 2018 payment adjustment year, which is based on the EHR reporting period in 2016. Therefore, MIPS eligible clinicians are not required to participate in the Medicare EHR incentive programs in the 2017 EHR reporting period to avoid a 2019 payment adjustment. MIPS eligible clinicians may qualify for reweighting of their advancing care information performance category score if they meet the criteria outlined in our policy for reweighting under MIPS.

    Comment: A commenter recommended that CMS explicitly clarify that the “lack of influence over the availability of CEHRT” option for reweighting advancing care information performance category to zero is not limited to multi-location/practice MIPS eligible clinicians.

    Response: The “lack of control over the availability of CEHRT” category is not limited to MIPS eligible clinicians who practice at multiple locations; instead, it is available to any MIPS eligible clinician who may not have the ability to impact their practice's health IT decisions. We noted that in such cases, the MIPS eligible clinician must have no control over the availability of CEHRT. We further specified that a majority (50 percent or more) of their outpatient encounters must occur in locations where they have no control over the health IT decisions of the facility. Control does not imply final decision-making authority, as demonstrated in the example given in our proposal.

    Comment: A commenter recommended granting MIPS eligible clinicians that are eligible for Social Security benefits a hardship exception because of the considerable expenditures of both human and financial capital that would require several years to see a return on investment.

    Response: While we understand this suggestion, we do not believe that it is appropriate to reweight this category solely on the basis of a MIPS eligible clinician's age or Social Security status. We have analyzed EHR Incentive Program data, as well as provider feedback, and believe that while other factors, such as the lack of access to CEHRT or unforeseen environmental circumstances, may constitute a significant hardship, the age of a MIPS eligible clinician alone or the preference to not obtain CEHRT does not.

    Comment: Commenters requested that application for reweighting not be burdensome for MIPS eligible clinicians to submit. One commenter requested that CMS clarify whether MIPS eligible clinicians will need to submit an annual application to be excluded from the advancing care information performance category or if this will occur automatically and the commenter preferred the latter.

    Response: We noted that CMS would specify the form and manner that reweighting applications are submitted outside the rulemaking process. Additional information on the submission process will be available after the rule is published. We do note that if an application is required, it must be submitted annually.

    Comment: Some commenters stated that MIPS eligible clinicians who did not qualify for meaningful use will need more time to familiarize themselves with EHRs and could receive a low MIPS final score and negative payment adjustment due to lack of CEHRT. They believed that these MIPS eligible clinicians most likely serve high-disparity populations and that the most vulnerable patient populations could be negatively impacted.

    Response: We acknowledge that under MIPS more clinicians will be subject to the requirements of EHR reporting than were previously eligible under the EHR Incentive Program and may not have advancing care information measures that are applicable or available for them to submit. For this reason, we have proposed to reweight the advancing care information performance category to zero for hospital-based MIPS eligible clinicians, NPs, PAs, CRNAs and CNSs. We have also allowed for MIPS eligible clinicians to apply for a reweighting of their advancing care information performance category score should the MIPS eligible clinician not have measures that are applicable or available to them for various reasons as discussed in section II.E.5.g. of this final rule with comment period. We do not agree that MIPS eligible clinicians who were not eligible for the EHR Incentive Programs are concentrated in high disparity populations, nor do we believe that serving such a population would limit a MIPS eligible clinician's ability to report on the advancing care information objectives and measures.

    After consideration of the comments, we are finalizing our policy to reweight the advancing care information performance category to zero percent of the MIPS final score for MIPS eligible clinicians facing a significant hardship, as proposed. For the reasons discussed in the proposed rule, we continue to assume that these clinicians may not have sufficient measures applicable and available to them for the advancing care information performance category. Should a MIPS eligible clinician apply for their advancing care information performance category to be reweighted under this policy but subsequently determine that their situation has changed such that they believe there are sufficient measures applicable and available to them for the advancing care information performance category, they may report on the measures. If they choose to report, they will be scored on the advancing care information performance category like any other MIPS eligible clinician, and the category will be given the weighting prescribed by section 1848(q)(5)(E) of the Act regardless of the MIPS eligible clinician's advancing care information performance category score.

    (iii) Nurse Practitioners, Physician Assistants, Clinical Nurse Specialists, and Certified Registered Nurse Anesthetists

    The definition of a MIPS eligible clinician under section 1848(q)(1)(C) of the Act includes certain non-physician practitioners, including Nurse Practitioners (NPs), Physician Assistants (PAs), Certified Registered Nurse Anesthetists (CRNAs), and Clinical Nurse Specialists (CNSs). CRNAs and CNSs are not eligible for the incentive payments under Medicare or Medicaid for the adoption and meaningful use of CEHRT (sections 1848(o) and 1903(t) of the Act, respectively) or subject to the meaningful use payment adjustment under Medicare (section 1848(a)(7)(A) of the Act), and thus, they may have little to no experience with the adoption or use of CEHRT. Similarly, NPs and PAs may also lack experience with the adoption or use of CEHRT, as they are not subject to the payment adjustment under section 1848(a)(7)(A) of the Act. We further noted that only 19,281 NPs and only 1,379 PAs have attested to the Medicaid EHR Incentive Program. Nurse practitioners are eligible for the Medicaid incentive payments under section 1903(t) of the Act, as are PAs practicing in an FQHC or an RHC that is led by a PA, if they meet patient volume requirements and other eligibility criteria.

    Because many of these non-physician clinicians are not eligible to participate in the Medicare and/or Medicaid EHR Incentive Program, we have little evidence as to whether there are sufficient measures applicable and available to these types of MIPS eligible clinicians under our proposals for the advancing care information performance category. The low numbers of NPs and PAs who have attested for the Medicaid incentive payments may indicate that EHR Incentive Program measures required to earn the incentive are not applicable or available, and thus, would not be applicable or available under the advancing care information performance category. For these reasons, we proposed to rely on section 1848(q)(5)(F) of the Act to assign a weight of zero to the advancing care information performance category if there are not sufficient measures applicable and available to NPs, PAs, CRNAs, and CNSs. We would assign a weight of zero only in the event that an NP, PA, CRNA, or CNS does not submit any data for any of the measures specified for the advancing care information performance category. We encourage all NPs, PAs, CRNAs, and CNSs to report on these measures to the extent they are applicable and available; however, we understand that some NPs, PAs, CRNAs, and CNSs may choose to accept a weight of zero for this performance category if they are unable to fully report the advancing care information measures. We believe this approach is appropriate for the first MIPS performance period based on the payment consequences associated with reporting, the fact that many of these types of MIPS eligible clinicians may lack experience with EHR use, and our current uncertainty as to whether we have proposed sufficient measures that are applicable and available to these types of MIPS eligible clinicians. We noted that we would use the first MIPS performance period to further evaluate the participation of these MIPS eligible clinicians in the advancing care information performance category and would consider for subsequent years whether the measures specified for this category are applicable and available to these MIPS eligible clinicians.

    We invited comments on our proposal. We additionally sought comment on how the advancing care information performance category could be applied to NPs, PAs, CRNAs, and CNSs in future years of MIPS, and the types of measures that would be applicable and available to these types of MIPS eligible clinicians.

    The following is a summary of the comments we received regarding our proposal.

    Comment: Commenters generally supported our proposal to reweight the advancing care information performance category for those MIPS eligible clinicians without sufficient measures. Most commenters supported CMS' proposal that submission under the advancing care information performance category for NPs, PAs, CNSs, and CRNAs would be optional in 2017, given these non-physicians' lack of past participation in meaningful use.

    Response: We appreciate the commenters' support of this proposal, and we agree, for the reasons stated in the proposed rule, that it is appropriate to assign a weight of zero only if the aforementioned practitioners do not submit data for any of the advancing care information performance category measures.

    Comment: One commenter urged CMS to revise the proposed rule so that NPs and advanced practice nurses (APNs) can obtain EHR Incentive Program incentives.

    Response: This final rule with comment period implements the MIPS as authorized under section 1848(q) of the Act. Eligibility for incentive payments under the EHR Incentive Program is determined under a separate section of the statute. Any change to the eligibility or extension of incentive payments under the EHR Incentive Program would require a change to the law and is not in the scope of this final rule with comment period.

    Comment: One commenter requested CMS make advancing care information performance category participation optional for clinicians who primarily provide services in post-acute care settings, which have not been part of the EHR Incentive Program in the past. Several commenters supported excluding clinicians not eligible to participate in the Medicare/Medicaid EHR Incentive Programs.

    Response: While we understand the concerns of the commenters, we disagree with their suggestions. Section 1848(q)(1)(C)(i) of the Act defines a MIPS eligible clinician to include specific types of clinicians and provides discretion to include other types of clinicians in later years. In the future, we expect additional clinician types will be added to the definition of MIPS eligible clinician.

    Comment: A commenter noted that by allowing additional non-physician practitioners (NPs, PAs, and in the future, dietitians, etc.) to be eligible to participate in the advancing care information performance category, the number of eligible clinicians under MIPS will greatly increase from the number of eligible clinicians in the EHR Incentive Program. The increased number of eligible clinicians will cause an unnecessary burden for organizational support staff to track and report their data. Commenters recommended that advancing care information performance category data reporting be rolled up to the clinicians under whom they bill, so that the reporting clinician's submission includes data representing their MIPS eligible clinicians.

    Response: As we noted above, the definition of MIPS eligible clinician is broader than the definition of an EP in the EHR Incentive Program, and we intend to add additional clinician types to the definition of MIPS eligible clinician in future years. Under this program, we have added a group reporting option in which MIPS eligible clinicians who have reassigned their billing rights to a TIN may report at the group or TIN level instead of the individual level. We believe this addresses the administrative concerns raised by this comment and allows MIPS eligible clinicians to aggregate their data for reporting, therefore reducing reporting burden.

    After consideration of the comments, we are finalizing our policy for NPs, PAs, CRNAs, and CNSs as proposed. These MIPS eligible clinicians may choose to submit advancing care information measures should they determine that these measures are applicable and available to them; however, we note that if they choose to report, they will be scored on the advancing care information performance category like all other MIPS eligible clinicians and the performance category will be given the weighting prescribed by section 1848(q)(5)(E) of the Act regardless of their advancing care information performance category score.

    (iv) Medicaid

    In the 2015 EHR Incentive Programs final rule, we adopted an alternate method for demonstrating meaningful use for certain Medicaid EPs that would be available beginning in 2016, for EPs attesting for an EHR reporting period in 2015 (80 FR 62900). Certain Medicaid EPs who previously received an incentive payment under the Medicaid EHR Incentive Program, but failed to meet the eligibility requirements for the program in subsequent years, are permitted to attest using the CMS Registration and Attestation system for the purpose of avoiding the Medicare payment adjustment (80 FR 62900). However, as discussed in the proposed rule, section 101(b)(1)(A) of the MACRA amended section 1848(a)(7)(A) of the Act to sunset the meaningful use payment adjustment for Medicare EHR Incentive Program EPs at the end of CY 2018. This means that after the CY 2018 payment adjustment year, there will no longer be a separate Medicare EHR Incentive Program for EPs, and therefore Medicaid EPs who may have used this alternate method for demonstrating meaningful use will no longer be subject to a payment adjustment under the Medicare EHR Incentive Program at that time. Accordingly, there will no longer be a need for this alternate method of demonstrating meaningful use after the CY 2018 payment adjustment year.

    Similarly, beginning in 2014, states were required to collect, upload and submit attestation data for Medicaid EPs for the purposes of demonstrating meaningful use to avoid the Medicare payment adjustment (80 FR 62915). This form of reporting will also no longer need to continue with the sunset of the meaningful use payment adjustment for Medicare EHR Incentive Program EPs at the end of CY 2018. Accordingly, we proposed to amend the reporting requirement described at 42 CFR 495.316(g) by adding an ending date such that after the CY 2018 payment adjustment year states would no longer be required to report on meaningful EHR users.

    We noted that the Medicaid EHR Incentive Program for EPs was not impacted by the MACRA and the requirement under section 1848(q) of the Act to establish the MIPS program. We did not propose any changes to the objectives and measures previously established in rulemaking for the Medicaid EHR Incentive Program, and thus, EPs participating in that program must continue to report on the objectives and measures under the guidelines and regulations of that program.

    Accordingly, reporting on the measures specified for the advancing care information performance category under MIPS cannot be used as a demonstration of meaningful use for the Medicaid EHR Incentive Programs. Similarly, a demonstration of meaningful use in the Medicaid EHR Incentive Programs cannot be used for purposes of reporting under MIPS.

    Therefore, MIPS eligible clinicians who are also participating in the Medicaid EHR Incentive Programs must report their data for the advancing care information performance category through the submission methods established for MIPS in order to earn a score for the advancing care information performance category under MIPS and must separately demonstrate meaningful use in their state's Medicaid EHR Incentive Program in order to earn a Medicaid incentive payment. The Medicaid EHR Incentive Program continues through payment year 2021, with 2016 being the final year an EP can begin receiving incentive payments (§ 495.310(a)(1)(iii)). We solicited comments on alternative reporting or proxies for EPs who provide services to both Medicaid and Medicare patients and are eligible for both MIPS and the Medicaid EHR Incentive Payment.

    The following is a summary of the comments we received regarding our proposal to separate the reporting requirements of MIPS and the Medicaid EHR Incentive Programs:

    Comment: Many commenters raised concerns about the reporting burden imposed on MIPS eligible clinicians who also participate in the Medicaid EHR Incentive Programs, as these clinicians would have to report separately to achieve points in the advancing care information performance category and to receive an incentive payment in the Medicaid EHR Incentive Programs. Some commenters urged CMS to align reporting requirements and submission methods across both programs to eliminate duplication in reporting effort. Some commenters requested that CMS eliminate the need to report duplicative quality measures by modifying its proposal so that if quality is reported in a manner acceptable under MIPS or an APM, it would not need to be reported under the Medicaid EHR Incentive Program. Other commenters expressed concern that varying reporting requirements for MIPS eligible clinicians and for hospitals and Medicaid EPs who participate in the EHR Incentive Programs will create hardship for clinician staff, as well as EHR vendors.

    Response: We understand that reporting burden is a concern to MIPS eligible clinicians and CMS remains committed to exploring opportunities for alignment when possible. However, MIPS and the Medicare and Medicaid EHR Incentive Program are two separate programs with distinct requirements. The reporting requirements and scoring methods of the Medicaid EHR Incentive Program and those finalized for the advancing care information performance category in the MIPS program differ significantly. For example, in the Medicaid EHR Incentive Programs, EPs must report on all objectives and meet measure thresholds finalized in the 2015 EHR Incentive Programs final rule. In the advancing care information performance category, MIPS eligible clinicians must report on objectives and measures, but are not required to meet measure thresholds to be considered a meaningful EHR user.

    We remind commenters that while MIPS eligible clinicians would be required to meet the requirements of the advancing care information performance category to earn points toward their MIPS final score, there is no longer a requirement that EPs demonstrate meaningful use under the Medicaid EHR incentive program as a way to avoid the Medicare EHR payment adjustments. However, MIPS eligible clinicians who meet the Medicaid EHR Incentive Program eligibility requirements are encouraged to additionally participate in the Medicaid EHR Incentive Program to be eligible for Medicaid incentive payments through program year 2021.

    Comment: A few commenters proposed that MIPS eligible clinicians who are participating in the Medicaid EHR Incentive Program be exempted from reporting to MIPS until after the completion of their final EHR performance period. Others proposed allowing clinicians to choose either to report in the Medicaid EHR Incentive Program or the advancing care information performance category of MIPS. One commenter suggested awarding MIPS eligible clinicians 30 points toward the advancing care information performance category score if they successfully attest to meaningful use in the Medicaid EHR Incentive Program.

    Response: As previously mentioned, the objective and measure requirements of the Medicaid EHR Incentive Program and those finalized for the advancing care information performance category in the MIPS program vary too greatly for one to serve as a proxy for the other.

    We are finalizing our Medicaid policy as proposed.

    h. APM Scoring Standard for MIPS Eligible Clinicians Participating in MIPS APMs

    Under section 1848(q)(1)(C)(ii) of the Act, as added by section 101(c)(1) of MACRA and as discussed in section II.F.5. of this final rule with comment period, Qualifying APM Participants (QPs) are not MIPS eligible clinicians and are thus excluded from MIPS payment adjustments. Partial Qualifying APM Participants (Partial QPs) are also not MIPS eligible clinicians unless they opt to report and be scored under MIPS. All other eligible clinicians participating in APMs who are MIPS eligible clinicians are subject to MIPS requirements, including reporting requirements and payment adjustments. However, most current APMs already assess their participants on cost and quality of care and require engagement in certain care improvement activities.

    We proposed at § 414.1370 to establish a scoring standard for MIPS eligible clinicians participating in certain types of APMs (“APM scoring standard”) to reduce participant reporting burden by eliminating the need for such APM eligible clinicians to submit data for both MIPS and their respective APMs. In accordance with section 1848(q)(1)(D)(i) of the Act, we proposed to assess the performance of a group of MIPS eligible clinicians in an APM Entity that participates in certain types of APMs based on their collective performance as an APM Entity group, as defined at § 414.1305.

    In addition to reducing reporting burden, we sought to ensure that eligible clinicians in APM Entity groups are not assessed in multiple ways on the same performance activities. For instance, performance on the generally applicable cost measures under MIPS could contribute to upward or downward adjustments to payments under MIPS in a way that is not aligned with the strategy in an ACO initiative for reducing total Medicare costs for a specified population of beneficiaries attributed through the unique ACO initiative's attribution methodology. Depending on the terms of the particular APM, we believe similar misalignments could be common between the MIPS quality and cost performance categories and the evaluation of quality and cost in APMs. We believe requiring eligible clinicians in APM Entity groups to submit data, be scored on measures, and be subject to payment adjustments that are not aligned between MIPS and an APM could potentially undermine the validity of testing or performance evaluation under the APM. We also believe imposition of these requirements would result in reporting activity that provides little or no added value to the assessment of eligible clinicians, and could confuse eligible clinicians as to which CMS incentives should take priority over others in designing and implementing care activities.

    We proposed to apply the APM scoring standard to MIPS eligible clinicians in APM Entity groups participating in certain APMs (“MIPS APMs”) that meet the criteria listed below (and would be identified as “MIPS APMs” on the CMS Web site). In the proposed rule, we defined the proposed criteria for MIPS APMs, the MIPS performance period for APM Entity groups, the proposed MIPS scoring methodology for APM Entity groups, and other information related to the APM scoring standard (81 FR 28234-28247).

    (1) Criteria for MIPS APMs

    We proposed at § 414.1370 to specify that the APM scoring standard under MIPS would only be applicable to eligible clinicians participating in MIPS APMs, which we proposed to define as APMs (as defined in section II.F.4. of the proposed rule) that meet the following criteria: (1) APM Entities participate in the APM under an agreement with CMS; (2) the APM requires that APM Entities include at least one MIPS eligible clinician on a Participation List; and (3) the APM bases payment incentives on performance (either at the APM Entity or eligible clinician level) on cost/utilization and quality measures. We understood that under some APMs the APM Entity may enter into agreements with clinicians or entities that have supporting or ancillary roles to the APM Entity's performance under the APM, but are not participating under the APM Entity and therefore are not on a Participation List. We proposed not to consider eligible clinicians under such arrangements to be participants for purposes of the APM Entity group to which the APM scoring standard would apply. We also proposed that the APM scoring standard would not apply for certain APMs in which the APM Entities participate under statute or our regulations rather than under an agreement with us. We solicited comments on how the APM scoring standard should apply to those APMs as well.

    The criteria for the identification of MIPS APMs are independent of the criteria for Advanced APM determinations discussed in section II.F.4. of this final rule with comment period, so a MIPS APM may or may not also be an Advanced APM. As such, it would be possible that an APM meets all three proposed criteria to be a MIPS APM, but does not meet the Advanced APM criteria described in section II.F.4. of this final rule with comment period. Conversely, it would be possible that an Advanced APM does not meet the criteria listed above because it does not include MIPS eligible clinicians as participants.

    The APM scoring standard would not apply to MIPS eligible clinicians involved in APMs that include only facilities as participants. APMs that do not base payment on cost/utilization and quality measures also would not meet the proposed criteria for the APM scoring standard. Instead, MIPS eligible clinicians participating in these APMs would need to meet the generally applicable MIPS data submission requirements for the MIPS performance period, and their performance would be assessed using the generally applicable MIPS standards, either as individual eligible clinicians or as a group under MIPS.

    As we explained in the proposed rule, we believe the proposed APM scoring standard would help alleviate certain duplicative, unnecessary, or competing data submission requirements for MIPS eligible clinicians participating in MIPS APMs. However, we were interested in public comments on alternative methods that could reduce MIPS data submission requirements to enable MIPS eligible clinicians participating in Advanced APMs to maximize their focus on the care delivery redesign necessary to succeed within the Advanced APM while maintaining the statutory framework that excludes only certain eligible clinicians from MIPS and reducing reporting burden on Advanced APM participants.

    We proposed that the APM scoring standard would not apply to MIPS eligible clinicians participating in APMs that are not MIPS APMs. Rather, such MIPS eligible clinicians would submit data to MIPS and have their performance assessed either as an individual MIPS eligible clinician or group as described in section II.E.2 of this final rule with comment period. Some APMs may involve certain types of MIPS eligible clinicians that are affiliated with an APM Entity but not included in the APM Entity group because they are not participants of the APM Entity. We proposed that even if the APM meets the criteria to be a MIPS APM, MIPS eligible clinicians who are not included in the MIPS APM Participation List would not be considered part of the participating APM Entity group for purposes of the APM scoring standard. For instance, MIPS eligible clinicians in the Next Generation ACO Model might be involved in the APM through a business arrangement with the APM Entity as “preferred providers” but are not directly tied to beneficiary attribution or quality measurement under the APM.

    The following is a summary of the comments we received regarding our proposals for the criteria for an APM to be a MIPS APM, and for the APM scoring standard to apply only to MIPS eligible clinicians who are included in the APM Entity group on a MIPS APM Participation List.

    Comment: A commenter sought clarity on the term “MIPS APM”.

    Response: The term “MIPS APM” is used to describe an APM that meets the three criteria for purposes of the APM scoring standard: (1) APM Entities participate in the APM under an agreement with CMS; (2) the APM requires that APM Entities include at least one MIPS eligible clinician on a Participation List; and (3) the APM bases payment incentives on performance (either at the APM Entity or eligible clinician level) on cost/utilization and quality measures. Individuals and groups that do not participate in MIPS APMs will be scored under the generally applicable MIPS scoring standards. We note that the APM scoring standard has no bearing on the QP determination for eligible clinicians in Advanced APMs.

    Comment: Some commenters stated that the definition of MIPS APMs is too limiting and prevents eligible clinicians in APMs that are not considered MIPS APMs from reporting as APM Entities. Other commenters indicated that basing payment on quality measures should not be a MIPS APM criterion.

    Response: We continue to believe the criteria we proposed for a MIPS APM will appropriately identify APMs in which the eligible clinicians would be subject to potentially duplicative and conflicting incentives and reporting requirements if they were required to report and be scored under the generally applicable MIPS standard. The eligible clinicians in a MIPS APM that is not also an Advanced APM are considered MIPS eligible clinicians and are subject to MIPS reporting requirements and payment adjustments (unless they are otherwise excluded). The eligible clinicians in a MIPS APM that is an Advanced APM are also considered MIPS eligible clinicians unless they meet the threshold to be a QP for a year. In any MIPS APM, whether or not it is also an Advanced APM, eligible clinicians may already be required to report on the quality, cost and other measures on which their performance is assessed as part of their participation in the APM, leading to potentially duplicative or conflicting reporting under MIPS. Additionally, eligible clinicians in these MIPS APMs already have payment incentives tied to performance on quality and cost/utilization measures, creating the potential for conflicting assessments based on the same or similar data. Although other APMs may have similar reporting requirements to the MIPS APMs such that there is some level of duplicative reporting, unless an APM includes performance metrics tied to payment incentives in the APM, we do not believe there is the same potential for duplication and conflict. We continue to believe that eligible clinicians in APMs that meet all three of the criteria to be MIPS APMs would face a substantial level of duplication and/or conflict between reporting and assessment under the APM and the generally applicable MIPS standard. In addition, the participants in other APMs may not be subject to MIPS at all because the participants are not MIPS eligible clinicians. To the extent that eligible clinicians do participate in APMs that are not MIPS APMs, we believe they would often be in a position to consider group reporting options under MIPS.

    Comment: A few commenters suggested CMS simplify MIPS reporting and scoring by requiring no additional reporting requirements for any MIPS eligible clinicians in MIPS APMs to receive a MIPS final score. One commenter stated the APM Scoring Standard does not go far enough to reduce reporting burden because APM participants will still be required to report improvement activities and advancing care information.

    Response: We believe the proposed policy included meaningful reductions in reporting burden for MIPS APM participants. The additional policies we are finalizing in this rule (such as assigning a MIPS APM improvement activities score) will reduce this burden further. However, we do not believe it would be feasible to fully eliminate reporting requirements for MIPS APM participants while adhering to the core goals and structure of MIPS.

    Comment: A few commenters stated it is untenable to require physician groups to simultaneously pursue quality metrics, reduce costs, and build the infrastructure required to participate in APMs and MIPS. A few commenters indicated that the APM scoring standard may undermine the intent of the statute to have eligible clinicians join APMs by not providing sufficient reductions in burden under MIPS. Another commenter recommended that the third MIPS APM criterion be changed to “the APM bases payment incentives on performance on cost/utilization and/or quality measures” instead of requiring that the APM base payment incentives on both cost/utilization and quality measures. Several commenters recommended that CMS make QP determinations early enough so that eligible clinicians participating in Advanced APMs would know in advance of the MIPS submission period whether they are QPs for the year and, as such would not have to report to MIPS at all. One commenter did not support implementation of the APM scoring standard because the commenter stated that the proposal was confusing and may incentivize physicians to remain in the FFS program rather than progress towards APMs.

    Response: We recognize that MIPS APM participants are diligently working to provide high quality, cost-effective care to their patients. We also recognize the burden of reporting to more than one CMS program. We proposed to adopt the APM scoring standard with the intent of reducing the reporting burden for eligible clinicians and alleviating duplicative and/or conflicting payment methodologies that could potentially distract eligible clinicians from the goals and objectives they agreed to as an APM participant, or provide incentives that conflict with those under the APM. We also acknowledge that some stakeholders may find the APM scoring standard requirements confusing, and we will continue to consider ways to further simplify the APM scoring standard in future rulemaking. We believe much of this confusion will be resolved through continued discussions with all of our stakeholders, participants, and patients, through CMS's planned technical assistance and education and outreach activities for the Quality Payment Program, and through experience with this new program in the first performance year. We also note that the finalized QP Performance Period, described in section II.F.5. of this final rule with comment period, modifies the proposed QP determination timeframe so that eligible clinicians who are QPs for a year will not need to report MIPS data. However, an eligible clinician that is in an Advanced APM but does not meet the QP threshold will still be subject to MIPS. Furthermore, eligible clinicians who are participants in a MIPS APM that is not an Advanced APM cannot be QPs and thus will be subject to MIPS under the APM scoring standard.

    Comment: A commenter recommended that CMS not reward low-value care. The commenter indicated that by reducing the cost performance category to zero and reducing the weight for the quality performance category to zero for MIPS APMs other than the Shared Savings Program and Next Generation ACO Model, CMS may allow such MIPS APMs to perform poorly on measures of efficiency and quality at the expense of other clinicians who are truly delivering high-value care. The commenter suggested that CMS either measure all MIPS eligible clinicians in the same way, or allow MIPS APM participants to elect a neutral score for the quality and cost MIPS performance categories.

    Response: We do not believe the APM scoring standard rewards low-value care, but rather that it provides MIPS eligible clinicians in MIPS APMs a way to meet the requirements of the MIPS while focusing on the goals of the APM to improve quality and lower the cost of care. The terms and conditions of MIPS APMs themselves hold participants accountable for the cost and quality of care. In accordance with the statute, only Partial QPs have the option whether to report and be subject to a MIPS payment adjustment for a year, as described in section II.F.5. of this final rule with comment period. All MIPS eligible clinicians, including those subject to the APM scoring standard, will continue to receive final scores and MIPS payment adjustments.

    Comment: A commenter indicated the creation of the APM scoring standard provides a large advantage to MIPS APM participants, disadvantaging other MIPS eligible clinicians.

    Response: We acknowledge that eligible clinicians in MIPS APMs may achieve high scores in some MIPS performance categories. In some categories such as improvement activities, the statute encourages and credits participation in an APM. In others, MIPS eligible clinicians may perform well because of the requirements they meet by virtue of participating in MIPS APMs. However, we believe all MIPS eligible clinicians have the opportunity to score highly, and as such we do not believe the APM scoring standard will necessarily disadvantage other MIPS eligible clinicians. We believe MIPS eligible clinicians under the APM scoring standard have the potential to receive high MIPS payment adjustments because they successfully perform the requisite activities, not simply because they participate in an APM.

    Comment: One commenter recommended CMS ensure that the APM scoring standard actually reduces administrative burden in order to allow MIPS APM participants to focus on APM efforts.

    Response: We believe this final rule with comment period addresses many of the concerns expressed by commenters about the MIPS reporting burden for MIPS APM participants and we will continue to work to identify ways to ensure APMs and their participants can focus their efforts to achieve the care transformation goals of the APM.

    Comment: Several commenters expressed support for the APM scoring standard as proposed and applauded CMS for its efforts to reduce reporting burden and allow MIPS APM participants to focus on the aims of those APMs without misaligning incentives or having redundant or conflicting requirements across programs. One commenter stated they supported the proposed APM scoring standard, but thought CMS should offer sufficient education and outreach to clinicians so they understand it, as it adds complexity to the program. Two commenters requested that CMS develop a flexible scoring methodology for MIPS APMs that would recognize the significant investments to transform healthcare made by APM participants. One commenter requested that the APM scoring standard incorporate all MIPS eligible clinicians in large multispecialty groups that may have some but not all MIPS eligible clinicians participating in MIPS APMs. Another commenter recommended that the APM scoring standard be retained in the future, allowing APM decisions to be made with clarity, while another commenter supported the APM scoring standard generally but thought it should be optional.

    Response: We appreciate the general support for the proposed APM scoring standard. We will continue to consider future refinements to the APM scoring standard to ensure we are supporting eligible clinicians in their efforts to transform health care and participate in new payment and care delivery models. Although we understand that some organizations may have some members of their practices in APMs and others not in APMs, we do not believe that the APM scoring standards should apply more broadly than the identified group of actual participants in MIPS APMs, that is, the eligible clinicians included on an APM Entity's Participation List.

    Comment: A few commenters disagreed with our statements in the proposed rule suggesting that APMs focused on hospitals do not have any MIPS eligible clinicians as participants, stating that surgeons will be involved in hip and knee replacements under CJR and CJR quality performance measures should count for them for purposes of MIPS. Another commenter stated that the MIPS APM criteria should be broader to include the BPCI Initiative, CJR, and other episode payment models. A few commenters stated that such APMs have been successful at reducing costs and improving quality and that not including them as MIPS APMs discourages clinicians from participation. A few commenters suggested that CMS should amend facility-based APMs to require Participation Lists. One commenter suggested that the APM scoring standard requirement that a MIPS APM must require APM Entities to include at least one eligible clinician on a Participation List should be delayed until more MIPS APMs are available. A few commenters suggested the criteria for a MIPS APM be expanded to include other APMs such as those APMs that have an agreement with another payer outside the Medicare program or those that have a CMS agreement to participate in an APM through another entity such as a convener. One commenter expressed concern that by not including all APMs as MIPS APMs some APM participants will be forced to report twice on quality.

    Response: An APM that is hospital-based may be a MIPS APM if it meets all of the MIPS APM criteria, including the criterion that the APM must require APM Entities to include at least one MIPS eligible clinician on a Participation List. If this criterion is not met, the APM is not a MIPS APM and the APM scoring standard does not apply.

    Particularly relevant to facility- or hospital-based APMs (because some do not require APM Entities to maintain Participation Lists), any MIPS eligible clinicians that do not qualify as QPs or Partial QPs, and are not included on a Participation List of an APM Entity that participates in the MIPS APM, would report to MIPS and be scored according to the generally applicable MIPS requirements for an individual or group. The APM scoring standard is intended to ensure that the MIPS eligible clinicians that are directly and collectively accountable for beneficiary attribution and quality and cost/utilization performance under the MIPS APM are able to focus their efforts on the care transformation objectives of the APM rather than on potentially duplicative reporting of measures. We note that the MIPS eligible clinicians that are subject to the APM scoring standard are not necessarily the same as the eligible clinicians who could become QPs via participation in Advanced APMs, as described in section II.F.5. of this final rule with comment period. For instance, in certain circumstances, Affiliated Practitioners could become QPs, but because the Advanced APM does not base payment incentives for these eligible clinicians (either at the APM Entity or the eligible clinician level) on their performance on cost/utilization and quality measures we do not consider the APM requirements to be sufficiently related to MIPS reporting requirements such that the APM scoring standard should be applied. In other words, the QP determination for the APM incentive and the MIPS performance categories measure different aspects of performance that align differently with the roles of affiliated practitioners. The QP determination depends on the level of payments or patients furnished services through an Advanced APM. In contrast, MIPS payment adjustments depend on an assessment of performance on cost and quality in four categories. Whereas affiliated practitioners may furnish services through an Advanced APM, contributing to collective achievement under the APM, the QP threshold, in and of itself, does not assess or directly incentivize their performance based on cost and quality. Therefore, we do not believe there is the same potential for overlapping requirements under MIPS and APMs for such MIPS eligible clinicians. Under certain Advanced APMs such as CJR, Affiliated Practitioners may be the primary eligible clinicians receiving payment through the Advanced APM, but cost and quality measurement and reporting under the Advanced APM are the responsibility of participating hospitals rather than eligible clinicians. As such, there is minimal potential for overlap between requirements under MIPS and the APM for these MIPS eligible clinicians.

    We agree with commenters that we should continue to consider whether there are opportunities for additional APMs, including existing episode payment models, to become MIPS APMs. As we work toward that goal we believe we should move forward with the policy to avoid potentially duplicative or conflicting reporting or incentives for MIPS eligible clinicians participating in APMs that currently meet the MIPS APM criteria. In the future, we may consider amending existing APMs to meet MIPS APM criteria. However, as stated in the previous response, we do not believe that application of the APM scoring standard should be expanded to include MIPS eligible clinicians such as Affiliated Practitioners whose roles are not directly linked to quality and cost/utilization measures under the APM, or that the MIPS APM criteria should be expanded to include APMs that do not tie payment incentives to performance on quality and cost/utilization measures or APMs (such as CJR) that do not require APM Entities to have at least one eligible clinician on a Participation List. In these instances, we do not believe the requirements of the APM are sufficiently connected to MIPS reporting requirements and scoring such that there is significant potential for duplicative reporting or conflicting incentives between the APM and MIPS, the avoidance of which is the underlying purpose of the APM scoring standard.

    Comment: Two commenters requested that CMS clarify that the MIPS APM payment adjustments resulting from the MIPS APM scoring standard will not be included in the Shared Savings Program and Next Generation ACO Model expenditures for benchmark calculations.

    Response: MIPS payment adjustments resulting from the APM scoring standard are the same as MIPS adjustments for all other MIPS eligible clinicians. There are no unique “MIPS APM payment adjustments.” Rather, the APM scoring standard is only a particular scoring methodology for deriving a final score that results in a MIPS payment adjustment for an eligible clinician. Each APM has its own benchmarking methodology—benchmarking is not necessarily standard across APMs. Making a single determination with respect to the use of MIPS payment adjustments in APM benchmarking is outside the scope of this final rule with comment period.

    Comment: One commenter suggested that CMS create an “Other Payer MIPS APM” category.

    Response: We appreciate the idea of allowing MIPS scoring to be affected by participation in certain payment arrangements with other payers and we may consider the feasibility of doing so in the future in concert with the introduction of the All-Payer Combination Option.

    After considering these comments, we are finalizing the criteria for an APM to be a MIPS APM as proposed with one modification to the first criterion in order to encompass APMs with terms defined through law or regulation. MIPS APMs are APMs that meet the following criteria: (1) APM Entities participate in the APM under an agreement with CMS or by law or regulation; (2) the APM requires that APM Entities include at least one MIPS eligible clinician on a Participation List; and (3) the APM bases payment incentives on performance (either at the APM Entity or eligible clinician level) on cost/utilization and quality measures.

    Below we describe in detail how MIPS APM participants will be identified from an APM Participation List to be included in the APM Entity group under the APM scoring standard.

    We are also finalizing the proposal that the APM scoring standard does not apply to MIPS eligible clinicians who are not on a Participation List for an APM Entity group in a MIPS APM. MIPS eligible clinicians who are not part of the APM Entity group to which the APM scoring standard applies may choose to report to MIPS as individuals or groups according to the generally applicable MIPS rules.

    (2) APM Scoring Standard Performance Period

    We proposed that the performance period for MIPS eligible clinicians participating in MIPS APMs would match the generally applicable performance period for MIPS proposed in section II.E.4. of the proposed rule. We proposed that this policy would apply to all MIPS eligible clinicians participating in MIPS APMs (those that meet the criteria specified in section II.E.5.h.1. of the proposed rule) except in the case of a new MIPS APM for which the first APM performance period begins after the start of the corresponding MIPS performance period. In this instance, the participating MIPS eligible clinicians in the new MIPS APM would submit data to MIPS in the first MIPS performance period for the APM either as individual MIPS eligible clinicians or as a group using one of the MIPS data submission mechanisms for all four performance categories, and report to us using the APM scoring standard for subsequent MIPS performance period(s). Additionally, we anticipate that there might be MIPS APMs that would not be able to use the APM scoring standard (even though they met the criteria for the APM scoring standard and were treated as MIPS APMs in the prior MIPS performance period) in their last year of operation because of technical or resource issues. For example, a MIPS APM in its final year may end earlier than the end of the MIPS performance period (proposed to be December 31). We might not have continuing resources dedicated or available to continue to support the MIPS APM activities under the APM scoring standard if the MIPS APM ends during the MIPS performance period. Therefore, if we determine it is not feasible for the MIPS eligible clinicians participating in the APM Entity to report to MIPS using this APM scoring standard in an APM's last year of operation, the MIPS eligible clinicians in the MIPS APM would need to submit data to MIPS either as individual MIPS eligible clinicians or as a group using one of the MIPS data submission mechanisms for the applicable performance period. We proposed that the eligible clinicians in the MIPS APM would be made aware of this decision in advance of the relevant MIPS performance period.

    The following is a summary of the comments we received regarding our proposal that the APM scoring standard performance period will be the same as the MIPS performance period.

    Comment: A few commenters recommended CMS maintain consistency between the reporting period for MIPS and MIPS APMs to reduce administrative burden, and a commenter supported the same 12-month performance period for use by MIPS and APMs. One commenter requested a 90-day reporting period for 2017.

    Response: We agree with the commenters that aligning the performance periods reduces administrative burden. We will maintain the 12-month performance period for the APM scoring standard, but data submitted for the advancing care information and, if necessary, improvement activities performance categories will follow the generally applicable MIPS data submission requirements regarding the number of measures and activities required to be reported during the performance period in order to receive a score for these performance categories. The quality performance category data for MIPS APMs will be submitted in accordance with the specific reporting requirements of the APM, which for most MIPS APMs covers the same 12-month performance period that will be used for the APM scoring standard.

    Comment: Two commenters requested CMS provide guidance for eligible clinicians in a MIPS APM that closes before the end of the performance period.

    Response: We will post the list of MIPS APMs prior to the first day of the MIPS performance period for each year. If the APM would have qualified as a MIPS APM but the APM is ending before the end of the performance period, then the APM will not appear on this list. We will notify participants in any such APMs in advance of the start of the performance period if they will need to report to MIPS using the MIPS individual or group reporting option.

    We are finalizing the APM scoring standard performance period to align with the MIPS performance period.

    (3) How the APM Scoring Standard Differs From the Assessment of Groups and Individual MIPS Eligible Clinicians Under MIPS

    We believe that establishing an APM scoring standard under MIPS will allow APM Entities and their participating eligible clinicians to focus on the goals and objectives of the MIPS APM to improve quality and lower costs of care while avoiding potentially conflicting incentives and duplicative reporting that could occur as a result of having to submit separate or additional data to MIPS. The APM scoring standard we proposed is similar to group assessment under MIPS as described in section II.E.3.d. of the proposed rule, but would differ in one or more of the following ways: (1) Depending on the terms and conditions of the MIPS APM, an APM Entity could be comprised of a sole MIPS eligible clinician (for example, a physician practice with only one eligible clinician could be considered an APM Entity); (2) the APM Entity could include more than one unique TIN, as long as the MIPS eligible clinicians are identified as participants in the APM by their unique APM participant identifiers; (3) the composition of the APM Entity group could include APM participant identifiers with TIN/NPI combinations such that some MIPS eligible clinicians in a TIN are APM participants and other MIPS eligible clinicians in that same TIN are not APM participants. In contrast, assessment as a group under MIPS requires a group to be comprised of at least two MIPS eligible clinicians who have assigned their billing rights to a TIN. It also requires that all MIPS eligible clinicians in the group use the same TIN.
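
    For purposes of illustration only, the following Python sketch contrasts the two group definitions described above: a group under the generally applicable MIPS rules must consist of two or more MIPS eligible clinicians billing under a single TIN, whereas an APM Entity group may consist of a single clinician and may span multiple TINs. The clinician records, identifier format, and helper names are hypothetical and are not part of this final rule with comment period.

        # Illustrative sketch only; values, identifier format, and helper names are hypothetical.

        def is_valid_mips_group(clinicians):
            # A MIPS group requires at least two eligible clinicians who have
            # assigned their billing rights to the same TIN.
            tins = {c["tin"] for c in clinicians}
            return len(clinicians) >= 2 and len(tins) == 1

        def is_valid_apm_entity_group(clinicians):
            # An APM Entity group may be a single clinician and may span multiple
            # TINs, provided each participant carries an APM participant identifier.
            return len(clinicians) >= 1 and all("apm_participant_id" in c for c in clinicians)

        # Two participants in the same APM Entity billing under different TINs.
        apm_entity_group = [
            {"tin": "123456789", "npi": "1111111111", "apm_participant_id": "AA-A1234-123456789-1111111111"},
            {"tin": "987654321", "npi": "2222222222", "apm_participant_id": "AA-A1234-987654321-2222222222"},
        ]
        assert is_valid_apm_entity_group(apm_entity_group)
        assert not is_valid_mips_group(apm_entity_group)  # spans two TINs, so not a MIPS group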

    In addition to the APM Entity group composition being potentially different than that of a group as generally defined under MIPS, we proposed for the APM scoring standard that we would generate a MIPS final score by aggregating all scores for MIPS eligible clinicians in the APM Entity that is participating in the MIPS APM to the level of the APM Entity. As we explained in the proposed rule, we believe that aggregating the MIPS performance category scores at the level of the APM Entity is more meaningful to, and appropriate for, these MIPS eligible clinicians because they have elected to participate in a MIPS APM and collectively focus on care transformation activities to improve the quality of care.

    Further, depending on the type of MIPS APM, we proposed that the weights assigned to the MIPS performance categories under the APM scoring standard for MIPS eligible clinicians who are participating in a MIPS APM may be different from the performance category weights for MIPS eligible clinicians not participating in a MIPS APM for the same performance period. For example, we proposed that under the APM scoring standard, the weight for the cost performance category will be zero and that for certain MIPS APMs, the weight for the quality performance category will be zero for the 2019 payment year. Where the weight for a performance category is zero, neither the APM Entity nor the MIPS eligible clinicians in the MIPS APM would need to report data in these categories, and we would redistribute the weights for the quality and cost performance categories to the improvement activities and advancing care information performance categories to maintain a total weight of 100 percent.
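
    To illustrate the redistribution described above, the following sketch zeroes out the quality and cost performance category weights and redistributes their combined weight to the improvement activities and advancing care information performance categories so that the total remains 100 percent. The starting weights and the redistribution split shown here are assumptions used solely for demonstration; they are not the weights adopted in this final rule with comment period.

        # Illustrative sketch; the starting weights and redistribution split are
        # assumptions for demonstration only.

        def reweight_for_apm_scoring_standard(weights, zero_out, redistribution_split):
            # Zero the named categories and redistribute their combined weight so
            # that the performance category weights still total 100 percent.
            freed = sum(weights[cat] for cat in zero_out)
            reweighted = {cat: (0 if cat in zero_out else w) for cat, w in weights.items()}
            for cat, share in redistribution_split.items():
                reweighted[cat] += freed * share
            assert abs(sum(reweighted.values()) - 100) < 1e-9
            return reweighted

        # Hypothetical generally applicable weights (percent of the final score).
        general_weights = {"quality": 60, "cost": 0,
                           "improvement_activities": 15, "advancing_care_information": 25}
        apm_weights = reweight_for_apm_scoring_standard(
            general_weights,
            zero_out={"quality", "cost"},
            redistribution_split={"improvement_activities": 0.25, "advancing_care_information": 0.75},
        )
        # apm_weights == {"quality": 0, "cost": 0,
        #                 "improvement_activities": 30.0, "advancing_care_information": 70.0}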

    To implement certain elements of the APM scoring standard, we need to use the Shared Savings Program (section 1899 of the Act) and CMS Innovation Center (section 1115A of the Act) authorities to waive specific statutory provisions related to MIPS reporting and scoring. Section 1899(f) of the Act authorizes waivers of title XVIII requirements as may be necessary to carry out the Shared Savings Program, and section 1115A(d)(1) of Act authorizes waivers of title XVIII requirements as may be necessary solely for purposes of testing models under section 1115A of the Act. For each section in which we proposed scoring methodologies and waivers to enable the proposed approaches, we described how the use of waivers is necessary under the respective waiver authority standards. The underlying purpose of APMs is for CMS to pay for care in ways that are unique from FFS payment and to test new ways of measuring and assessing performance. If the data submission requirements and associated adjustments under MIPS are not aligned with APM-specific goals and incentives, the participants receive conflicting messages from us on priorities, which could create uncertainty and severely degrade our ability to evaluate the impact of any particular APM on the overall cost and quality of care. Therefore, we explained our belief that, for the reasons stated in section II.E.5.h. of the proposed rule certain waivers are necessary for testing and operating APMs and for maintaining the integrity of our evaluation of those APMs.

    In the proposed rule we noted that for at least the first performance year, we do not anticipate that any APMs other than those under sections 1115A or 1899 of the Act would meet the criteria to be MIPS APMs. In the event that we do anticipate other types of APMs (demonstrations under section 1866C of the Act or required by federal law) will become MIPS APMs for a future year, we will address MIPS scoring for eligible clinicians in those APMs in future rulemaking.

    The following is a summary of the comments we received regarding our proposals to use the Shared Savings Program (section 1899 of the Act) and CMS Innovation Center (section 1115A of the Act) authorities to waive specific statutory provisions related to MIPS reporting and scoring to implement the APM Scoring Standard for MIPS APMs and to apply the MIPS final score at the APM Entity level.

    Comment: A few commenters expressed support for CMS' use of waiver authorities to establish the APM scoring standard. Several commenters also supported the proposal to calculate the final score at the APM Entity level. One commenter supported averaging scores for all clinicians in a MIPS APM Entity for purposes of the MIPS payment adjustment. A few commenters had concerns about aggregating all data for the clinicians linked to an APM Entity, and one commenter recommended that the APM scoring standard be optional.

    Response: We continue to believe the final score derived at the APM Entity level should be the score used for purposes of determining the MIPS payment adjustment for each MIPS eligible clinician in that APM Entity group. As part of their participation in any MIPS APM, eligible clinicians should be working collaboratively and advancing shared care goals for aligned patients. We believe this collaboration toward shared goals under the MIPS APM differentiates these MIPS eligible clinicians from those in a MIPS group defined by a billing TIN, and supports our proposal to score these clinicians as a group.

    The APM Entity final score is derived by aggregating the scores for each of the performance categories as applicable. For example, if the CPC+ model is determined to be a MIPS APM, participating MIPS eligible clinicians in CPC+ will not be evaluated in the cost and quality performance categories, which will have a zero weight for the first performance year. In this example, the final score will be calculated for MIPS eligible clinicians at the APM Entity level by adding the weighted advancing care information score and the assigned improvement activities score for the MIPS APM (see below for the final policies on the scoring for these performance categories). This same final score calculated at the APM Entity level will be applied to each MIPS eligible clinician TIN/NPI combination in the APM Entity as identified on the APM Entity's Participation List.
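
    As a purely illustrative sketch of this calculation, the following example derives a single APM Entity-level final score from the weighted advancing care information score and the assigned improvement activities score, and then applies that score to each TIN/NPI combination on the Participation List. The weights, category scores, and TIN/NPI values are assumptions for demonstration only.

        # Minimal sketch; weights, category scores, and TIN/NPI values are assumed.

        def apm_entity_final_score(category_scores, category_weights):
            # Weighted sum of the applicable performance category scores (0-100 scale).
            return sum(category_scores[cat] * category_weights[cat] / 100 for cat in category_weights)

        weights = {"advancing_care_information": 75, "improvement_activities": 25}         # assumed
        entity_scores = {"advancing_care_information": 80, "improvement_activities": 100}  # assumed
        final_score = apm_entity_final_score(entity_scores, weights)  # 0.75*80 + 0.25*100 = 85.0

        # The same APM Entity-level final score is applied to every TIN/NPI
        # combination on the APM Entity's Participation List.
        participation_list = ["123456789/1111111111", "123456789/2222222222", "987654321/3333333333"]
        scores_by_tin_npi = {tin_npi: final_score for tin_npi in participation_list}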

    Comment: A commenter requested clarification on how reporting will be accomplished with groups where MIPS eligible clinicians participate in multiple APMs, especially multiple Advanced APMs.

    Response: As finalized in section II.E.6. of this final rule with comment period, if a single TIN/NPI combination for a MIPS eligible clinician is in two or more MIPS APMs, we will use the highest final score to determine the MIPS payment adjustment for that MIPS eligible clinician. MIPS adjustments apply to the TIN/NPI combination, so to the extent that a MIPS eligible clinician (NPI) participates in multiple MIPS APMs with different TINs, each of those TIN/NPI combinations would be assessed separately under each respective APM Entity.
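
    The following brief sketch illustrates this rule: when a single TIN/NPI combination appears on the Participation Lists of more than one MIPS APM Entity, the highest APM Entity final score is used, while the same NPI billing under a different TIN is assessed separately. The scores shown are hypothetical.

        # Hypothetical scores illustrating the highest-score rule for a TIN/NPI
        # combination that participates in more than one MIPS APM.
        entity_scores_by_tin_npi = {
            "123456789/1111111111": [72.5, 85.0],  # same TIN/NPI in two MIPS APM Entities
            "987654321/1111111111": [64.0],        # same NPI under a different TIN, assessed separately
        }
        final_score_by_tin_npi = {k: max(v) for k, v in entity_scores_by_tin_npi.items()}
        # -> {"123456789/1111111111": 85.0, "987654321/1111111111": 64.0}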

    We are finalizing the proposal to use the Shared Savings Program and CMS Innovation Center authorities under sections 1899 and 1115A of the Act, respectively, to waive specific statutory requirements related to MIPS reporting and scoring in order to implement the APM scoring standard. We note that although we proposed to use our authority under section 1899(f) of the Act to waive these statutory requirements in order to implement the APM scoring standard for MIPS eligible clinicians participating in Shared Savings Program ACOs, we believe we could also use our authority under section 1899(b)(3)(D) of the Act to accomplish this result. Section 1899(b)(3)(D) of the Act allows us to incorporate reporting requirements under section 1848 of the Act into the reporting requirements for the Shared Savings Program, as we determine appropriate, and to use alternative criteria than would otherwise apply. Thus, we believe that section 1899(b)(3)(D) of the Act also provides authority to apply the APM scoring standard for MIPS eligible clinicians participating in a Shared Savings Program ACO rather than requiring these MIPS eligible clinicians to report individually or as a group using one of the MIPS data submission mechanisms.

    We are also finalizing our proposal to score MIPS eligible clinicians in the MIPS APM at the APM Entity level. The final score calculated at the APM Entity level will be applied to each MIPS eligible clinician in the APM Entity group.

    (4) APM Participant Identifier and Participant Database

    To ensure we have accurately captured performance data for all of the MIPS eligible clinicians that are participating in an APM, we proposed to establish and maintain an APM participant database that would include all of the MIPS eligible clinicians who are part of the APM Entity. We would establish this database to track participation in all APMs, in addition to specifically tracking participation in MIPS APMs and Advanced APMs. We proposed that each APM Entity be identified in the MIPS program by a unique APM Entity identifier, and we also proposed that the unique APM participant identifier for a MIPS eligible clinician would be a combination of four identifiers including: (1) APM identifier established by CMS (for example, AA); (2) APM Entity identifier established by CMS (for example, A1234); (3) the eligible clinician's billing TIN (for example, 123456789); and (4) NPI (for example, 1111111111). The use of the APM participant identifier will allow us to identify all MIPS eligible clinicians participating in an APM Entity, including instances in which the MIPS eligible clinicians use a billing TIN that is shared with MIPS eligible clinicians who are not participating in the APM Entity. In the proposed rule, we stated that we would plan to communicate to each APM Entity the MIPS eligible clinicians who are included in the APM Entity group in advance of the applicable MIPS data submission deadline for the MIPS performance period.
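
    As a minimal illustration of the identifier structure described above (in Python; not part of this final rule), the following sketch combines the four components into a single APM participant identifier. The field names are ours; the example values are those used in the paragraph above.

        from collections import namedtuple

        # Illustrative only; field names are ours, example values mirror the text above.
        ApmParticipantId = namedtuple(
            "ApmParticipantId", ["apm_id", "apm_entity_id", "tin", "npi"])

        def build_participant_id(apm_id, apm_entity_id, tin, npi):
            """Combine the four identifiers that make up the APM participant identifier."""
            return ApmParticipantId(apm_id, apm_entity_id, tin, npi)

        participant = build_participant_id("AA", "A1234", "123456789", "1111111111")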

    Under the Shared Savings Program, each ACO is formed by a collection of Medicare-enrolled TINs (ACO participants). Under our regulation at 42 CFR 425.118, all Medicare enrolled individuals and entities that have reassigned their rights to receive Medicare payment to the TIN of the ACO participant must agree to participate in the ACO and comply with the requirements of the Shared Savings Program. Because all providers and suppliers that bill through the TIN of an ACO participant are required to agree to participate in the ACO, all MIPS eligible clinicians that bill through the TIN of an ACO participant are considered to be participating in the ACO. For purposes of the APM scoring standard, the ACO would be the APM Entity. The Shared Savings Program has established criteria for determining the list of eligible clinicians participating under the ACO, and we would use the same criteria for determining the list of MIPS eligible clinicians included in the APM Entity group for purposes of the APM scoring standard.

    We recognize that there may be scenarios in which MIPS eligible clinicians may change TINs, use more than one TIN for billing Medicare, change their APM participation status, and/or change other practice affiliations during a performance period. Therefore, we proposed that only those MIPS eligible clinicians who are on the Participation List for the APM Entity in a MIPS APM on December 31 (the last day of the performance period) would be considered part of the APM Entity group for purposes of the APM scoring standard. Consequently, MIPS eligible clinicians who are not listed as participants of an APM Entity in a MIPS APM at the end of the performance period would need to submit data to MIPS through one of the MIPS data submission mechanisms and would have their performance assessed either as individual MIPS eligible clinicians or as a group for all four MIPS performance categories. For example, under the proposal, a MIPS eligible clinician who participates in the APM Entity on January 1, 2017, and leaves the APM Entity on June 15, 2017, would need to submit data to MIPS using one of the MIPS data submission mechanisms and would have their performance assessed either as an individual MIPS eligible clinician or as part of a group. This approach for defining the applicable group of MIPS eligible clinicians was consistent with our proposal for identifying eligible clinician groups for purposes of QP determinations outlined in section II.F.5.b. of the proposed rule; the group of eligible clinicians we use for purposes of a QP determination would be the same as that used for the APM scoring standard. This would be an annual process for each MIPS performance period.

    The following is a summary of the comments we received regarding our proposals to establish an APM participant identifier, a CMS database to identify and track the APM participants, and the dates that we will use to determine whether a MIPS eligible clinician will be included in the MIPS APM for purposes of MIPS reporting under the APM scoring standard.

    Comment: A commenter suggested that CMS use its current enrollment infrastructure, such as PECOS, to identify and track APM participants, which would provide an incentive for eligible clinicians to update their Medicare enrollment information and, in turn, provide CMS with more accurate data on the MIPS eligible clinicians that are in a MIPS APM.

    Response: We will be using existing systems to the extent feasible to ensure we have accurate data on MIPS eligible clinicians and APM participants. Depending on the results of our assessment of available data and systems, we may or may not include any particular system, such as PECOS.

    Comment: A number of commenters supported the use of an APM participant identifier that includes the TIN and NPI for the MIPS APM eligible clinicians and urged collaboration with vendors to build a useful infrastructure. One commenter thought CMS should simplify this APM participant identifier. Two commenters encouraged CMS to make the APM participant identifiers available to stakeholders in real time via an Application Program Interface (API). One commenter indicated the APM participant identifier would add administrative complexity. Another commenter encouraged CMS to make sure there is a consistent approach to identifying both APM and MIPS participants.

    Response: We believe the use of the APM participant identifier will ensure we use accurate information regarding MIPS eligible clinicians and their participation in APMs, and we believe that this will reduce administrative complexity by reducing ambiguity. We appreciate the suggestion to make the APM participant identifier available via an API, and we are exploring a variety of methods to communicate this information.

    Comment: A few commenters were opposed to the December 31 date for determining if the APM Entity participant would be included in the MIPS APM for purposes of the APM scoring standard. A commenter did not support this proposal because MIPS eligible clinicians could be excluded if they were participating throughout the year but not on December 31. One commenter suggested that the eligible clinician should be included in the group if they were in the MIPS APM for more than half of the performance period; another commenter suggested they be considered as participating in the group if they were in the MIPS APM for 90 days. Yet another commenter stated that CMS's proposed policy for determining who participates in a given APM does not sufficiently respond to the often complex billing relationships clinicians maintain across TINs, and that such complex billing relationships are especially common among academic medical center clinicians, who often relocate due to changes in employment based on the academic year. The commenter suggested having a more flexible list of dates for updating the list of MIPS eligible clinicians participating in a MIPS APM (and therefore subject to the APM scoring standard) or looking at claims rather than Participation Lists.

    Response: We agree with the commenters that only using the December 31 date to determine whether an eligible clinician is a MIPS APM participant could potentially impact a clinician's decision on whether or when to leave a MIPS APM and their ability to report to MIPS if they leave the MIPS APM prior to the end of the performance period. We also recognize that an eligible clinician who participates in a MIPS APM in the first 6 months of the performance period and then leaves the MIPS APM may have difficulty reporting to MIPS independent of the APM Entity. If the MIPS eligible clinician leaves the MIPS APM and joins a group or another APM that is not a MIPS APM, the individual would likely be included in the new group's MIPS reporting. But if the MIPS eligible clinician does not join another group, then they would need to report to MIPS as an individual. In such a case, the MIPS eligible clinician may not be able to meet one or more of the MIPS performance category reporting requirements. For example, a MIPS eligible clinician who used CEHRT in an APM Entity through July of a performance period may not have the CEHRT available to report the advancing care information performance category as an individual MIPS eligible clinician during the MIPS submission period. We are revising the points in time at which we will assess whether a MIPS eligible clinician is on a Participation List for purposes of the APM scoring standard. We will review the Participation Lists for MIPS APMs on March 31, June 30, and August 31. A MIPS eligible clinician on the Participation List for an APM Entity in a MIPS APM on at least one of these three dates will be included in the APM Entity group for the purpose of the APM scoring standard. For example, if the Oncology Care Model (OCM) is determined to be a MIPS APM, a MIPS eligible clinician who is identified on the Participation List of an APM Entity participating in OCM from January 1 through April 25 of the performance year would be included in the APM Entity group for purposes of the APM scoring standard for that performance year.

    Comment: A commenter requested clarification on whether a MIPS eligible clinician who participates in a MIPS APM for part of the year but leaves prior to the end of the performance period is allowed to submit a partial year of MIPS data for the time they were not in the MIPS APM.

    Response: As discussed in section II.F.5. of this final rule with comment period, we are adopting a modified version of the proposed policy for defining the APM Entity group, which will be applicable to both QP determinations and the APM scoring standard. Under the final policy, if a MIPS eligible clinician is on the APM Participation List on at least one of the APM participation assessment (Participation List “snapshot”) dates, the MIPS eligible clinician will be included in the APM Entity group for purposes of the APM scoring standard for the applicable performance year. If the MIPS eligible clinician is not on the APM Entity's Participation List on at least one of the snapshot dates (March 31, June 30, or August 31), then the MIPS eligible clinician will need to submit data to MIPS using the MIPS individual or group reporting option and adhere to all generally applicable MIPS data submission requirements to avoid a negative payment adjustment. Therefore, if the applicable data submission requirements include full-year reporting, the MIPS individual or group would need to report for the full year.
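
    The snapshot logic finalized above can be summarized in the following minimal sketch (in Python; not part of this final rule). The snapshot dates are those finalized in this rule; the interval representation of a clinician's time on a Participation List is ours, and the year shown is illustrative.

        from datetime import date

        # Finalized snapshot dates, shown here for an illustrative 2017 performance year.
        SNAPSHOT_DATES = [date(2017, 3, 31), date(2017, 6, 30), date(2017, 8, 31)]

        def in_apm_entity_group(participation_intervals, snapshots=SNAPSHOT_DATES):
            """Return True if the clinician appears on the Participation List on at
            least one snapshot date. participation_intervals is a list of
            (start_date, end_date) spans during which the clinician was on the list."""
            return any(start <= snap <= end
                       for start, end in participation_intervals
                       for snap in snapshots)

        # Example from the preamble: on the list from January 1 through April 25
        # captures the March 31 snapshot, so the clinician is in the APM Entity group.
        print(in_apm_entity_group([(date(2017, 1, 1), date(2017, 4, 25))]))  # True
        # Leaving before March 31 without rejoining misses all three snapshots.
        print(in_apm_entity_group([(date(2017, 1, 1), date(2017, 3, 15))]))  # False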

    Comment: A commenter recommended that CMS: (1) Allow ACOs to report quality data and other information for MIPS on behalf of participating clinicians who join an ACO mid-performance year but are not included on the ACO Participation List until the following year, and (2) hold harmless from negative MIPS payment adjustments those clinicians who join the ACO mid-performance year but are not included on the ACO Participation List until the following year. Another commenter requested that MIPS APM participants who leave prior to the end of the performance period be exempt from MIPS reporting because this may hinder employment mobility. Some commenters suggested CMS indemnify clinicians who joined an ACO mid-year from any negative MIPS payment adjustments because the commenters believe these clinicians should not be penalized for the hard work they put into the APM during the year solely because they joined the APM Entity after the start of the performance year.

    Response: Each APM has specific rules as to when participants can be added or removed from Participation Lists. If the MIPS eligible clinician is on the MIPS APM Participation List on at least one of the three snapshot dates (March 31, June 30, or August 31), then the MIPS eligible clinician will be included in the APM Entity group and scored according to the APM scoring standard for purposes of MIPS for that performance year. Once an eligible clinician is determined to be part of the APM Entity group in a MIPS APM at one of the snapshot dates, the eligible clinician will be part of the group for purposes of MIPS and the APM scoring standard for that performance period even if they leave the APM Entity at a later date.

    Comment: A commenter requested clarification about whether the APM Entities will submit new Participation Lists for the purpose of MIPS or if CMS will use Participation Lists submitted for the MIPS APM. One commenter indicated it may be easier if the APM Entity provides CMS with the list of MIPS APM participants. Another commenter suggested that instead of using a Participation List CMS should design other approaches to discern which eligible clinicians are in an APM Entity.

    Response: We will use the Participation Lists that the APM Entity provides to us in accordance with the particular MIPS APM's rules. Each APM has particular rules for how the Participation Lists may be updated during a performance year to reflect the APM Entities and their participating eligible clinicians, as identified by their TIN/NPI combinations. We will maintain these Participation Lists for each APM in a dedicated database, and we will use the same Participation Lists for operational purposes within the APM, for QP determinations, and to determine which MIPS eligible clinicians are in the APM Entity group for purposes of the APM scoring standard. Therefore, APM Entities such as ACOs would not be required to submit any additional Participation Lists for purposes of the Quality Payment Program.

    Comment: A commenter requested CMS provide clear guidance as to how each eligible clinician would be scored if they are a QP in a MIPS APM so they can make informed decisions regarding APM participation.

    Response: An eligible clinician who becomes a QP is exempt from MIPS reporting requirements and the payment adjustment for the applicable payment year. For example, if the eligible clinician is determined to be a QP for the 2019 payment year based on 2017 performance, then the clinician is exempt from a MIPS payment adjustment in 2019 and does not need to report data to MIPS for the 2017 performance period.

    We are finalizing the use of the proposed APM participant identifier to define the APM Entity group that is participating in a MIPS APM. The APM Participation List information will be stored in a database so that, among other uses, we can identify and include the appropriate MIPS eligible clinicians in an APM Entity group to which the APM scoring standard applies. We are revising our proposal to use December 31 as the date on which an eligible clinician must appear on the Participation List to be included in the APM Entity group for a MIPS APM. Instead of identifying MIPS eligible clinicians participating in a MIPS APM at a single point in time on December 31 of the performance year, we will review the MIPS APM Participation Lists on March 31, June 30, and August 31. All eligible clinicians who appear on an APM Entity's list for a MIPS APM on at least one of those three dates will be included in the APM Entity group for purposes of the APM scoring standard for the year. We describe the determination of the APM Entity group in full detail in section II.F.5. of this final rule with comment period.

    (5) APM Entity Group Scoring for the MIPS Performance Categories

    As mentioned previously, section 1848(q)(3)(A) of the Act requires the Secretary to establish performance standards for the measures and activities under the following performance categories: (1) Quality; (2) cost; (3) improvement activities; and (4) advancing care information. We proposed at § 414.1370 to calculate one final score that is applied to the billing TIN/NPI combination of each MIPS eligible clinician in the APM Entity group. Therefore, each APM Entity group (for example, the MIPS eligible clinicians in a Shared Savings Program ACO or an Oncology Care Model practice) would receive a score for each of the four performance categories according to the proposals described in the proposed rule, and we would calculate one final score for the APM Entity group. The APM Entity group score would be applied to each MIPS eligible clinician in the group, and subsequently used to develop the MIPS payment adjustment that is applicable to each MIPS eligible clinician in the group. Thus, the final score for the APM Entity group and the participating MIPS eligible clinician score are the same. For example, in the Shared Savings Program, the MIPS eligible clinicians in each ACO would be an APM Entity group. That group would receive a single final score that would be applied to each of its participating MIPS eligible clinicians. Similarly, in the OCM, the MIPS eligible clinicians identified on an APM Entity's Participation List would comprise an APM Entity group. That group would receive a single final score that would be applied to each of the MIPS eligible clinicians in the group. We note that this APM Entity group final score is not used to evaluate eligible clinicians or the APM Entity for purposes of incentives within the APM, shared savings payments, or other potential payments under the APM, and we currently do not foresee APMs using the final score for purposes of evaluation within the APM. Rather, the APM Entity group final score would be used only for the purposes of the APM scoring standard under MIPS. It should be noted that although we proposed that the APM scoring standard would only apply to participants in MIPS APMs, MIPS eligible clinicians that participate in an APM (including but not limited to a MIPS APM) and submit either individual or group level data to MIPS earn a minimum score of 50 percent of the highest potential improvement activities performance category score as long as such MIPS eligible clinicians are on a list of participants for an APM and are identifiable by the APM participant identifier.

    We explained in the proposed rule that we want to avoid situations in which different MIPS eligible clinicians in the same APM Entity group receive different MIPS scores. APM Entities have a goal of collective success under the terms of the APM, so having a variety of differing MIPS adjustments for eligible clinicians within that collective unit would undermine the intent behind the APM to test a departure from a purely FFS system based on independent clinician activity.

    We proposed, for the first MIPS performance period, a specific scoring and reporting approach for the MIPS eligible clinicians participating in MIPS APMs, which would include the Shared Savings Program, the Next Generation ACO Model, and other APMs that meet the proposed criteria for a MIPS APM. In the proposed rule, we described the APM Entity data submission requirements and proposed a scoring approach for each of the MIPS performance categories for specific MIPS APMs (the Shared Savings Program, Next Generation ACO Model, and all other MIPS APMs).

    The following is a summary of the comments we received regarding our proposal to calculate one final score per APM Entity group in a MIPS APM, and to apply that final score to each MIPS eligible clinician (identified by the billing TIN/NPI combination) in the APM Entity group and our proposal to give one-half of the maximum improvement activities score to any MIPS eligible clinicians who are on a list of participants and identified by the APM participant identifier, regardless of whether they participate in an Advanced APM, MIPS APM, or other APM.

    Comment: A number of commenters supported our proposal. Another commenter was concerned that in a group, poor performance by some eligible clinicians may affect the final score for other eligible clinicians who perform better. A commenter suggested that CMS allow APM participants to receive the MIPS score that is the higher of the APM Entity group score and the group TIN score.

    Response: As previously discussed, we are finalizing MIPS APM scoring at the APM Entity level, and the final score will be applied to each TIN/NPI combination in the APM Entity group. In any group reporting structure, the resulting final score reflects the collective performance of the group. Unless all APM Entity group members score exactly equally, some will receive higher or lower final scores than they would have achieved individually. We believe that, although some group members' lower final scores may offset the final score for higher performers in the APM Entity, the APM Entity level score appropriately reflects the aggregate performance of the eligible clinicians in the APM Entity. APMs are premised on a group of MIPS eligible clinicians working together to collectively achieve the goals of the APM, and providing different MIPS payment adjustments within an APM Entity is not consistent with those goals.

    Under specific circumstances, described below, in which a Shared Savings Program ACO fails to report quality under the Shared Savings Program requirements, participant TINs of such ACOs would be considered the APM Entity groups for purposes of the APM scoring standard. Even under this exception, those TIN groups would still be scored as a cohesive unit, with no individual final score variation within the TIN.

    Comment: A commenter supported allowing participants in other APMs, such as the Accountable Health Communities Model, to receive improvement activities credit. A few commenters requested that CMS clarify how eligible clinicians and groups participating in APMs that are not MIPS APMs would receive credit for APM participation in the improvement activities category.

    Response: MIPS eligible clinicians that participate in an APM that is not a MIPS APM will need to be identified by their APM participant identifier on a CMS-maintained list during the MIPS performance year in order to receive one-half of the maximum improvement activities score for APM participation. This list may be a Participation List, an Affiliated Practitioner List, or another CMS-maintained list, as applicable. Such CMS-maintained lists define APM participation; therefore, MIPS eligible clinicians are not considered to be participating in an APM unless included on a CMS-maintained list. We will notify APM Entities in advance of the first day of the performance period if the APM utilizes such a list. If the specific APM does utilize such a list, then the MIPS eligible clinicians will be eligible for the improvement activities credit.

    Comment: A commenter requested that CMS clarify in the final rule with comment period that a rheumatologist participating in other APMs not listed as an Advanced or MIPS APM in this rule would receive one-half of the maximum improvement activities score for such participation.

    Response: As stated above, an eligible clinician that participates in an APM, even one that is not an Advanced APM or MIPS APM, would still receive one-half the maximum score for improvement activities through APM participation. CMS defines participation in APMs by presence on a CMS-maintained list associated with an APM. Therefore, we will use those lists to validate the APM participation improvement activities credit.

    Comment: A number of commenters supported scoring MIPS eligible clinicians at the APM Entity level, and other commenters supported scoring MIPS eligible clinicians at the TIN level. A commenter stated that evaluating APM Entities, such as ACOs, at the APM Entity level reinforces the APM Entity purpose and avoids fractures within the APM Entity. Another commenter recommended CMS have all ACOs scored at the APM Entity level for the advancing care information performance category to recognize that the health information technology work in most APMs is best measured as a whole. A few commenters requested that the APM participants have a choice as to being scored at the APM Entity level or participant TIN level. A commenter further suggested that scoring at the APM Entity level instead of the participant TIN level overstates the relationship between these clinicians. One commenter stated that the policies in which the primary TIN for an ACO reports the primary-care focused CMS Web Interface measures result in a double standard whereby specialists in ACOs are not held to the same individual level of accountability as those in small group or solo practices where reporting is done at the individual clinician level.

    Response: We believe that APM Entities should be scored at the APM Entity level because the APM Entity is a group of eligible clinicians focused on achieving the collective goals of the APM, which include shared responsibility for cost and quality. That stated, we specifically recognize that there may be rare instances in which an ACO in the Shared Savings Program may fail to report quality as required by the Shared Savings Program, which would adversely impact the MIPS final score of all MIPS eligible clinicians billing under ACO participant TINs. Accordingly, in the event that a Shared Savings Program ACO does not report quality measures as required by the Shared Savings Program, scoring under the APM scoring standard would be calculated at the ACO participant TIN level for MIPS eligible clinicians in that ACO, and each of the ACO participant TINs would receive its own TIN-level final score instead of an APM Entity-level final score. We note, however, that our final policy would not cancel or mitigate any of the negative consequences associated with non-reporting on quality as required under the Shared Savings Program, including ineligibility for shared savings payments and/or potential termination of the ACO from the program.

    We are finalizing our proposal to calculate one final score at the APM Entity level that will be applied to the billing TIN/NPI combination of each MIPS eligible clinician in the APM Entity group. We are also finalizing our policy to give one-half of the maximum improvement activities score to eligible clinicians who are APM participants, with the clarification that we would extend such improvement activities scoring credit to any MIPS eligible clinicians identified by an APM participant identifier on a Participation List, an Affiliated Practitioners List, or other CMS-maintained list of participants at any time during the MIPS performance period.

    In the event that a Shared Savings Program ACO does not report quality measures as required under the Shared Savings Program regulations, then scoring on all MIPS performance categories will be at the ACO participant TIN level, and the resulting TIN-level final score will be applied to each of its constituent TIN/NPI combinations. For purposes of both the Shared Savings Program quality performance requirement and the APM scoring standard, any “partial” reporting of quality measures through the CMS Web Interface that does not satisfy the quality reporting requirements under the Shared Savings Program will be considered a failure to report. We note that in this scenario, each ACO participant TIN would need to report quality data to MIPS according to MIPS group reporting requirements in order to avoid a score of zero for the quality performance category.

    We believe that this exception for the Shared Savings Program recognizes the recommendations of several commenters that the APM scoring standard should apply at the TIN level and concerns that in some cases ACOs are not representative of the potentially widely varying MIPS performance across ACO participant TINs. Although we maintain that the APM Entity-level scoring is generally appropriate to reflect the collective goals and responsibilities of the group, we believe that ACOs that fail to report quality as required under the Shared Savings Program do not necessarily represent the quality performance of their constituent TINs. Therefore, we believe it is appropriate in such cases to allow ACO participant TINs to avoid a score of zero in the quality performance category and to take responsibility for their own MIPS reporting and scoring independent of the ACO and other TINs in the ACO. Further, this policy is generally consistent with similar policies that have been proposed for ACO participant TINs under PQRS and the Value Modifier program (81 FR 46408 through 46409 and 46426 through 46427).

    Additionally, we recognize that there may be instances when an APM Entity's participation in the APM is terminated during the MIPS performance period. As we state in section II.F.5. of this final rule with comment period, we will not make the first assessment to determine whether a MIPS eligible clinician is on an APM Entity's Participation List until March 31 of the performance period. Therefore, if an APM Entity group terminates its participation in the APM prior to March 31, the MIPS eligible clinicians would not be considered part of an APM Entity group for purposes of the APM scoring standard.

    If an APM Entity's participation in the APM is terminated on or after March 31 of a performance period, the MIPS eligible clinicians in the APM Entity group would still be considered an APM Entity group in a MIPS APM for the year, and would report and be scored under the APM scoring standard.

    (6) Shared Savings Program—Quality Performance Category Scoring Under the APM Scoring Standard

    We proposed that, beginning with the first MIPS performance period, Shared Savings Program ACOs would need to submit their quality measures to CMS only once, using the CMS Web Interface through the same process they use to report to the Shared Savings Program, in order to report quality measures to MIPS. These data would be submitted once but used for both the Shared Savings Program and MIPS. Shared Savings Program ACOs have used the CMS Web Interface for submitting their quality measures since the program's inception, making this a familiar data submission process. The Shared Savings Program quality measure data reported to the CMS Web Interface would be used by CMS to calculate the MIPS quality performance category score at the APM Entity group level. The Shared Savings Program quality performance data that is not submitted to the CMS Web Interface, for example the CAHPS survey and claims-based measures, would not be included in the MIPS APM quality performance category score. The MIPS quality performance category requirements and performance benchmarks for quality measures submitted via the CMS Web Interface would be used to determine the MIPS quality performance category score at the ACO level for the APM Entity group. We stated that we believe this would reduce the reporting burden for Shared Savings Program MIPS eligible clinicians by requiring quality measure data to be submitted only once and used for both programs.

    In the proposed rule, we explained that we believe that no waivers are necessary to adopt this approach because the quality measures submitted via the CMS Web Interface under the Shared Savings Program are also MIPS quality measures and would be scored under MIPS performance standards. In the event that Shared Savings Program quality measures depart from MIPS measures in the future, we would address such changes including whether further waivers are necessary at such a time in future rulemaking.

    The following is a summary of the comments we received regarding our proposal to have Shared Savings Program ACOs report quality measures to MIPS using the CMS Web Interface as they normally would under Shared Savings Program rules and our proposal to calculate the MIPS quality performance category score at the APM Entity group level based on the data reported by the ACO to the CMS Web Interface and using MIPS performance benchmarks.

    Comment: A commenter wanted to know which set of APM scoring standard rules would apply to CPC+ practices that participate in both CPC+ and the Shared Savings Program. The commenter noted that if the reporting and scoring under the APM scoring standard for other MIPS APMs applies to the CPC+ practice, the quality performance category would be reweighted to zero. The commenter recommended that MIPS eligible clinicians who participate in both the CPC+ and the Shared Savings Program use the Shared Savings Program rules for reporting and scoring under the APM scoring standard.

    Response: In May 2016, CMS announced that practices may participate in both a CPC+ model and in an ACO participating in the Shared Savings Program. More information about dual participation may be found in the CPC+ FAQs or RFA at https://innovation.cms.gov/Files/x/cpcplus-practiceapplicationfaq.pdf or https://innovation.cms.gov/Files/x/cpcplus-rfa.pdf. For purposes of the APM scoring standard, MIPS eligible clinicians in CPC+ practices that are also participating in a Shared Savings Program ACO will be considered part of a Shared Savings Program ACO. CPC+ practices that are part of a Shared Savings Program ACO will report quality to CPC+ as required by the CPC+ model but will not receive the CPC+ performance-based incentive payment. As part of a Shared Savings Program ACO, CPC+ practices, along with the other ACO participants, will be subject to the payment incentives for cost and quality under the Shared Savings Program. Because CPC+ practices that participate in both the CPC+ model and the Shared Savings Program are not eligible to receive the performance-based incentive payment under the CPC+ model, responsibility for cost and quality is assessed more comprehensively under the Shared Savings Program. Therefore, we believe that the Shared Savings Program participation of these “dual participants” should determine the manner in which we assess them under the APM scoring standard.

    Comment: A commenter agreed with the proposed approach of not including CAHPS or other quality measures not submitted through the CMS Web Interface in the MIPS APM quality performance category score for ACOs in the Shared Savings Program. Alternatively, another commenter recommended that CAHPS measures be included in Shared Savings Program ACO quality performance category scores.

    Response: Because CAHPS survey responses are not submitted to the CMS Web Interface and may not be available in time for inclusion in the MIPS quality performance category scoring, we are not including these measures in the MIPS quality performance category score for the ACOs in the Shared Savings Program and the Next Generation ACO Model.

    Comment: One commenter requested clarification as to which quality measures would be included in the APM scoring standard for Shared Savings Program ACOs, specifically whether the MIPS population health measures would be included.

    Response: The MIPS population health measures will not be included in the quality performance category score for eligible clinicians participating in the Shared Savings Program, the Next Generation ACO Model or other MIPS APMs under the APM scoring standard.

    Comment: A commenter requested that CMS ensure that all the MIPS eligible clinicians billing under the TIN of an ACO participant in a Shared Savings Program ACO receive the APM Entity group final score even though most ACO quality measures are for primary care physicians.

    Response: All eligible clinicians that bill through the TIN of a Shared Savings Program ACO participant and are included on the Participation List on at least one of the three snapshot dates will receive the APM Entity group final score.

    Comment: A commenter requested that all ACOs be exempt from the MIPS quality performance category because they are already being assessed for quality under the APM and also requested that Shared Savings Program Track 1 participants have the option to be exempt from MIPS.

    Response: All MIPS eligible clinicians participating in the Shared Savings Program are subject to MIPS unless they are determined to be a QP or a Partial QP whose APM Entity elects not to report under MIPS. This includes MIPS eligible clinicians who are not participating in Advanced APMs. Under the APM scoring standard, MIPS eligible clinicians participating in Shared Savings Program ACOs do not have to do any additional reporting to satisfy MIPS quality performance category reporting requirements.

    We are finalizing our proposal that a Shared Savings Program ACO's quality data reported to the CMS Web Interface as required by Shared Savings Program rules will also be used for purposes of scoring the MIPS quality performance category using MIPS performance benchmarks. We note that for purposes of the Shared Savings Program quality reporting requirement and the APM scoring standard, any “partial” reporting of quality measures through the CMS Web Interface that does not satisfy the requirements under the Shared Savings Program will be considered a failure to report, triggering the exception finalized above in which we will separately assess each ACO participant TIN under the APM scoring standard.

    (7) Shared Savings Program—Cost Performance Category Scoring Under the APM Scoring Standard

    We proposed that for the first MIPS performance period, we would not assess MIPS eligible clinicians participating in the Shared Savings Program (the MIPS APM) under the cost performance category. We proposed this approach because: (1) Eligible clinicians participating in the Shared Savings Program are already subject to cost and utilization performance assessments under the APM; (2) the Shared Savings Program measures cost in terms of an objective, absolute total cost of care expenditure benchmark for a population of attributed beneficiaries, and participating ACOs may share savings and/or losses based on that standard, whereas the MIPS cost measures are relative measures such that clinicians are graded relative to their peers, and therefore different than assessing total cost of care for a population of attributed beneficiaries; and (3) the beneficiary attribution methodologies for measuring cost under the Shared Savings Program and MIPS differ, leading to an unpredictable degree of overlap (for eligible clinicians and for us) between the sets of beneficiaries for which eligible clinicians would be responsible that would vary based on unique APM Entity characteristics such as which and how many TINs comprise an ACO. We believe that with an APM Entity's finite resources for engaging in efforts to improve quality and lower costs for a specified beneficiary population, the population identified through an APM must take priority to ensure that the goals and program evaluation associated with the APM are as clear and free of confounding factors as possible. The potential for different, conflicting results across Shared Savings Program and MIPS assessments—due to the differences in attribution, the inclusion in MIPS of episode-based measures that do not reflect the total cost of care, and the objective versus relative assessment factors listed above—creates uncertainty for MIPS eligible clinicians who are attempting to strategically transform their respective practices and succeed under the terms of the Shared Savings Program.

    For example, Shared Savings Program ACOs are held accountable for expenditure benchmarks that reflect the total Medicare Parts A and B spending for their assigned beneficiaries, whereas many of the proposed MIPS cost measures focus on spending for particular episodes of care or clinical conditions. We consider it a programmatic necessity that the Shared Savings Program has the ability to structure its own measurement and payment for performance on total cost of care independent from other incentive programs such as the cost performance category under MIPS. Thus, we proposed to reduce the MIPS cost performance category weight to zero for all MIPS eligible clinicians in APM Entities participating in the Shared Savings Program.

    Accordingly, under section 1899(f) of the Act, we proposed to waive—for MIPS eligible clinicians participating in the Shared Savings Program—the requirement under section 1848(q)(5)(E)(i)(II) of the Act that specifies the scoring weight for the cost performance category. With the proposed reduction of the cost performance category weight to zero, we believed it would be unnecessary to specify and use cost measures in determining the MIPS final score for these MIPS eligible clinicians. Therefore, under section 1899(f) of the Act, we proposed to waive—for MIPS eligible clinicians participating in the Shared Savings Program—the requirements under sections 1848(q)(2)(B)(ii) and 1848(q)(2)(A)(ii) of the Act to specify and use, respectively, cost measures in calculating the MIPS final score for such MIPS eligible clinicians.

    Given the proposal to waive requirements under section 1848(q)(5)(E)(i)(II) of the Act in order to reduce the weight of the cost performance category to zero, we also needed to specify how that weight would be redistributed among the remaining performance categories in order to maintain a total weight of 100 percent. We proposed to redistribute the cost performance category weight to both the improvement activities and advancing care information performance categories as specified in Table 11 of this final rule with comment period. The MIPS cost performance category was proposed to have a weight of 10 percent for the first performance period. Because the MIPS quality performance category bears a relatively higher weight than the other three MIPS performance categories, and because, in accordance with section 1848(q)(5)(E)(i)(I) and (II) of the Act, the weight for that category will be reduced from 50 to 30 percent as of the 2021 MIPS payment period, we proposed to evenly redistribute the 10 percent cost performance category weight to the improvement activities and advancing care information performance categories so that the redistribution does not change the relative weight of the quality performance category. Because the MIPS quality performance category weight is required under the statute to be reduced to 30 percent after the first 2 years of MIPS, we believe that increasing the quality performance category weight would be incongruous in light of the eventual balance of the weights set forth in the statute. The redistributed cost performance category weight of 10 percent would result in a 5 percentage point increase (from 15 to 20 percent) for the improvement activities performance category and a 5 percentage point increase (from 25 to 30 percent) for the advancing care information performance category. We invited comments on the proposed weights and specifically on whether we should increase the MIPS quality performance category weight.
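
    The redistribution arithmetic described above can be illustrated with the following minimal sketch (in Python; not part of this final rule). The starting weights are those stated in this section; the 10 percent cost weight is split evenly between improvement activities and advancing care information, leaving the quality weight unchanged.

        # Proposed first-year weights, in percent, before redistribution.
        weights = {"quality": 50, "cost": 10,
                   "improvement_activities": 15, "advancing_care_information": 25}

        # Reduce the cost weight to zero and split it evenly between the
        # improvement activities and advancing care information categories.
        redistributed = weights["cost"]
        weights["cost"] = 0
        weights["improvement_activities"] += redistributed // 2      # 15 -> 20
        weights["advancing_care_information"] += redistributed // 2  # 25 -> 30

        assert sum(weights.values()) == 100
        # Result: quality 50, cost 0, improvement activities 20,
        # advancing care information 30 (as summarized in Table 11).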

    In the proposed rule we explained that as the MIPS cost performance category evolves over time, there might be greater potential for alignment and less potential duplication or conflict with MIPS cost measurement for MIPS eligible clinicians participating in APMs such as the Shared Savings Program. We will continue to monitor and consider how we might incorporate an assessment in the MIPS cost performance category into the APM scoring standard for MIPS eligible clinicians participating in the Shared Savings Program. We also understand that reducing the cost performance category weight to zero and redistributing the weight to the improvement activities and advancing care information performance categories could, to the extent that improvement activities and advancing care information scores are higher than the scores these MIPS eligible clinicians would have received under the cost performance category, result in higher final scores on average for MIPS eligible clinicians participating in the Shared Savings Program. We solicited comment on the possibility of assigning a neutral score to the Shared Savings Program APM Entity groups for the cost performance category to moderate MIPS final scores for APM Entities participating in the Shared Savings Program. We also generally solicited comment on our proposed policy, and on whether and how we should incorporate the cost performance category into the APM scoring standard under MIPS for eligible clinicians participating in the Shared Savings Program for future years.

    The following is a summary of the comments we received regarding our proposal to reduce the MIPS cost performance category weight to zero for APM Entity groups participating in the Shared Savings Program.

    Comment: Several commenters supported our proposal not to assess cost for MIPS APMs and our efforts to reduce duplicative measurement. One commenter suggested we give a full score for the cost performance category instead of redistributing the 10 percent weight to other MIPS performance categories. A few commenters recommended the 10 percent weight for the cost performance category be redistributed entirely to the improvement activities performance category.

    One commenter recommended that MIPS eligible clinicians in Shared Savings Program ACOs receive extra credit in the cost performance category if their ACO achieved expenditures below its benchmark. The commenter suggested that CMS consider having a sliding scale of cost category points awarded to MIPS eligible clinicians that participate in Shared Savings Program ACOs with benchmarks of less than $10,000 per beneficiary per year. One commenter proposed that CMS reward Shared Savings Program ACOs that score at or above the average on cost measures, and hold harmless Shared Savings Program ACOs scoring below average. One commenter was opposed to reducing the cost performance category weight to zero.

    Response: We appreciate commenters' widespread support for this proposal to reduce the weight of the MIPS cost performance category to zero under the APM scoring standard for eligible clinicians participating in the Shared Savings Program. While we will continue to monitor and consider how we might in future years incorporate the MIPS cost performance category into the APM scoring standard for eligible clinicians participating in the Shared Savings Program, we believe that assessment in this category would conflict with the assessment of the financial performance of ACOs participating in the Shared Savings Program at this time. Because ACOs in the Shared Savings Program are assessed through particular attribution and benchmarking methodologies for purposes of earning shared savings payments, we believe that adding additional and separate MIPS incentives around cost would be redundant, potentially confusing, and could undermine the incentives built into the Shared Savings Program.

    We are finalizing our proposal to reduce the cost performance category weight to zero percent for APM Entity groups in the Shared Savings Program and to evenly redistribute the 10 percent cost performance category weight to the improvement activities and advancing care information performance categories. We note that this policy may seem unnecessary given that the MIPS policy for the initial performance year reduces the cost performance category weight to zero for all MIPS eligible clinicians. However, the zero weight for the cost performance category for APM Entity groups in the Shared Savings Program will remain in place for subsequent years unless we modify it through future notice and comment rulemaking, whereas the cost performance category weight under the generally applicable MIPS scoring standard is zero only for the first performance period and will increase to 10 percent in the second performance period and to 30 percent in the third performance period. We believe that setting this foundation from the outset of the Quality Payment Program will contribute to consistency and minimize uncertainty for MIPS APM participants, at least until such time as we identify a means to consider performance in the MIPS cost performance category that is congruent with cost evaluation under the Shared Savings Program.

    We further note that we proposed to use our authority under section 1899(f) of the Act to waive the requirement under section 1848(q)(5)(E)(i)(II) of the Act to specify the scoring weight for the cost performance category because waiving this requirement was necessary to ensure that the Shared Savings Program retains the ability to structure its own measurement and payment for performance on total cost of care independent of other incentive programs. However, we believe we could also use our authority under section 1899(b)(3)(D) of the Act to accomplish this result. Section 1899(b)(3)(D) of the Act allows us to incorporate reporting requirements under section 1848 of the Act into the reporting requirements for the Shared Savings Program, as we determine appropriate, and to use alternative criteria than would otherwise apply. Thus, we believe that section 1899(b)(3)(D) of the Act also provides authority to reduce the weight of the cost performance category to zero percent for eligible clinicians participating in Shared Savings Program ACOs and to redistribute the 10 percent weight to the improvement activities and advancing care information performance categories.

    (8) Shared Savings Program—Improvement Activities and Advancing Care Information Performance Category Scoring Under the APM Scoring Standard

    We proposed that MIPS eligible clinicians participating in the Shared Savings Program would submit data for the MIPS improvement activities and advancing care information performance categories through their respective ACO participant billing TINs independent of the Shared Savings Program ACO. Under section 1848(q)(5)(C)(ii) of the Act, all ACO participant group billing TINs would receive a minimum of one half of the highest possible score for the improvement activities performance category. Additionally, under section 1848(q)(5)(C)(i) of the Act, any ACO participant TIN that is determined to be a patient-centered medical home or comparable specialty practice will receive the highest potential score for the improvement activities performance category. The improvement activities and advancing care information scores from all the ACO participant billing TINs would be averaged to produce a weighted mean score at the MIPS APM Entity group level. We proposed to use a weighted mean in computing the overall improvement activities and advancing care information performance category scores to account for differences in the size of each TIN and to allow each TIN to contribute to the overall score based on its size. Then all MIPS eligible clinicians in the APM Entity group, as identified by their APM participant identifiers, would receive that APM Entity score. The weight used for each ACO participant billing TIN would be the number of MIPS eligible clinicians in that TIN. Because all providers and suppliers that bill through the TIN of an ACO participant are required to agree to participate in the ACO, all MIPS eligible clinicians that bill through the TIN of an ACO participant are considered to be participating in the ACO. Any Shared Savings Program ACO participant billing TIN that does not submit data for the MIPS improvement activities and/or advancing care information performance categories would contribute a score of zero for each performance category for which it does not report, and that score would be incorporated into the resulting weighted average score for the Shared Savings Program ACO. All MIPS eligible clinicians in the ACO (the APM Entity group) would receive the same score that is calculated at the ACO level (the APM Entity level).
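
    The weighted mean described above can be expressed with the following minimal sketch (in Python; not part of this final rule, with hypothetical TIN-level scores and clinician counts). Each ACO participant TIN contributes its performance category score weighted by the number of MIPS eligible clinicians billing through that TIN, and a TIN that does not report contributes a score of zero.

        # Hypothetical ACO participant TINs: (category score, or None if the TIN did
        # not report, and the number of MIPS eligible clinicians billing through it).
        tin_scores = [(80.0, 12), (None, 3), (95.0, 25)]

        def apm_entity_category_score(tin_scores):
            """Weighted mean of TIN-level scores, weighted by clinician count;
            a TIN that did not report contributes a score of zero."""
            total_clinicians = sum(count for _, count in tin_scores)
            weighted_sum = sum((score or 0.0) * count for score, count in tin_scores)
            return weighted_sum / total_clinicians

        entity_score = apm_entity_category_score(tin_scores)
        # Every MIPS eligible clinician in the APM Entity group receives entity_score.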

    In the proposed rule, we recognized that Shared Savings Program eligible clinicians participate as a complete TIN because all of the eligible clinicians that have reassigned their Medicare billing rights to the TIN of an ACO participant must agree to participate in the Shared Savings Program. This is different from other APMs, which may include APM Entity groups with eligible clinicians who share a billing TIN with other eligible clinicians who do not participate in the APM Entity. We solicited comment on a possible alternative approach in which improvement activities and advancing care information performance category scores would be applied to all MIPS eligible clinicians at the individual billing TIN level, as opposed to aggregated to the ACO level, for Shared Savings Program participants. We also indicated that if MIPS APM scores were applied to each TIN in an ACO at the TIN level, we would also likely need to permit those TINs to make the Partial QP election, as discussed elsewhere in this final rule with comment period, at the TIN level. We proposed that under the APM scoring standard, the ACO-level APM Entity group score would be applied to each participating MIPS eligible clinician to determine the MIPS payment adjustment. We explained that we believe calculating the score at the APM Entity level mirrors the way APM participants are assessed for their shared savings and other incentive payments in the APM, but we understand there may be reasons why a group TIN, particularly one that believes it would achieve a higher score than the weighted average APM Entity level score, would prefer to be scored in the improvement activities and advancing care information performance categories at the level of the group billing TIN rather than the ACO (APM Entity level).

    We solicited comment as to whether Shared Savings Program ACO eligible clinicians should be scored at the ACO level or the group billing TIN level for the improvement activities and advancing care information performance categories.

    The following is a summary of the comments we received regarding our proposals for how to score and weight the improvement activities and advancing care information performance categories for the Shared Savings Program under the APM scoring standard and on whether to score these two MIPS performance categories at the APM Entity or the ACO participant TIN level.

    Comment: Several commenters suggested that all APM Entities should receive full credit for improvement activities because they are already performing these activities as a result of being a participant in an APM. A few commenters stated that all APM participants should get at least 80 percent of the maximum score for improvement activities. Some commenters suggested that ACOs are involved in many of the improvement activities on a daily basis in order to meet the stringent requirements of the Shared Savings Program and the Next Generation ACO Model and requested that CMS provide a simple and straightforward way for ACOs to attest that their eligible clinicians have been involved in improvement activities for at least 90 days in the performance year by being a part of an ACO initiative.

    Response: We agree with the comments that eligible clinicians participating in the Shared Savings Program and other MIPS APMs are actively engaged in improvement activities by virtue of participating in an APM. In an effort to further reduce reporting burden for eligible clinicians in MIPS APMs and to better recognize improvement activities work performed through participation in MIPS APMs, we are modifying our proposal with respect to scoring for the improvement activities performance category under the APM scoring standard. Specifically, for APM Entity groups in the Shared Savings Program, Next Generation ACO Model and other MIPS APMs, we will assign a baseline score for the improvement activities performance category based on the improvement activity requirements under the terms of the particular MIPS APM. CMS will review the MIPS APM requirements as they relate to activities specified under the generally applicable MIPS improvement activities performance category and assign an improvement activities score for each MIPS APM that is applicable to all APM Entity groups participating in the MIPS APM. To develop the improvement activities score assigned to a MIPS APM and applicable to all APM Entity groups in the APM, CMS will compare the requirements of the MIPS APM with the list of improvement activities measures in section II.E.5.f. of this final rule with comment period and score those measures in the same manner that they are otherwise scored for MIPS eligible clinicians according to section II.E.5.f. of this final rule with comment period. Thus, points assigned to an APM Entity group in a MIPS APM under the improvement activities performance category will relate to documented requirements under the terms and conditions of the MIPS APM, such as in a participation agreement or regulation. We will apply this improvement activities score for the MIPS APM to each APM Entity group within the MIPS APM. For example, points assigned in the improvement activities performance category for participation in the Next Generation ACO Model will relate to documented requirements under the terms of the model, as set forth in the model's participation agreement. In the event that a MIPS APM incorporates sufficient improvement activities to receive the maximum score, APM Entity groups or their constituent MIPS eligible clinicians (or TINs) participating in the MIPS APM will not need to submit data for the improvement activities performance category in order to receive that maximum improvement activities score. In the event that a MIPS APM does not incorporate sufficient improvement activities to receive the maximum potential score, APM Entities will have the opportunity to report and add points to the baseline MIPS APM-level score on behalf of all MIPS eligible clinicians in the APM Entity group for additional improvement activities that would apply to the APM Entity level improvement activities performance category score. The improvement activities performance category score we assign to the MIPS APM based on improvement activity requirements under the terms of the APM will be published in advance of the MIPS performance period on the CMS Web site.
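
    The modified improvement activities scoring approach described in this response can be summarized in the following minimal sketch (in Python; not part of this final rule, with hypothetical point values and an illustrative maximum score). CMS assigns a baseline score to the MIPS APM based on the activities required under its terms; if that baseline is below the maximum, the APM Entity may report additional activities to add points, capped at the maximum category score.

        MAX_IA_SCORE = 40  # illustrative maximum improvement activities score

        def improvement_activities_score(apm_baseline_points, reported_activity_points=()):
            """Baseline points assigned to the MIPS APM plus any additional activities
            reported by the APM Entity, capped at the maximum category score."""
            total = apm_baseline_points + sum(reported_activity_points)
            return min(total, MAX_IA_SCORE)

        # If the MIPS APM already earns the maximum, no additional reporting is needed.
        print(improvement_activities_score(40))           # 40
        # Otherwise the APM Entity may report activities to add points to the baseline.
        print(improvement_activities_score(30, (5, 10)))  # 40 (45 points capped at the maximum)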

    Comment: A commenter generally agreed with the proposed reweighting of performance categories for MIPS APMs under the APM scoring standard but recommended the 10 percent for the cost performance category be reallocated to improvement activities instead of both improvement activities and advancing care information. Another commenter also agreed with the scoring and supported the weight for the improvement activities performance category. One commenter recommended that MIPS APM participants have the option of having the APM Entity report improvement activities in order to achieve group scores higher than the initial 50 percent. A few commenters requested that the MIPS APMs only be scored on the quality and improvement activities performance categories.

    Response: After considering comments, we believe the reweighting of the improvement activities and the advancing care information performance categories should be finalized as proposed. We believe the proposed weights represent an appropriate balance between improvement activities and advancing care information, both of which are important goals of the MIPS program. Moreover, because the quality performance category weight will be reduced over time, we believe that increasing the quality performance category weight in the first performance period would be incongruent with the balance of the weights set forth in the statute.

    For the Shared Savings Program we are finalizing the weights assigned to each of the MIPS performance categories as proposed for Shared Savings Program ACOs: Quality 50 percent; cost 0 percent; improvement activities 20 percent; and advancing care information 30 percent for purposes of the APM scoring standard. We are finalizing the proposal that for the advancing care information performance category, ACO participant TINs will report the category to MIPS, and the TIN scores will be aggregated and weighted in order to calculate one APM Entity score for the category. In the event a Shared Savings Program ACO fails to satisfy quality reporting requirements for measures reported through the CMS Web Interface, advancing care information group TIN scores will not be aggregated to the APM Entity level. Instead, each ACO participant TIN will be scored separately based on its TIN-level group reporting for the advancing care information performance category.
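
    For illustration only, the following sketch shows one way the TIN-level aggregation described above could be computed, weighting each ACO participant TIN's advancing care information score by the number of MIPS eligible clinicians in the TIN, as reflected in Table 11 below. All identifiers, scores, and clinician counts are hypothetical.

```python
# Illustration only: roll TIN-level advancing care information (ACI) scores up to
# a single APM Entity score, weighting each TIN score by the number of MIPS
# eligible clinicians in that TIN. All identifiers and values are hypothetical.

def apm_entity_aci_score(tin_scores, tin_clinician_counts):
    """Clinician-count-weighted average of TIN-level ACI scores."""
    total_clinicians = sum(tin_clinician_counts.values())
    weighted_sum = sum(score * tin_clinician_counts[tin]
                       for tin, score in tin_scores.items())
    return weighted_sum / total_clinicians

tin_scores = {"TIN-A": 90.0, "TIN-B": 70.0}   # hypothetical TIN-level ACI scores
tin_counts = {"TIN-A": 30, "TIN-B": 10}       # hypothetical clinician counts per TIN
print(apm_entity_aci_score(tin_scores, tin_counts))  # (90*30 + 70*10) / 40 = 85.0
```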

    We are revising our proposal with respect to the scoring of the improvement activities performance category for the Shared Savings Program. We will assign an improvement activities score for the Shared Savings Program based on the improvement activities required under the Shared Savings Program. We consider all Shared Savings Program tracks together for purposes of assigning an improvement activities performance category score because the tracks all require the same activities of their participants. All APM Entity groups in the Shared Savings Program will receive that baseline improvement activities score. To develop the improvement activities score for the Shared Savings Program, we will compare the requirements of the Shared Savings Program with the list of improvement activities measures in section II.E.5.f. of this final rule with comment period and score those measures in the same manner that they would otherwise be scored for MIPS eligible clinicians according to section II.E.5.f. of this final rule with comment period. We will assign points for improvement activities toward the score for the Shared Savings Program based on documented requirements for improvement activities under the terms of the Shared Savings Program. We will publish the assigned score for the Shared Savings Program on the CMS Web site before the beginning of the MIPS performance period. In the event that the assigned score represents the maximum improvement activities score, APM Entity groups will not need to report additional improvement activities. In the event that the assigned score does not represent the maximum improvement activities score, APM Entities will have the opportunity to report additional improvement activities that would apply to the APM Entity group score. Table 11 summarizes the finalized APM scoring standard rules for the Shared Savings Program.

    Table 11—APM Scoring Standard for the Shared Savings Program—2017 Performance Period for the 2019 MIPS Payment Adjustment

    Performance category: Quality
    APM Entity submission requirement: Shared Savings Program ACOs submit quality measures to the CMS Web Interface on behalf of their participating MIPS eligible clinicians.
    Performance score: The MIPS quality performance category requirements and benchmarks will be used to determine the MIPS quality performance category score at the ACO level.
    Performance category weight: 50 percent.

    Performance category: Cost
    APM Entity submission requirement: MIPS eligible clinicians will not be assessed on cost.
    Performance score: N/A.
    Performance category weight: 0 percent.

    Performance category: Improvement Activities
    APM Entity submission requirement: ACOs only need to report if the CMS-assigned improvement activities score is below the maximum improvement activities score.
    Performance score: CMS will assign the same improvement activities score to each APM Entity group based on the activities required of participants in the Shared Savings Program. The minimum score is one half of the total possible points. If the assigned score does not represent the maximum improvement activities score, ACOs will have the opportunity to report additional improvement activities to add points to the APM Entity group score.
    Performance category weight: 20 percent.

    Performance category: Advancing Care Information
    APM Entity submission requirement: All ACO participant TINs in the ACO submit under this category according to the MIPS group reporting requirements.
    Performance score: All of the ACO participant TIN scores will be aggregated as a weighted average based on the number of MIPS eligible clinicians in each TIN to yield one APM Entity group score.
    Performance category weight: 30 percent.
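
    For illustration only, the following sketch applies the finalized Shared Savings Program weights from Table 11 to a set of hypothetical category-level scores. Treating each category score as a value from 0 to 100 is an assumption made solely for this example.

```python
# Illustration only: combine category-level scores into a single score using the
# finalized Shared Savings Program weights from Table 11 (quality 50%, cost 0%,
# improvement activities 20%, advancing care information 30%). The 0-100 scale
# for category scores is an assumption made for this example.

SSP_WEIGHTS = {
    "quality": 0.50,
    "cost": 0.00,
    "improvement_activities": 0.20,
    "advancing_care_information": 0.30,
}

def weighted_final_score(category_scores, weights):
    """Sum of weight * score over all categories; a missing category counts as 0."""
    return sum(weight * category_scores.get(category, 0.0)
               for category, weight in weights.items())

scores = {"quality": 80.0, "improvement_activities": 100.0,
          "advancing_care_information": 85.0}              # hypothetical scores
print(round(weighted_final_score(scores, SSP_WEIGHTS), 2))  # 0.5*80 + 0.2*100 + 0.3*85 = 85.5
```
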
    (9) Next Generation ACO Model—Quality Performance Category Scoring Under the APM Scoring Standard

    We proposed that beginning with the first MIPS performance period, Next Generation ACOs would only need to submit their quality measures to CMS once using the CMS Web Interface through the same process that they use to report to the Next Generation ACO Model. These data would be submitted once but used for purposes of both the Next Generation ACO Model and MIPS. Next Generation ACO Model ACOs have used the CMS Web Interface for submitting their quality measures since the model's inception and would most likely continue to use the CMS Web Interface as the submission method in future years. The Next Generation ACO Model quality measure data reported to the CMS Web Interface would be used by CMS to calculate the MIPS APM quality performance score. The MIPS quality performance category requirements and performance benchmarks for reporting quality measures via the CMS Web Interface would be used to determine the MIPS quality performance category score at the ACO level for the APM Entity group. The Next Generation ACO Model quality performance data that are not submitted to the CMS Web Interface, for example the CAHPS survey and claims-based measures, would not be included in the APM Entity group quality performance score. The APM Entity group quality performance category score would be calculated using only quality measure data submitted through the CMS Web Interface and scored using the MIPS benchmarks, whereas the quality reporting requirements and performance benchmarks calculated for the Next Generation ACO Model would continue to be used to assess the ACO under the APM-specific requirements. We stated in the proposed rule that we believe this approach would reduce the reporting burden for Next Generation ACO Model participants by requiring quality measure data to be submitted only once and used for both MIPS and the Next Generation ACO Model.

    In the proposed rule, we indicated that we believe that no waivers are necessary here because the quality measures submitted via the CMS Web Interface under the Next Generation ACO Model are MIPS quality measures and would be scored under MIPS performance standards. In the event that Next Generation ACO Model quality measures depart from MIPS measures in the future, we stated that we would address such changes, including whether further waivers are necessary, at such a time in future rulemaking.

    The following is a summary of the comments we received regarding our proposal to have Next Generation ACOs report quality measures to MIPS using the CMS Web Interface as they normally would under Next Generation ACO Model rules and our proposal for CMS to calculate the MIPS quality performance category score at the APM Entity group level based on the data reported to the CMS Web Interface and using the MIPS performance standards.

    Comment: A commenter requested clarification regarding whether the population-based quality measures and CAHPS would be included in the Next Generation ACO quality performance category score.

    Response: The population-based quality measures and CAHPS will not be included in the quality scoring under the APM scoring standard. This final rule with comment period does not affect APM-specific measurement and incentives.

    We are finalizing the scoring policy for the quality performance category for the Next Generation ACO Model as proposed. We will use Next Generation ACO Model quality measures submitted by the ACO to the CMS Web Interface and MIPS benchmarks to score quality for MIPS eligible clinicians in a Next Generation ACO at the APM Entity level. An ACO's failure to report quality as required by the Next Generation ACO Model will result in a quality score of zero for the APM Entity group.

    (10) Next Generation ACO Model—Cost Performance Category Scoring Under the APM Scoring Standard

    We proposed that for the first MIPS performance period, we would not assess MIPS eligible clinicians in the Next Generation ACO Model participating in the MIPS APM under the cost performance category. We proposed this approach because: (1) MIPS eligible clinicians participating in the Next Generation ACO Model are already subject to cost and utilization performance assessments under the APM; (2) the Next Generation ACO Model measures cost in terms of an objective, absolute total cost of care expenditure benchmark for a population of attributed beneficiaries, and participating ACOs may share savings and/or losses based on that standard, whereas the MIPS cost measures are relative measures such that clinicians are graded relative to their peers and therefore different than assessing total cost of care for a population of attributed beneficiaries; and (3) the beneficiary attribution methodologies for measuring cost under the Next Generation ACO Model and MIPS differ, leading to an unpredictable degree of overlap (for eligible clinicians and for us) between the sets of beneficiaries for which eligible clinicians would be responsible that would vary based on unique APM Entity characteristics such as which and how many eligible clinicians comprise an ACO. We believe that with an APM Entity's finite resources for engaging in efforts to improve quality and lower costs for a specified beneficiary population, the population identified through the Next Generation ACO Model must take priority to ensure that the goals and model evaluation associated with the APM are as clear and free of confounding factors as possible. The potential for different, conflicting results across the Next Generation ACO Model and MIPS assessments—due to the differences in attribution, the inclusion in MIPS of episode-based measures that do not reflect the total cost of care, and the objective versus relative assessment factors listed above—creates uncertainty for eligible clinicians who are attempting to strategically transform their respective practices and succeed under the terms of the Next Generation ACO Model. For example, Next Generation ACOs are held accountable for expenditure benchmarks that reflect the total Medicare Parts A and B spending for their attributed beneficiaries, whereas many of the proposed MIPS cost measures focus on spending for particular episodes of care or clinical conditions. Therefore, we proposed to reduce the MIPS cost performance category weight to zero for all MIPS eligible clinicians participating in the Next Generation ACO Model. Accordingly, under section 1115A(d)(1) of the Act, we proposed to waive—for MIPS eligible clinicians participating in the Next Generation ACO Model—the requirement under section 1848(q)(5)(E)(i)(II) of the Act that specifies the scoring weight for the cost performance category. With the proposed reduction of the cost performance category weight to zero, we believe it would be unnecessary to specify and use cost measures in determining the MIPS final score for these MIPS eligible clinicians. Therefore, under section 1115A(d)(1) of the Act, we proposed to waive—for MIPS eligible clinicians participating in the Next Generation ACO Model—the requirements under sections 1848(q)(2)(B)(ii) and 1848(q)(2)(A)(ii) of the Act to specify and use, respectively, cost measures in calculating the MIPS final score for such eligible clinicians.

    Given the proposal to waive requirements under section 1848(q)(5)(E) of the Act to reduce the weight of the cost performance category to zero, we must subsequently specify how that weight would be redistributed among the remaining performance categories to maintain a total weight of 100 percent. We proposed to redistribute the cost performance category weight to both the improvement activities and advancing care information performance categories as specified in Table 13 of the proposed rule. The MIPS cost performance category was proposed to have a weight of 10 percent. Because the MIPS quality performance category bears a relatively higher weight than the other three MIPS performance categories, and the weight for this category will be reduced from 50 to 30 percent as of the 2021 payment year, we proposed to evenly redistribute the 10 percent cost weight to the improvement activities and advancing care information performance categories so that the redistribution does not move the relative weight of the quality performance category in the direction opposite to the change it will undergo in future years. Because the quality performance category weight is required under the statute to be reduced to 30 percent after the first 2 years of MIPS, we believe that increasing the quality performance category weight is incongruous with the eventual balance of the weights set forth in the statute. The redistributed cost performance category weight of 10 percent would result in a 5 percentage point increase (from 15 to 20 percent) for the improvement activities performance category and a 5 percentage point increase (from 25 to 30 percent) for the advancing care information performance category. We invited comments on the proposed redistributed weights and specifically on whether we should also increase the MIPS quality performance category weight.
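
    For illustration only, the following sketch works through the reweighting arithmetic described above: the 10 percent cost weight is split evenly between the improvement activities and advancing care information performance categories, leaving the quality weight unchanged. The function name and data structure are hypothetical.

```python
# Illustration only: split the 10 percent cost weight evenly between improvement
# activities and advancing care information, leaving quality unchanged. Weights
# are expressed in percentage points.

def redistribute_cost_weight(weights):
    """Zero out the cost weight and split it evenly between IA and ACI."""
    reweighted = dict(weights)
    half_cost = reweighted["cost"] / 2
    reweighted["cost"] = 0
    reweighted["improvement_activities"] += half_cost
    reweighted["advancing_care_information"] += half_cost
    return reweighted

proposed = {"quality": 50, "cost": 10,
            "improvement_activities": 15, "advancing_care_information": 25}
print(redistribute_cost_weight(proposed))
# {'quality': 50, 'cost': 0, 'improvement_activities': 20.0,
#  'advancing_care_information': 30.0}
```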

    In the proposed rule, we explained that we understand that as the MIPS cost performance category evolves over time, there might be greater potential for alignment and less potential duplication or conflict with MIPS cost measurement for MIPS eligible clinicians participating in MIPS APMs such as the Next Generation ACO Model. We stated that we would continue to monitor and consider how we might incorporate an assessment in the MIPS cost performance category into the APM scoring standard for the Next Generation ACO Model. We also understand that reducing the cost weight to zero and redistributing the weight to the improvement activities and advancing care information performance categories could, to the extent that improvement activities and advancing care information performance category scores are higher than the scores MIPS eligible clinicians would have received under the cost performance category, result in higher final scores on average for MIPS eligible clinicians in APM Entity groups participating in the Next Generation ACO Model. We solicited comment on the possible alternative of assigning a neutral score to APM Entity groups participating in the Next Generation ACO model for the cost performance category in order to moderate APM Entity scores. We also generally sought comment on our proposed policy, and on whether and how we should incorporate the cost performance category into the APM scoring standard for MIPS eligible clinicians in APM Entity groups participating in the Next Generation ACO model for future years.

    The following is a summary of the comments we received regarding our proposal to reduce the MIPS cost performance category weight to zero for APM Entity groups in the Next Generation ACO Model.

    Comment: Many commenters supported our proposal to not assess cost for MIPS APMs, including the Next Generation ACO Model.

    Response: We appreciate commenters' widespread support for this proposal. While we will continue to monitor and consider how we might in future years incorporate the MIPS cost performance category into the APM scoring standard for participants in the Next Generation ACO Model, we believe that assessment in this category would conflict with Next Generation ACO Model assessment at this time. Participants in the Next Generation ACO Model are assessed through particular attribution and benchmarking methodologies for purposes of earning shared savings payments; adding additional and separate MIPS incentives around cost would be redundant, potentially confusing, and could undermine the incentives built into the Next Generation ACO Model.

    We are finalizing our proposal to reduce the cost performance category weight to zero for MIPS eligible clinicians in APM Entity groups participating in the Next Generation ACO Model and to evenly redistribute the 10 percent cost weight to the improvement activities and advancing care information performance categories without changes.

    (11) Next Generation ACO Model—Improvement Activities and Advancing Care Information Performance Category Scoring Under the APM Scoring Standard

    We proposed that all MIPS eligible clinicians participating in the Next Generation ACO Model would submit data for the improvement activities and advancing care information performance categories. MIPS eligible clinicians participating in the Next Generation ACO Model may bill through a TIN that includes other MIPS eligible clinicians not participating in the APM. Therefore for both the improvement activities and advancing care information performance categories, we proposed that MIPS eligible clinicians participating in the Next Generation ACO Model would submit individual level data to MIPS and not group level data.

    For both the improvement activities and advancing care information performance categories, the scores from all of the individual MIPS eligible clinicians in the APM Entity group would be aggregated to the APM Entity level and averaged for a mean score. Any individual MIPS eligible clinicians that do not report for purposes of the improvement activities performance category or the advancing care information performance category would contribute a score of zero for that performance category in the calculation of the APM Entity score. All MIPS eligible clinicians in the APM Entity group would receive the same APM Entity score.
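
    For illustration only, the following sketch shows the proposed aggregation described above (which, as discussed below, was later revised for the improvement activities and advancing care information performance categories): individual scores are averaged across the APM Entity group, and a clinician who does not report contributes a zero. All identifiers and scores are hypothetical.

```python
# Illustration only: average individually reported scores to the APM Entity
# level, counting a clinician who submitted no data as zero. Identifiers and
# scores are hypothetical.

def apm_entity_average(individual_scores):
    """Average across the APM Entity group; None (no data submitted) counts as 0."""
    values = [score if score is not None else 0.0
              for score in individual_scores.values()]
    return sum(values) / len(values)

reported = {"NPI-1": 100.0, "NPI-2": 80.0, "NPI-3": None}  # NPI-3 did not report
print(apm_entity_average(reported))  # (100 + 80 + 0) / 3 = 60.0
```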

    Because the MIPS quality performance category bears a relatively higher weight than the other three MIPS performance categories, we proposed to evenly redistribute the 10 percent cost performance category weight to the improvement activities and advancing care information performance categories. Section 1848(q)(5)(C)(i) of the Act requires that MIPS eligible clinicians who are in a practice that is certified as a patient-centered medical home or comparable specialty practice, as determined by the Secretary, for a performance period shall be given the highest potential score for the improvement activities performance category. Accordingly, a MIPS eligible clinician participating in an APM Entity that meets the definition of a patient-centered medical home or comparable specialty practice will receive the highest potential improvement activities score. Additionally, section 1848(q)(5)(C)(ii) of the Act requires that MIPS eligible clinicians participating in APMs that are not patient-centered medical homes for a performance period shall earn a minimum score of one-half of the highest potential score for improvement activities.

    For the APM scoring standard for the first MIPS performance period, we proposed to weight the improvement activities and advancing care information performance categories for the Next Generation ACO Model in the same way that we proposed to weight those categories for the Shared Savings Program: 20 percent and 30 percent for improvement activities and advancing care information, respectively. We solicited comment on our proposals for reporting and scoring the improvement activities and advancing care information performance categories under the APM scoring standard. In particular, we solicited comment on the appropriate weight distributions in the first performance year.

    The following is a summary of the comments we received regarding our proposals to score and weight the improvement activities and advancing care information performance categories for APM Entity groups in the Next Generation ACO under the APM scoring standard.

    Comment: Several commenters suggested that all APM Entities including ACOs in the Next Generation ACO Model should receive full credit for improvement activities because they are already performing these activities as a result of being a participant in an APM. Some commenters also indicated that improvement activities should be reported at the APM Entity level rather than at the individual level then averaged. A few commenters believed that CMS should allow reporting at the APM Entity level for all performance categories. Some commenters also believed that the advancing care information performance category should not be part of the APM scoring standard but rather incorporated into APM design through CEHRT requirements. One commenter indicated that the activities that lead to success in the Next Generation ACO Model directly overlap with the proposed improvement activities.

    Response: We agree that we can streamline reporting and scoring for the improvement activities and advancing care information performance categories while recognizing the work Next Generation ACO Model participants do in pursuit of the APM goals. Therefore, as described below, for purposes of the APM scoring standard we will assign an improvement activities score to the Next Generation ACO Model based on the improvement activities required under the Model.

    Regarding the advancing care information performance category, we do not believe that there is a compelling reason to exclude assessment in this performance category from the APM scoring standard in the same way that we are reducing the weight of the cost performance category. We do not see advancing care information measurement as duplicative or in conflict with Next Generation ACO Model goals and requirements. Participation in the Next Generation ACO Model is aligned with many MIPS improvement activities measures. This is why we are finalizing a policy that further reduces MIPS reporting burdens for Next Generation ACO Model participants and recognizes the similarities between MIPS improvement activities and the requirements of participating in the Next Generation ACO Model.

    Comment: A commenter requested clarification of our proposal that MIPS eligible clinicians participating in the Next Generation ACO would submit data for the improvement activities performance category to MIPS individually, and not as a group.

    Response: The proposed policy involved individual reporting of improvement activities, which would be averaged across the ACO for one APM Entity group score. The finalized policy, described below, no longer requires individual reporting for purposes of the improvement activities performance category.

    Comment: A commenter noted that Next Generation ACO participants who are determined to be Partial QPs for a year may be disadvantaged given the reweighting of MIPS categories under the APM scoring standard.

    Response: We do not believe there is a disadvantage for Partial QPs who achieve that status through participation in any Advanced APM, including the Next Generation ACO Model to the extent it is determined to be an Advanced APM. As discussed in section II.F.5., the eligible clinicians who are Partial QPs can decide at the APM Entity group level to be subject to the MIPS reporting requirements and payment adjustment, in which case the eligible clinicians in the group would be scored under the APM scoring standard, or to be excluded from MIPS for the year.

    In response to comments, we are revising our proposal with respect to the scoring of the improvement activities performance category for the Next Generation ACO Model. CMS will assign all APM Entity groups in the Next Generation ACO Model the same improvement activities score based on the improvement activities required by the Next Generation ACO Model. To develop the improvement activities score assigned to all APM Entity groups in the Next Generation ACO Model, CMS will compare the requirements under the Next Generation ACO Model with the list of improvement activities measures in section II.E.5.f. of this final rule with comment period and score those measures in the same manner that they are otherwise scored for MIPS eligible clinicians according to section II.E.5.f. of this final rule with comment period. Thus, points assigned for participation in the Next Generation ACO Model will relate to documented requirements under the terms of the Next Generation ACO Model. We will publish the assigned improvement activities performance category score for the Next Generation ACO Model, based on the APM's improvement activity requirements, prior to the start of the performance period. In the event that the assigned score does not represent the maximum improvement activities score, APM Entities will have the opportunity to report additional improvement activities that would be applied to the baseline APM Entity group score. In the event that the baseline assigned score represents the maximum improvement activities score, APM Entities will not need to report additional improvement activities.

    In order to further reduce reporting burden and align with the generally applicable MIPS group reporting option, we are revising the advancing care information scoring policy for the Next Generation ACO Model. A MIPS eligible clinician may receive a score for the advancing care information performance category either through individual reporting or through group reporting based on a TIN according to the generally applicable MIPS reporting and scoring rules for the advancing care information performance category, described in section II.E.5.g of this final rule with comment period. We will attribute one advancing care information score to each MIPS eligible clinician in an APM Entity by looking at both individual and group data submitted for a MIPS eligible clinician and using the highest reported score. Thus, instead of only using individual scores to derive an APM Entity-level advancing care information score as proposed, we will use the highest score attributable to each MIPS eligible clinician in an APM Entity group in order to determine the APM Entity group score based on the average of the highest scores for all MIPS eligible clinicians in the APM Entity group.

    Like the proposed policy, each MIPS eligible clinician in the APM Entity group will receive one score, weighted equally with that of the other clinicians in the group, and CMS will calculate a single APM Entity-level advancing care information performance category score. Also like the proposed policy, for a MIPS eligible clinician who has no advancing care information performance category score—if the individual's TIN did not report as a group and the individual did not report—that MIPS eligible clinician will contribute a score of zero to the aggregate APM Entity group score.

    In summary, we will attribute one advancing care information performance category score to each MIPS eligible clinician in an APM Entity group, which will be averaged with the scores of all other MIPS eligible clinicians in the APM Entity group to derive a single APM Entity score. In attributing a score to an individual, we will use the highest score attributable to the TIN/NPI combination of a MIPS eligible clinician. Finally, if there is no group or individual score, we will attribute a zero to the MIPS eligible clinician, which will be included in the aggregate APM Entity score.
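
    For illustration only, the following sketch shows one way the attribution and averaging described above could be computed: each MIPS eligible clinician is attributed the highest advancing care information score available from TIN-level group reporting or individual reporting, a zero is used where neither exists, and the attributed scores are averaged into a single APM Entity group score. All identifiers and scores are hypothetical.

```python
# Illustration only: the finalized advancing care information (ACI) attribution
# described above. Each clinician gets the highest score available from TIN-level
# group reporting or individual reporting (zero if neither), and the attributed
# scores are averaged into one APM Entity group score. Data are hypothetical.

def attributed_aci_score(tin, npi, group_scores, individual_scores):
    """Highest ACI score attributable to the TIN/NPI combination, else zero."""
    candidates = []
    if tin in group_scores:
        candidates.append(group_scores[tin])
    if npi in individual_scores:
        candidates.append(individual_scores[npi])
    return max(candidates) if candidates else 0.0

def apm_entity_aci(clinicians, group_scores, individual_scores):
    """Average of the attributed scores across the APM Entity group."""
    attributed = [attributed_aci_score(tin, npi, group_scores, individual_scores)
                  for tin, npi in clinicians]
    return sum(attributed) / len(attributed)

clinicians = [("TIN-A", "NPI-1"), ("TIN-A", "NPI-2"), ("TIN-B", "NPI-3")]
group_scores = {"TIN-A": 75.0}        # TIN-B did not report as a group
individual_scores = {"NPI-2": 90.0}   # NPI-2 also reported individually
print(apm_entity_aci(clinicians, group_scores, individual_scores))  # (75 + 90 + 0) / 3 = 55.0
```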

    We have revised this policy for the advancing care information performance category for Next Generation ACOs under the APM scoring standard because we recognize that individual reporting in the advancing care information performance category for all MIPS eligible clinicians in an APM Entity group may be more burdensome than allowing some degree of group reporting where applicable, and we believe that requiring individual reporting on advancing care information in the Next Generation ACO Model context will not supply a meaningfully greater amount of information regarding the use of EHR technology as prescribed by the advancing care information performance category. We believe that this revised policy maintains the alignment with the generally applicable MIPS reporting and scoring requirements under the advancing care information performance category while responding to commenters' desires for reduced reporting requirements for MIPS APM participants. Therefore, we believe that the revised policy, relative to the proposed policy, has the potential to substantially reduce reporting burden with little to no reduction in our ability to accurately evaluate the adoption and use of EHR technology. We also believe this final policy balances the simplicity of TIN-level group reporting, which can reduce burden, with the flexibility needed to address partial TIN scenarios common among Next Generation ACOs in which a TIN may have some MIPS eligible clinicians participating in the ACO and some MIPS eligible clinicians not in the ACO. Table 12 summarizes the final APM scoring standard rules for the Next Generation ACO Model.

    Table 12—APM Scoring Standard for the Next Generation ACO Model—2017 Performance Period for the 2019 MIPS Payment Adjustment

    Performance category: Quality
    APM Entity submission requirement: ACOs submit quality measures to the CMS Web Interface on behalf of their participating MIPS eligible clinicians.
    Performance score: The MIPS quality performance category requirements and benchmarks will be used to determine the MIPS quality performance category score at the ACO level.
    Performance category weight: 50 percent.

    Performance category: Cost
    APM Entity submission requirement: MIPS eligible clinicians will not be assessed on cost.
    Performance score: N/A.
    Performance category weight: 0 percent.

    Performance category: Improvement Activities
    APM Entity submission requirement: ACOs only need to report improvement activities data if the CMS-assigned improvement activities score is below the maximum improvement activities score.
    Performance score: CMS will assign the same improvement activities score to each APM Entity group based on the activities required of participants in the Next Generation ACO Model. The minimum score is one half of the total possible points. If the assigned score does not represent the maximum improvement activities score, ACOs will have the opportunity to report additional improvement activities to add points to the APM Entity group score.
    Performance category weight: 20 percent.

    Performance category: Advancing Care Information
    APM Entity submission requirement: Each MIPS eligible clinician in the APM Entity group reports advancing care information to MIPS through either group reporting at the TIN level or individual reporting.
    Performance score: CMS will attribute one score to each MIPS eligible clinician in the APM Entity group. This score will be the highest score attributable to the TIN/NPI combination of each MIPS eligible clinician, which may be derived from either group or individual reporting. The scores attributed to each MIPS eligible clinician will be averaged to yield a single APM Entity group score.
    Performance category weight: 30 percent.
    (12) MIPS APMs Other Than the Shared Savings Program and the Next Generation ACO Model—Quality Performance Category Scoring Under the APM Scoring Standard

    For MIPS APMs other than the Shared Savings Program and the Next Generation ACO Model, we proposed that eligible clinicians or APM Entities would submit APM quality measures under their respective MIPS APM as usual, and those eligible clinicians or APM Entities would not also be required to submit quality information under MIPS for the first performance period. Current MIPS APMs have requirements regarding the number of quality measures, measure specifications, as well as the measure reporting method(s) and frequency of reporting, and have an established mechanism for submission of these measures to us. We believe there are operational considerations and constraints that would prevent us from being able to use the quality measure data from some MIPS APMs for the purpose of satisfying the MIPS data submission requirements for the quality performance category in the first performance period. For example, some current APMs use a quality measure data collection system or vehicle that is separate and distinct from the MIPS systems. We do not believe there is sufficient time to adequately implement changes to the current APM quality measure data collection timelines and infrastructure to conduct a smooth hand-off to the MIPS system that would enable use of APM quality measure data to satisfy the MIPS quality performance category requirements in the first MIPS performance period. As we have noted, we are concerned about subjecting MIPS eligible clinicians who participate in MIPS APMs to multiple performance assessments—under MIPS and under the APMs—that are not necessarily aligned and that could potentially undermine the validity of testing or performance evaluation under the APM. As stated in the proposed rule, our goal is to reduce MIPS eligible clinician reporting burden by not requiring APM participants to report quality data twice to us, and to avoid misaligned performance incentives. Therefore, we proposed that, for the first MIPS performance period only, for MIPS eligible clinicians participating in APM Entity groups in MIPS APMs (other than the Shared Savings Program or the Next Generation ACO Model), we would reduce the weight for the quality performance category to zero. As we explained in the proposed rule, we believe it is necessary to do this because we require additional time to make adjustments in systems and processes related to the submission and collection of APM quality measures to align APM quality measures with MIPS and ensure APM quality measure data can be submitted in a time and manner sufficient for use in assessing quality performance under MIPS and under the APM. Additionally, the implementation of a new program that does not account for non-MIPS measure sets, the operational complexity of connecting APM performance to valid MIPS quality performance category scores in the necessary timeframe, and the uncertainty of the validity and equity of scoring results could unintentionally undermine the quality performance assessments in MIPS APMs. Finally, for purposes of performing valid evaluations of MIPS APMs, we must reduce the number of confounding factors to the extent feasible, which, in this case, would include reporting and assessment on non-APM quality measures. Thus, we proposed to waive certain requirements of section 1848(q) of the Act for the first MIPS performance year to avoid risking adverse operational or program evaluation consequences for MIPS APMs while we work toward incorporating MIPS APM quality measures into MIPS scoring for future MIPS performance periods.

    Accordingly, under section 1115A(d)(1) of the Act, we proposed to waive—for MIPS eligible clinicians participating in MIPS APMs other than the Shared Savings Program or the Next Generation ACO Model—the requirement under section 1848(q)(5)(E)(i)(I) of the Act that specifies the scoring weight for the quality performance category. With the proposed reduction of the quality performance category weight to zero, we believe it would be unnecessary to establish an annual final list of quality measures as required under section 1848(q)(2)(D) of the Act, or to specify and use quality measures in determining the MIPS final score for these MIPS eligible clinicians. Therefore, under section 1115A(d)(1) of the Act, we proposed to waive— for MIPS eligible clinicians participating in MIPS APMs other than the Shared Savings Program or the Next Generation ACO Model—the requirements under sections 1848(q)(2)(D), 1848(q)(2)(B)(i) and 1848(q)(2)(A)(i) of the Act to establish a final list of quality measures (using certain criteria and processes); and to specify and use, respectively, quality measures in calculating the MIPS final score, for these MIPS eligible clinicians.

    We anticipated that beginning in the second MIPS performance period, the APM quality measure data submitted to us during the MIPS performance period would be used to derive a MIPS quality performance score for APM Entities in all APMs that meet criteria for application of the APM scoring standard. We also anticipated that it may be necessary to propose policies and waivers of different requirements of the statute—such as one for section 1848(q)(2)(D) of the Act, to enable the use of non-MIPS quality measures in the quality performance category score—through future rulemaking. We indicated that we expect that by the second MIPS performance period we will have had sufficient time to resolve operational constraints related to use of separate quality measure systems and to adjust quality measure data submission timelines. Therefore, beginning with the second MIPS performance period, we anticipated that through use of the waiver authority under section 1115A(d)(1) of the Act, the quality measure data for APM Entities for which the APM scoring standard applies would be used for calculation of a MIPS quality performance score in a manner specified in future rulemaking. We solicited comment on this transitional approach to use of APM quality measures for the MIPS quality performance category for purposes of the APM scoring standard under MIPS in future years.

    The following is a summary of the comments we received regarding our proposal to, for the first MIPS performance period, reweight the quality performance category to zero for APM Entity groups in MIPS APMs other than the Shared Savings Program or the Next Generation ACO Model.

    Comment: A commenter supported exempting MIPS APMs that are not using the CMS Web Interface to report quality from reporting for purposes of the MIPS quality performance category in the first performance year. One commenter was concerned that these MIPS APMs will not receive a quality score for the first performance year and another commenter recommended revising the performance category weights so that quality is included.

    Response: We agree that it would be ideal to include performance on quality for all MIPS APMs in the first MIPS performance year. As noted, we are only reweighting the quality performance category to zero for the first performance year due to operational limitations. APM Entities in MIPS APMs are, under the policies adopted in this final rule with comment period, required to base payment incentives on cost/utilization and quality measure performance. As such they will continue to report quality as required under the APM, and are not truly exempt from quality assessment for the year. We are finalizing the inclusion of a MIPS quality performance category score under the APM scoring standard for the 2018 performance year at § 414.1370(f), and will develop additional scoring policies for that year through future notice-and-comment rulemaking.

    We are finalizing as proposed the policy to reweight the MIPS quality performance category to zero percent for APM Entity groups in MIPS APMs other than the Shared Savings Program or the Next Generation ACO Model for the first performance year.

    (13) MIPS APMs Other Than the Shared Savings Program and Next Generation ACO—Cost Performance Category Scoring Under the APM Scoring Standard

    For the first MIPS performance period, we proposed, for MIPS eligible clinicians participating in MIPS APMs other than the Shared Savings Program or the Next Generation ACO Model, to reduce the weight of the cost performance category to zero. We proposed this approach because: (1) APM Entity groups are already subject to cost and utilization performance assessments under MIPS APMs; (2) MIPS APMs usually measure cost in terms of total cost of care, which is a broader accountability standard that inherently encompasses the purpose of the claims-based measures that have relatively narrow clinical scopes, and MIPS APMs that do not measure cost in terms of total cost of care may depart entirely from MIPS measures; and (3) the beneficiary attribution methodologies differ for measuring cost under APMs and MIPS, leading to an unpredictable degree of overlap (for eligible clinicians and for CMS) between the sets of beneficiaries for which eligible clinicians would be responsible, which would vary based on unique APM Entity characteristics such as which and how many eligible clinicians comprise an APM Entity. We believe that with an APM Entity's finite resources for engaging in efforts to improve quality and lower costs for a specified beneficiary population, the population identified through an APM must take priority to ensure that the goals and model evaluation associated with the APM are as clear and free of confounding factors as possible. The potential for different, conflicting results across APM and MIPS assessments creates uncertainty for MIPS eligible clinicians who are attempting to strategically transform their respective practices and succeed under the terms of an APM. Accordingly, under section 1115A(d)(1) of the Act, we proposed to waive—for MIPS eligible clinicians participating in MIPS APMs other than the Shared Savings Program or the Next Generation ACO Model—the requirement under section 1848(q)(5)(E)(i)(II) of the Act that specifies the scoring weight for the cost performance category.

    With the proposed reduction of the cost performance category weight to zero, we believed it would be unnecessary to specify and use cost measures in determining the MIPS final score for these MIPS eligible clinicians. Therefore, under section 1115A(d)(1) of the Act, we proposed to waive—for MIPS eligible clinicians participating in MIPS APMs other than the Shared Savings Program or the Next Generation ACO Model—the requirements under sections 1848(q)(2)(B)(ii) and 1848(q)(2)(A)(ii) of the Act to specify and use, respectively, cost measures in calculating the MIPS final score for such eligible clinicians.

    Given the proposal to waive requirements of section 1848(q) of the Act to reduce the weight of the quality and cost performance categories to zero, we also needed to specify how those weights would be redistributed among the remaining improvement activities and advancing care information categories in order to maintain a total weight of 100 percent. We proposed to redistribute the quality and the cost performance category weights as specified in Table 14 of the proposed rule.

    We understand that as the cost performance category evolves, the rationale we discussed in the proposed rule for establishing a weight of zero for this performance category might not be applicable in future years. We solicited comment on whether and how we should incorporate the cost performance category into the APM scoring standard under MIPS. We also understand that reducing the quality and cost performance category weights to zero and redistributing those weights to the improvement activities and advancing care information performance categories could, to the extent that improvement activities and advancing care information scores are higher than the scores MIPS eligible clinicians would have received under the quality and cost performance categories, result in higher final scores on average for MIPS eligible clinicians in APM Entity groups participating in MIPS APMs. We solicited comment on the possible alternative of assigning a neutral score to MIPS eligible clinicians in APM Entity groups participating in MIPS APMs for the quality and cost performance categories in order to moderate APM Entity scores.

    The following is a summary of the comments we received regarding our proposal to establish a MIPS cost performance category weight of zero for all MIPS eligible clinicians in APM Entities participating in the MIPS APMs other than the Shared Savings Program and the Next Generation ACO model.

    Comment: The majority of commenters supported not assessing cost for MIPS APMs by reducing the weight for the cost performance category to zero.

    Response: We appreciate commenters' widespread support for this proposal. While we will continue to monitor and consider how we might in future years incorporate the MIPS cost performance category into the APM scoring standard for all MIPS APMs, we believe that inclusion of this category would conflict with the assessment of cost made within MIPS APMs at this time. Participants in MIPS APMs are assessed through particular attribution and benchmarking methodologies for purposes of incentives and penalties; adding additional and separate MIPS incentives around cost would be redundant, potentially confusing, and could undermine the incentives built into these MIPS APMs.

    We are finalizing the proposal to reduce the cost performance category weight to zero percent for APM Entity groups in MIPS APMs other than the Shared Savings Program or the Next Generation ACO Model.

    (14) MIPS APMs Other Than the Shared Savings Program and Next Generation ACO Model—Improvement Activities and Advancing Care Information Performance Category Scoring Under the APM Scoring Standard

    We proposed that all MIPS eligible clinicians participating in a MIPS APM other than the Shared Savings Program or the Next Generation ACO Model would submit data for the improvement activities and advancing care information performance categories as individual MIPS eligible clinicians. MIPS eligible clinicians in these other APMs may bill through a TIN that includes MIPS eligible clinicians that do not participate in the APM. Therefore, for both the improvement activities and the advancing care information performance categories, we proposed that these MIPS eligible clinicians submit individual level data to MIPS and not group level data. For both the improvement activities and advancing care information performance categories, the scores from all of the individual MIPS eligible clinicians in the APM Entity group would be aggregated to the APM Entity level and averaged for a mean score. Any individual MIPS eligible clinicians that do not submit data for the improvement activities performance category or the advancing care information performance category would contribute a score of zero for that performance category in the calculation of the APM Entity score. All MIPS eligible clinicians in the APM Entity group would receive the same APM Entity group score.

    Section 1848(q)(5)(C)(i) of the Act requires that MIPS eligible clinicians who are in a practice that is certified as a patient-centered medical home or comparable specialty practice, as determined by the Secretary, for a performance period shall be given the highest potential score for the improvement activities performance category. Accordingly, a MIPS eligible clinician in an APM Entity group that meets the definition of a patient-centered medical home or comparable specialty practice will receive the highest potential score. Additionally, section 1848(q)(5)(C)(ii) of the Act requires that MIPS eligible clinicians participating in APMs that are not patient-centered medical homes for a performance period shall earn a minimum score of one-half of the highest potential score for improvement activities. We acknowledged that using this increased weight for improvement activities may make it easier in the first performance period for eligible clinicians in a MIPS APM to attain a higher MIPS score. We do not have historical data to assess the range of scores under improvement activities because this is the first time such activities are being assessed in such a manner.

    For the advancing care information performance category, we explained our belief that MIPS eligible clinicians participating in MIPS APMs would be using certified health IT and other health information technology to coordinate care and deliver better care to their patients. Most MIPS APMs encourage participants to use health IT to perform population management, monitor their own quality improvement activities, and better coordinate care for their patients in a way that aligns with the goals of the advancing care information performance category. In the proposed rule, we indicated that we wanted to ensure that where we proposed reductions in weights for other MIPS performance categories, such weights are appropriately redistributed to the advancing care information performance category.

    Therefore, for the first MIPS performance period, we proposed that the weights for the improvement activities and advancing care information performance categories would be 25 percent and 75 percent, respectively. We solicited comment on our proposals for reporting and scoring the improvement activities and advancing care information performance categories under the APM scoring standard. In particular, we solicited comment on the appropriate weight distributions in the first performance year and subsequent years when we anticipate incorporating assessment in the quality performance category for all MIPS eligible clinicians participating in MIPS APMs.
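
    For illustration only, the following sketch applies the first-year weights described above for MIPS APMs other than the Shared Savings Program and the Next Generation ACO Model (quality and cost at zero, improvement activities at 25 percent, advancing care information at 75 percent) to hypothetical APM Entity group scores on an assumed 0 to 100 scale.

```python
# Illustration only: first-year weighting for MIPS APMs other than the Shared
# Savings Program and the Next Generation ACO Model. Quality and cost are
# weighted at zero; improvement activities at 25 percent; advancing care
# information at 75 percent. Category scores below are hypothetical 0-100 values.

OTHER_MIPS_APM_WEIGHTS = {"quality": 0.00, "cost": 0.00,
                          "improvement_activities": 0.25,
                          "advancing_care_information": 0.75}

ia_score, aci_score = 100.0, 80.0  # hypothetical APM Entity group scores
final = (OTHER_MIPS_APM_WEIGHTS["improvement_activities"] * ia_score
         + OTHER_MIPS_APM_WEIGHTS["advancing_care_information"] * aci_score)
print(final)  # 0.25*100 + 0.75*80 = 85.0
```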

    The following is a summary of the comments we received regarding our proposals to score and weight the improvement activities and advancing care information performance categories for MIPS eligible clinicians participating in APM Entity groups in MIPS APMs other than the Shared Savings Program and the Next Generation ACO Model under the APM scoring standard.

    Comment: Some commenters were concerned that if eligible clinicians in MIPS APMs would be scored only on the advancing care information and improvement activities performance categories, clinicians in those MIPS APMs could disproportionately receive upward MIPS payment adjustments because they would not be assessed in the quality or cost performance categories. Commenters believed that it may be easier for clinicians to perform well in the improvement activities and advancing care information performance categories than in the quality and cost performance categories. Although a few commenters supported the proposed performance category weights, other commenters suggested alternatives. Two commenters were concerned about the performance category scoring weights for MIPS APMs under the APM scoring standard and suggested that the weights for the advancing care information and improvement activities performance categories should be similar to the ones proposed for the Shared Savings Program and Next Generation ACO Model. Two other commenters suggested assigning greater weight to the improvement activities performance category instead of redistributing so much of the weight to the advancing care information performance category. A few commenters suggested redistributing the weights from the quality and cost performance categories to the improvement activities and advancing care information performance categories differently—for example, 50 percent for improvement activities and 50 percent for advancing care information. One commenter indicated they understood the need to reweight the improvement activities and advancing care information for MIPS APMs other than the Shared Savings Program and the Next Generation ACO Model but requested that, in making reweighting decisions, CMS give consideration to ensuring a “level playing field.” A few commenters expressed concern that the proposed APM scoring standard for MIPS APMs increases the advancing care information category weight to 75 percent, and a commenter stated that performance in this category could be challenging for many clinicians, particularly those with little control over the IT choices and decisions made by their employers. A commenter recommended basing performance in this category on the adoption and use of EHR technology tailored to a specialty-appropriate assessment of meaningful use and urged CMS to work closely with physician societies.

    Response: We understand that an APM Entity group's final score under the proposed weights for the APM scoring standard could differ from the final score such APM Entity groups could receive if they were subject to both the quality and cost performance categories. However, for reasons discussed above, reweighting the quality performance category to zero percent is necessary for operational and programmatic reasons only for the first performance year, and we anticipate being able to incorporate performance under MIPS APM quality measures beginning in the second year of the Quality Payment Program, subject to future rulemaking. Also, in light of the MIPS scoring policies we are finalizing for the first performance year, we do not believe that this will cause a material adverse impact on MIPS scoring because the impact on MIPS payment adjustments for an eligible clinician will be affected more by meeting the minimum reporting requirements than by the weighting of performance categories. In subsequent years, we intend to incorporate assessments in the quality performance category into the APM scoring standard for all MIPS APMs, and the performance category weights will no longer so heavily emphasize advancing care information. For the first performance year, we believe that the proposed balance between improvement activities and advancing care information is appropriate, especially given the possibility that MIPS APM participants may be assigned the maximum improvement activities score under our final policy, as described below.

    Comment: A commenter stated that improvement activities reporting should be done by the APM Entity and that advancing care information should not be part of the APM scoring standard. Several commenters suggested that all APM Entities should receive full credit for improvement activities because they are already performing these activities as a result of being a participant in an APM. Other commenters suggested that both advancing care information and improvement activities be reported and scored at the individual level instead of being aggregated to the APM Entity level. A few commenters believed that CMS should allow reporting at the APM Entity level for all performance categories.

    Response: In contrast to the cost performance category, we do not find a compelling reason to reduce the weight of the advancing care information performance category because we do not believe it would potentially conflict with or duplicate assessments that are made within the MIPS APM.

    We agree with commenters that reporting in the improvement activities performance category could be more efficient if done by an APM Entity on behalf of the APM Entity group. In order to further reduce reporting burden on all parties and to better recognize improvement activities work performed through participation in MIPS APMs, we are modifying our proposal with respect to scoring for the improvement activities performance category under the MIPS APM scoring standard. As described above, we will assign an improvement activities performance category score at the MIPS APM level based on the requirements of participating in the particular MIPS APM. The baseline score will be applied to each APM Entity group in the MIPS APM. In the event that the assigned score is less than the maximum score, we would allow the APM Entity to report additional activities to add points to the APM Entity group score. With regards to the comment suggesting scoring improvement activities at the individual level, we believe that reporting and scoring improvement activities at the APM Entity level support the goals of APM participation, which focus on collective responsibility for the cost and quality of care for beneficiaries. Similarly, we agree with the comments pointing out that eligible clinicians participating in MIPS APMs are actively engaged in improvement activities by virtue of participating in the APM.

    Comment: A commenter sought clarification regarding how a subgroup of MIPS eligible clinicians that is not participating in a MIPS APM will be treated when other MIPS eligible clinicians in the same large multispecialty practice participate in a MIPS APM.

    Response: We maintain lists of participants that are in the MIPS APM using the APM participant identifier, and those MIPS eligible clinicians will be scored as an APM Entity group under the APM scoring standard. The non-APM participants in the practice will report to MIPS under the generally applicable MIPS requirements for reporting as an individual or group. If the practice decides to report to MIPS as a group under its TIN, then its reporting may include some data from the MIPS APM participants, even though those TIN/NPI combinations will receive their MIPS final score based on the APM Entity group according to the scoring hierarchy in section II.E.6. of this final rule with comment period.

    We are revising the proposed improvement activities scoring policy for MIPS APMs other than the Shared Savings Program or the Next Generation ACO Model. CMS will assign a score for the improvement activities performance category to each MIPS APM, and that score will be applied to each APM Entity group in the MIPS APM. To develop the improvement activities score for a MIPS APM, CMS will compare the requirements of the MIPS APM with the list of improvement activities measures in section II.E.5.f. of this final rule with comment period and score those measures in the same manner that they are otherwise scored for MIPS eligible clinicians according to section II.E.5.f. of this final rule with comment period. Thus, points assigned to an APM Entity group in a MIPS APM under the improvement activities performance category will relate to documented requirements under the terms and conditions of the MIPS APM. We will publish the assigned improvement activities scores for each MIPS APM on the CMS Web site prior to the beginning of the MIPS performance period. In the event that the assigned score does not represent the maximum improvement activities score, APM Entities will have the opportunity to report additional improvement activities that would apply to the APM Entity group score. In the event that the assigned score represents the maximum improvement activities score, APM Entity groups will not need to report additional improvement activities.
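
    To make the mechanics above concrete, the following is a minimal illustrative sketch, in Python, of how an APM Entity group's improvement activities score could be assembled under this policy. The function and variable names are hypothetical; the 40-point total reflects the improvement activities scoring discussed later in this section, and the one-half minimum reflects the assigned-score floor described in Table 13.

        # Minimal illustrative sketch only; names and example values are hypothetical.
        MAX_IA_POINTS = 40                        # total potential improvement activities points
        MIN_ASSIGNED_POINTS = MAX_IA_POINTS // 2  # assigned score is at least half the total

        def apm_entity_ia_score(cms_assigned_points, additional_activity_points=()):
            """Improvement activities score for an APM Entity group under the APM scoring
            standard: start from the score CMS assigns at the MIPS APM level and, if it is
            below the maximum, add points for any additional activities the APM Entity
            chooses to report."""
            score = max(cms_assigned_points, MIN_ASSIGNED_POINTS)
            if score < MAX_IA_POINTS:
                score += sum(additional_activity_points)
            return min(score, MAX_IA_POINTS)

        # Example: CMS assigns 30 of 40 points to the MIPS APM; the APM Entity reports
        # one additional activity worth 10 points and reaches the maximum.
        print(apm_entity_ia_score(30, (10,)))  # prints 40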

    In order to further reduce reporting burden and align with the generally applicable MIPS group reporting option, we are also revising the proposed advancing care information scoring policy for MIPS APMs other than the Shared Savings Program and the Next Generation ACO Model.

    A MIPS eligible clinician may receive a score for the advancing care information performance category either through individual reporting or through group reporting based on a TIN according to the generally applicable MIPS reporting and scoring rules for the advancing care information performance category, described in section II.E.5.g. of this final rule with comment period. We will attribute one score to each MIPS eligible clinician in an APM Entity group by looking for both individual and group data submitted for a MIPS eligible clinician and using the highest score. Thus, instead of only using individual scores to derive an APM Entity-level advancing care information score as proposed, we will use the highest score attributable to each MIPS eligible clinician in an APM Entity group in order to create the APM Entity group score based on the average of the highest scores for all MIPS eligible clinicians in the APM Entity group.

    Like the proposed policy, each MIPS eligible clinician in the APM Entity group will receive one score, weighted equally with that of the other clinicians in the group, and we will calculate a single APM Entity-level advancing care information score. Also like the proposed policy, for a MIPS eligible clinician who has no advancing care information score attributable to the individual—the individual's TIN did not report as a group and the individual did not report—that MIPS eligible clinician will contribute a score of zero to the aggregate APM Entity group score.

    In summary, we will attribute one advancing care information score to each MIPS eligible clinician in an APM Entity group, which will be averaged with the scores of all other MIPS eligible clinicians in the APM Entity group to derive a single APM Entity score. In attributing a score to an individual, we will use the highest score attributable to the TIN/NPI combination of a MIPS eligible clinician. Finally, if there is no group or individual score, we will attribute a zero to the MIPS eligible clinician, which will be included in the aggregate APM Entity score.
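
    The attribution and averaging steps described above can be summarized in a short illustrative sketch. This is not program code; the data structures and names are hypothetical, but the logic mirrors the finalized policy: take the highest advancing care information score attributable to each MIPS eligible clinician (whether from individual or TIN-level group reporting), treat a clinician with no score as zero, and average equally across the APM Entity group.

        def apm_entity_aci_score(clinicians, individual_scores, tin_group_scores):
            """clinicians: list of (TIN, NPI) pairs in the APM Entity group.
            individual_scores: dict mapping (TIN, NPI) to an individually reported score.
            tin_group_scores: dict mapping TIN to the score reported by that TIN as a group."""
            attributed = []
            for tin, npi in clinicians:
                individual = individual_scores.get((tin, npi), 0)
                group = tin_group_scores.get(tin, 0)
                attributed.append(max(individual, group))  # highest attributable score; zero if none
            return sum(attributed) / len(attributed)       # each clinician weighted equally

        # Example: one clinician has both an individual score (80) and a TIN-level score (70);
        # a second clinician has no data and contributes zero to the group average.
        print(apm_entity_aci_score(
            clinicians=[("TIN1", "NPI1"), ("TIN2", "NPI2")],
            individual_scores={("TIN1", "NPI1"): 80},
            tin_group_scores={"TIN1": 70},
        ))  # prints 40.0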

    We have revised the proposed policy for the advancing care information performance category for MIPS APM participants under the APM scoring standard because we recognize that individual reporting in the advancing care information performance category for all MIPS eligible clinicians in an APM Entity group may be more burdensome than allowing some degree of group reporting where applicable, and we believe that requiring individual reporting on advancing care information in the MIPS APM context will not supply a meaningfully greater amount of information regarding the use of EHR technology as prescribed by the advancing care information performance category. We believe that this revised policy maintains the alignment with the generally applicable MIPS reporting and scoring requirements under the advancing care information performance category while responding to commenters' desires for reduced reporting requirements for MIPS APM participants. Therefore, we believe that the revised policy, relative to the proposed policy, has the potential to substantially reduce reporting burden with little to no reduction in our ability to accurately evaluate the adoption and use of EHR technology. We also believe this final policy balances the simplicity of TIN-level group reporting, which can reduce burden, with the flexibility needed to address partial TIN scenarios common among APM Entities in MIPS APMs in which a TIN may have some MIPS eligible clinicians participating in the APM Entity and some MIPS eligible clinicians not in the APM Entity. Table 13 summarizes the finalized APM scoring standard rules for MIPS APMs other than the Shared Savings Program and Next Generation ACO Model.

    Table 13—APM Scoring Standard for MIPS APMs Other Than the Shared Savings Program and Next Generation ACO Model—2017 Performance Period for the 2019 Payment Adjustment

  • MIPS Performance category: Quality. APM Entity submission requirement: The APM Entity group will not be assessed on quality under MIPS in the first performance period; the APM Entity will submit quality measures to CMS as required by the APM. Performance score: N/A. Performance category weight: 0 percent.

  • MIPS Performance category: Cost. APM Entity submission requirement: MIPS eligible clinicians will not be assessed on cost. Performance score: N/A. Performance category weight: 0 percent.

  • MIPS Performance category: Improvement Activities. APM Entity submission requirement: APM Entities need to report improvement activities data only if the CMS-assigned improvement activities score is below the maximum improvement activities score. Performance score: CMS will assign the same improvement activities score to each APM Entity group based on the activities required of participants in the MIPS APM; the minimum assigned score is one half of the total possible points. If the assigned score does not represent the maximum improvement activities score, APM Entities will have the opportunity to report additional improvement activities to add points to the APM Entity group score. Performance category weight: 25 percent.

  • MIPS Performance category: Advancing Care Information. APM Entity submission requirement: Each MIPS eligible clinician in the APM Entity group reports advancing care information to MIPS through either group reporting at the TIN level or individual reporting. Performance score: CMS will attribute one score to each MIPS eligible clinician in the APM Entity group; this score will be the highest score attributable to the TIN/NPI combination of each MIPS eligible clinician, which may be derived from either group or individual reporting. The scores attributed to each MIPS eligible clinician will be averaged to yield a single APM Entity group score. Performance category weight: 75 percent.

    (15) APM Entity Data Submission Method

    Presently, we require APM Entities in MIPS APMs to use either the CMS Web Interface or another data submission mechanism to submit data on the quality measures for purposes of the APM. We are not currently proposing to change the method used by APM Entities to submit their quality measure data to CMS. Therefore, we expect that APM Entities such as Shared Savings Program ACOs will continue to submit their data on quality measures using the CMS Web Interface data submission mechanism. Similarly, in the event that the Comprehensive ESRD Care (CEC) Initiative is determined to be a MIPS APM, APM Entities in the CEC would continue to submit their quality measures to CMS using the Quality Measures Assessment Tool (QMAT) for purposes of the CEC quality performance assessment under the APM. We proposed that all MIPS eligible clinicians in APM Entities participating in MIPS APMs would be required to use one of the proposed MIPS data submission mechanisms to submit data for the advancing care information performance category.

    The following is a summary of the comments we received regarding the method used by APM Entities to submit quality data for purposes of MIPS.

    Comment: One commenter requested that all APM Entities be required to use the QRDA III data submission method because many EHRs now support this standard. Another commenter supported retaining the CMS Web Interface as the submission method for quality data for APM Entities participating in the Shared Savings Program. One commenter suggested that the improvement activities information could be collected via the CMS Web Interface. Another commenter suggested that all MIPS performance categories be submitted via web-based reporting. Some commenters communicated that MIPS eligible clinicians participating in APMs should not have to report quality data separately to both APMs and MIPS and another commenter suggested that MIPS APM participants only be required to submit data for the quality and improvement activities performance categories.

    Response: We appreciate the commenters' support and suggestions. We believe the policies that we are adopting in this final rule regarding data submission minimize reporting burden and disruption to APM participants, and we will continue to consider new reporting methods in the future.

    Comment: A commenter recommended that the data collection processes be standardized and data submission be minimized to the extent that data can be used for various purposes within the Medicare program because rural practices often have human and IT infrastructure resource limitations.

    Response: We thank the commenters for their input and believe that the finalized policies for the APM scoring standard represent further reductions in reporting burden and reflect our commitment to streamline submissions wherever possible. We will continue to look for ways to reduce reporting burdens without compromising the robustness of our assessments.

    We are finalizing without changes our proposal regarding APM Entity data submission for the quality performance category in all MIPS APMs and the advancing care information performance category in the Shared Savings Program. APM Entity groups will not submit data for the improvement activities performance category unless the improvement activities performance category score we assign at the MIPS APM level is less than the maximum score. In this instance, the APM Entities in the MIPS APM would use one of the MIPS data submission mechanisms if they opt to report additional improvement activities in order to increase their score for the improvement activities performance category. MIPS eligible clinicians in APM Entity groups participating in MIPS APMs other than the Shared Savings Program may report data for the advancing care information performance category to MIPS using a MIPS data submission mechanism for either group reporting at the TIN level or individual reporting. Table 14 describes data submission methods for the MIPS performance categories under the APM scoring standard.

    Table 14—APM Entity Submission Method for Each MIPS Performance Category

  • MIPS performance category: Quality. APM Entity eligible clinician submission method: The APM Entity group submits quality measure data to CMS as required under the APM.

  • MIPS performance category: Cost. APM Entity eligible clinician submission method: No data submitted by APM Entity group to MIPS.

  • MIPS performance category: Improvement Activities. APM Entity eligible clinician submission method: No data submitted by APM Entity group to MIPS unless the assigned score at the MIPS APM level does not represent the maximum improvement activities score, in which case the APM Entity may report additional improvement activities using a MIPS data submission mechanism.

  • MIPS performance category: Advancing Care Information. APM Entity eligible clinician submission method: Shared Savings Program ACO participant TINs submit data using a MIPS data submission mechanism. Next Generation ACO Model and other MIPS APM eligible clinicians submit data at either the individual level or at the TIN level using a MIPS data submission mechanism.

    (16) MIPS APM Performance Feedback

    For the first MIPS performance feedback specified under section 1848(q)(12) of the Act to be published by July 1, 2017, we proposed that all MIPS eligible clinicians participating in MIPS APMs would receive the same historical information prepared for all MIPS eligible clinicians, except that the report would indicate that the historical information provided to such MIPS eligible clinicians is for informational purposes only. MIPS eligible clinicians participating in APMs have been evaluated for performance only under the APM. Thus, the historical information may not be representative of the scores that these MIPS eligible clinicians would receive under MIPS.

    For MIPS eligible clinicians participating in MIPS APMs, we proposed that the MIPS performance feedback would consist only of the scores applicable to the APM Entity group for the specific MIPS performance period. For example, the MIPS eligible clinicians participating in the Shared Savings Program and Next Generation ACO Model would receive performance feedback for the quality, improvement activities, and advancing care information performance categories for the 2017 performance period. Because these MIPS eligible clinicians would not be assessed for the cost performance category, information on MIPS performance scores for the cost performance category would not be applicable to these MIPS eligible clinicians.

    We also proposed that, for the Shared Savings Program, the performance feedback would be available to the eligible clinicians participating in the Shared Savings Program at the group billing TIN level. For the Next Generation ACO Model we proposed that the performance feedback would be available to all MIPS eligible clinicians participating in the MIPS APM Entity.

    We proposed that in the first MIPS performance period, the MIPS eligible clinicians participating in MIPS APMs other than the Shared Savings Program or the Next Generation ACO Model would receive performance feedback for the improvement activities and advancing care information performance categories only, as they would not be assessed under the quality or cost performance categories. The information such as MIPS measure score comparisons for the quality and cost performance categories would not be applicable to these MIPS eligible clinicians because no such comparative data would exist. We proposed the performance feedback for MIPS eligible clinicians participating in these other APMs would be available for each MIPS eligible clinician that submitted MIPS data for these performance categories under their respective APM Entities. We invited comment on these proposals.

    The following is a summary of the comments we received regarding our proposals to provide the same historical information as those participating in MIPS, provide feedback on scores for applicable performance categories to the APM Entity group for the specific MIPS performance period, and provide feedback for those participating in the Shared Savings Program at the group TIN level and feedback for those participating in the Next Generation ACO Model and all other MIPS APMs at the individual level.

    Comment: One commenter recommended that CMS deliver feedback to clinicians or organizations by no later than October 1 of the reporting year to allow the organization to make appropriate changes in care improvement. One commenter stated that eligible clinicians participating in APMs need timely feedback to provide a clear understanding of patient attribution and performance measurement, and several commenters requested that CMS give feedback more frequently than annually during the first few years of the program.

    Response: We appreciate that MIPS eligible clinicians participating in MIPS APMs would prefer to receive feedback as early and often as possible in order to succeed in the Quality Payment Program and continue to improve, and we will continue to explore opportunities to provide more frequent feedback in the future.

    We are revising the proposed policy in order to maintain alignment with the generally applicable MIPS performance feedback policies. As noted in section II.E.8.a. of this final rule with comment period, the September 2016 QRUR will be used to satisfy the requirement under section 1848(q)(12)(A)(i) of the Act to provide MIPS eligible clinicians performance feedback on the quality and cost performance categories beginning July 1, 2017. We are finalizing a policy that all MIPS eligible clinicians scored under the APM scoring standard will also receive this performance feedback to the extent applicable, unless they did not have data included in the September 2016 QRUR. MIPS eligible clinicians without data included in the September 2016 QRUR will not receive performance feedback until CMS is able to use data acquired through the Quality Payment Program for performance feedback.

    6. MIPS Final Score Methodology

    By incentivizing quality and value for all MIPS eligible clinicians, MIPS creates a new mechanism for calculating MIPS eligible clinician payments. To implement this vision, we proposed a scoring methodology that allows for accountability and alignment across the performance categories and minimizes burden on MIPS eligible clinicians. Further, we proposed a scoring methodology that is meaningful, understandable and flexible for all MIPS eligible clinicians. Our proposed methodology would allow for multiple pathways to success with flexibility for the variety of practice types and reporting options. First, we proposed multiple ways that MIPS eligible clinicians may submit data to MIPS for the quality performance category. Second, we provided greater flexibility in the reporting requirements and scoring for MIPS. Third, we proposed that bonus points would be available for reporting high priority measures and electronic reporting of quality data. Recognizing that MIPS is a new program, we also outlined proposals which we believed are operationally feasible for us to implement in the transition year, while maintaining our longer-term vision.

    Section 1848(q) of the Act requires the Secretary to: (1) Develop a methodology for assessing the total performance of each MIPS eligible clinician according to performance standards for a performance period for a year; (2) using the methodology, provide a final score for each MIPS eligible clinician for each performance period; and (3) use the final score of the MIPS eligible clinician for a performance period to determine and apply a MIPS payment adjustment factor (and, as applicable, an additional MIPS payment adjustment factor) to the MIPS eligible clinician for the MIPS payment year. In section II.E.5 of the proposed rule (81 FR 28181), we proposed the measures and activities for each of the four MIPS performance categories: Quality, cost, improvement activities, and advancing care information. This section of the final rule with comment period discusses our proposals of the performance standards for the measures and activities for each of the four performance categories under section 1848(q)(3) of the Act, the methodology for determining a score for each of the four performance categories (referred to as a “performance category score”), and the methodology for determining a final score under section 1848(q)(5) of the Act based on the scores determined for each of the four performance categories. We proposed to define the performance category score in section II.E.6 of the proposed rule (81 FR 28247) as the assessment of each MIPS eligible clinician's performance on the applicable measures and activities for a performance category for a performance period based on the performance standards for those measures and activities. In section II.E.7 of the proposed rule (81 FR 28271), we included proposals for determining the MIPS adjustments factors based on the final score.

    As noted in section II.E.2 of the proposed rule (81 FR 28176), we proposed to use multiple identifiers to allow MIPS eligible clinicians to be measured as individuals, or collectively as part of a group or an APM Entity group (an APM Entity participating in a MIPS APM). Further, in section II.E.5.a.(2) of the proposed rule (81 FR 28182), we proposed that data for all four MIPS performance categories would be submitted using the same identifier (either individual or group) and that the final score would be calculated using the same identifier. Section II.E.5.h of the final rule with comment period describes our policies in the event that an APM Entity scored through the APM scoring standard fails reporting. The scoring proposals in section II.E.6 of the proposed rule (81 FR 28247), would be applied in the same manner for either individual submissions, proposed as TIN/NPI, or for the group submissions using the TIN identifier. Unless otherwise noted, for purposes of this section on scoring, the term “MIPS eligible clinician” will refer to clinicians that are reporting and are scored at either the individual or group level, but will not refer to clinicians participating in an APM Entity scored through the APM scoring standard.

    Comments related to APM Entity group reporting and scoring for MIPS eligible clinicians participating in MIPS APMs are summarized in section II.E.5.h of this final rule with comment period. All eligible clinicians that participate in APMs are considered MIPS eligible clinicians unless and until they are determined to be either QPs or Partial QPs who elect not to report under MIPS, and are excluded from MIPS, or unless another MIPS exclusion applies. We finalize at § 414.1380(d) that MIPS eligible clinicians in APM Entities that are subject to the APM scoring standard are scored using the methodology under § 414.1370, as described in II.E.5.h of this final rule with comment period.

    MIPS eligible clinicians who participate in APMs that are not MIPS APMs as defined in section II.E.5.h of the proposed rule (81 FR 28234) would report to MIPS as an individual MIPS eligible clinician or group. Unless otherwise specified, the proposals in section II.E.6.a of the proposed rule (81 FR 28247) that relate to reporting and scoring of measures and activities do not affect the APM scoring standard.

    Our rationale for our scoring methodology is grounded in the understanding that the MIPS scoring system has many components and numerous moving parts. Thus, we believe it is necessary to set key parameters around scoring, including requiring MIPS eligible clinicians to report at the individual or group level across all performance categories and, generally, to submit information for a performance category using a single submission mechanism. Too many permutations would add complexity and could create confusion among MIPS eligible clinicians as to what is or is not allowed.

    We have heard from stakeholders about our MIPS proposals. There are some major concerns, particularly for the transition year (MIPS payment year 2019), about program complexity, not having sufficient time to understand the program before being measured, and potentially receiving negative adjustments. Based on stakeholder feedback discussed in this section, we are adjusting multiple parts of our proposed scoring approach to enhance the likelihood that MIPS eligible clinicians who may not have had time to prepare can succeed under the program. We believe that these adjustments will enable more robust and thorough engagement with the program over time. Specifically, we have modified the performance standards used to evaluate the measures and activities in each performance category as well as the methodology used to create a final score, and we have lowered the performance threshold. Thus, we have created a transition year scoring methodology that does the following (a brief illustrative sketch follows the list):

    • Provides a negative 4 percent payment adjustment to MIPS eligible clinicians who do not submit any data to MIPS;

    • Ensures that MIPS eligible clinicians who submit data and meet program requirements under any of the three performance categories for which data must be submitted (quality, improvement activities, and advancing care information) for at least a 90-day period,20 and have low overall performance in the performance category or categories on which they choose to report may receive a final score at or slightly above the performance threshold and thus a neutral to small positive adjustment, and

    20 We note there are special circumstances in which MIPS eligible clinicians may submit data for a period of less than 90 days and avoid a negative MIPS payment adjustment. For example, in some circumstances, MIPS eligible clinicians may meet data completeness criteria for certain quality measures in less than the 90-day period. Also, in instances where MIPS eligible clinicians do not meet the data completeness criteria for quality measures submitted, we will provide partial credit for submission of these measures.

    • Ensures that MIPS eligible clinicians who submit data and meet program requirements under each of the three performance categories for which data must be submitted (quality, improvement activities, and advancing care information) for at least a 90-day period, and have average to high overall performance across the three categories may receive a final score above the performance threshold and thus a higher positive adjustment, and, for those MIPS eligible clinicians who receive a final score at or above the additional performance threshold, an additional positive adjustment.
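
    The following is a brief, non-authoritative Python sketch of the tiers described in the list above. The threshold values shown are placeholders only (the finalized performance threshold and additional performance threshold are discussed in section II.E.7 of this final rule with comment period), and the sketch returns a qualitative outcome rather than an actual payment adjustment percentage.

        # Placeholder threshold values for illustration only; not the finalized values.
        PERFORMANCE_THRESHOLD = 3
        ADDITIONAL_PERFORMANCE_THRESHOLD = 70

        def transition_year_outcome(submitted_any_data, final_score):
            """Qualitative summary of the transition year scoring methodology."""
            if not submitted_any_data:
                return "negative 4 percent payment adjustment"
            if final_score >= ADDITIONAL_PERFORMANCE_THRESHOLD:
                return "positive adjustment plus additional positive adjustment"
            if final_score > PERFORMANCE_THRESHOLD:
                return "positive adjustment"
            if final_score == PERFORMANCE_THRESHOLD:
                return "neutral adjustment"
            return "below the performance threshold; see section II.E.7 of this final rule"

        print(transition_year_outcome(False, 0))  # negative 4 percent payment adjustment
        print(transition_year_outcome(True, 3))   # neutral adjustment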

    a. Converting Measures and Activities Into Performance Category Scores

    (1) Policies That Apply Across Multiple Performance Categories

    The detailed policies for scoring the four performance categories are described in section II.E.6.a of the proposed rule (81 FR 28248). However, as the four performance categories collectively create a single MIPS final score, there are some cross-cutting policies that we proposed to apply to multiple performance categories.

    (a) Performance Standards

    Section 1848(q)(3)(A) of the Act requires the Secretary to establish performance standards for the measures and activities in the four MIPS performance categories. Section 1848(q)(3)(B) of the Act requires the Secretary, in establishing performance standards for measures and activities for the four MIPS performance categories, to consider historical performance standards, improvement, and the opportunity for continued improvement. We proposed to define the term, performance standards, at § 414.1305 as the level of performance and methodology that the MIPS eligible clinician is assessed on for a MIPS performance period at the measures and activities level for all MIPS performance categories. We defined the term, MIPS payment year, at § 414.1305 as the calendar year in which MIPS payment adjustments are applied. Performance standards for each performance category were proposed in more detail in section II.E.6 of the proposed rule (81 FR 28247). MIPS eligible clinicians would know the actual performance standards in advance of the performance period, when possible. Further, each performance category is unified under the principle that MIPS eligible clinicians would know, in advance of the performance period, the methodology for determining the performance standards and the methodology that would be used to score their performance. Table 16 of the proposed rule (81 FR 28249), summarizes the proposed performance standards.

    The following is a summary of the comments we received regarding our performance standard proposals.

    Comment: Multiple commenters were concerned that the performance standards may not be available in advance of the performance period, or that the performance standards methodologies would only be available “when possible”. Commenters requested that CMS publish the performance standards with as much advance notice as possible so that MIPS eligible clinicians will be able to plan and know the standards against which they will be measured.

    Response: The performance standard methodology will be known in advance so that MIPS eligible clinicians can understand how they will be measured. For improvement activities and advancing care information, the performance standards are known prior to the performance period and are delineated in this final rule with comment period. For the quality performance category, benchmarks are known prior to the performance period when benchmarks are based on the baseline period. For new measures in the quality performance category, for quality measures where there is no historical baseline data to build the benchmarks, and for measures in the cost performance category, the benchmarks will be based on performance period data and therefore, will not be known prior to the performance period.

    When performance standards for certain quality measures are not known prior to the performance period, we are implementing protections for MIPS eligible clinicians who ultimately perform poorly on these measures. For example, as discussed in section II.E.6.a.(2)(b) of this final rule with comment period, we have added quality performance floors for the transition year to protect MIPS eligible clinicians against unexpectedly low performance scores. For cost measures, the benchmarks will be based on performance period data and cannot be published in advance. However, we do plan to provide feedback on performance so that MIPS eligible clinicians can understand their performance and improve in subsequent years. We will provide feedback before the performance period based on prior period data, illustrating how MIPS eligible clinicians might perform on these measures and we will provide feedback after the performance period based on performance period data, illustrating how MIPS eligible clinicians actually performed on these measures.

    In addition, as discussed in section II.E.5.e.(2) of this final rule with comment period, we are also lowering the weight of the cost performance category to 0 percent of the final score for the transition year.

    Finally, as discussed in section II.E.7.c of this final rule with comment period, we are lowering the performance threshold for this transition year.

    Comment: One commenter stated that the government should not decide on definitions of quality and financial rewards or penalties for meeting such standards.

    Response: Section 1848(q)(3)(A) of the Act requires the Secretary to establish performance standards for the measures and activities in the four MIPS performance categories, including quality, and section 1848(q)(1)(A) of the Act generally requires us to develop a scoring methodology for assessing the total performance of each MIPS eligible clinician according to those standards and to use such scores to determine and apply MIPS payment adjustment factors and, as applicable, additional MIPS adjustments. We believe our proposals are consistent with these statutory requirements.

    After consideration of the comments, we are finalizing the term, performance standards, at § 414.1305 as the level of performance and methodology that the MIPS eligible clinician is assessed on for a MIPS performance period at the measures and activities level for all MIPS performance categories. We are finalizing at § 414.1380(a) that MIPS eligible clinicians are scored under MIPS based on their performance on measures and activities in four performance categories. MIPS eligible clinicians are scored against performance standards for each performance category and receive a final score, composed of their scores on individual measures and activities, and calculated according to the final score methodology. We are also finalizing at § 414.1380(a)(1) that measures and activities in the four performance categories are scored against performance standards.

    MIPS eligible clinicians will know, in advance of the performance period, the methodology for determining the performance standards and the methodology that will be used to score their performance. MIPS eligible clinicians will know the numerical performance standards in the quality performance category in advance of the performance period, when possible. A summary of the performance standards per performance category is provided in Table 15. As discussed in section II.E.6.a.(2) of this final rule with comment period, we are finalizing at § 414.1380(a)(1)(i) that, for the quality performance category, measures are scored between zero and 10 points. Performance is measured against benchmarks. Bonus points are available for both submitting specific types of measures and submitting measures using end-to-end electronic reporting. As discussed in section II.E.6.a.(3) of this final rule with comment period, we are finalizing at § 414.1380(a)(1)(ii) that, for the cost performance category, measures are scored between one and 10 points. Performance is also measured against benchmarks. As discussed in section II.E.6.a.(4), we are also finalizing at § 414.1380(a)(1)(iii) that, for the improvement activities performance category, each improvement activity is worth a certain number of points. The points for each reported activity are summed and scored against a total potential performance category score of 40 points. As discussed in section II.E.6.a.(5) of this final rule with comment period, we are finalizing at § 414.1380(a)(1)(iv) that, for the advancing care information performance category, the performance category score is the sum of a base score, performance score, and bonus score.
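
    As an informal illustration of the performance standards summarized above and finalized at § 414.1380(a)(1), the sketch below restates the general shape of each performance category score in Python. The function names and inputs are hypothetical assumptions for this sketch, and the detailed benchmark, bonus, and weighting rules appear in sections II.E.6.a.(2) through II.E.6.a.(5) of this final rule with comment period.

        def quality_measure_points(benchmark_points):
            # Quality measures are scored between zero and 10 points against benchmarks;
            # bonus points (not shown) are available for certain measure types and for
            # end-to-end electronic reporting.
            return max(0.0, min(10.0, benchmark_points))

        def cost_measure_points(benchmark_points):
            # Cost measures are scored between one and 10 points against benchmarks.
            return max(1.0, min(10.0, benchmark_points))

        def improvement_activities_points(reported_activity_points):
            # Points for each reported activity are summed and compared against a
            # total potential performance category score of 40 points.
            return min(sum(reported_activity_points), 40)

        def advancing_care_information_points(base_score, performance_score, bonus_score):
            # The performance category score is the sum of a base score, a performance
            # score, and a bonus score.
            return base_score + performance_score + bonus_score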

    As discussed in section II.E.6.a.(2) of this final rule with comment period, we are making changes to the quality performance category in response to comments received and are providing a minimum floor for all submitted measures to provide additional safeguards in the transition year. As discussed in section II.E.6.a.(4) of this final rule with comment period, we are making a minor modification to the improvement activities standard to provide additional clarification on improvement activities scoring and to align with comments received. Further, as discussed in section II.E.5.f of this final rule with comment period, we are making additional changes to the advancing care information performance category to align with comments received. We are also finalizing our definition of performance category score as defined in § 414.1305 as the assessment of each MIPS eligible clinician's performance on the applicable measures and activities for a performance category for a performance period based on the performance standards for those measures and activities. Additionally, we are finalizing the definition of the term, MIPS payment year with a modification for further consistency with the statute. Specifically, MIPS payment year is defined at § 414.1305 as a calendar year in which the MIPS payment adjustment factor, and if applicable the additional MIPS payment adjustment factor, are applied to Medicare Part B payments.

    Table 15—Performance Category Performance Standards for the 2017 Performance Period

  • Performance category: Quality. Proposed performance standard: Measure benchmarks to assign points, plus bonus points. Final performance standard: Measure benchmarks to assign points, plus bonus points with a minimum floor for all measures.

  • Performance category: Cost. Proposed performance standard: Measure benchmarks to assign points. Final performance standard: Measure benchmarks to assign points.

  • Performance category: Improvement Activities. Proposed performance standard: Based on participation in activities that align with the patient-centered medical home. Number of points from reported activities compared against a highest potential score of 60 points. Final performance standard: Based on participation in activities listed in Table H of the Appendix of this final rule with comment period; based on participation as a patient-centered medical home or comparable specialty practice; based on participation in the CMS study on improvement activities and measurement; based on participation as an APM. Number of points from reported activities or credit from participation in an APM compared against a highest potential score of 40 points.

  • Performance category: Advancing Care Information. Proposed performance standard: Based on participation (base score) and performance (performance score). Base score: Achieved by meeting the Protect Patient Health Information objective and reporting the numerator (of at least one) and denominator or yes/no statement as applicable (only a yes statement would qualify for credit under the base score) for each required measure. Performance score: Decile scale for additional achievement on measures above the base score requirements, plus 1 bonus point. Final performance standard: Based on participation (base score) and performance (performance score). Base score: Achieved by meeting the Protect Patient Health Information objective and reporting the numerator (of at least one) and denominator or yes/no statement as applicable (only a yes statement would qualify for credit under the base score) for each required measure. Performance score: Between zero and 10 or 20 percent per measure (as designated by CMS) based upon measure reporting rate, plus up to 15 percent bonus score.

    (b) Unified Scoring System

    Section 1848(q)(5)(A) of the Act requires the Secretary to develop a methodology for assessing the total performance of each MIPS eligible clinician according to performance standards for applicable measures and activities in each performance category applicable to the MIPS eligible clinician for a performance period. While MIPS has four different performance categories, we proposed a unified scoring system that enables MIPS eligible clinicians, beneficiaries, and stakeholders to understand what is required for a strong performance in MIPS while being consistent with statutory requirements. We sought to keep the scoring as simple as possible, while providing flexibility for the variety of practice types and reporting options. We proposed to incorporate the following characteristics into the scoring methodologies for each of the four MIPS performance categories:

    • For the quality and cost performance categories, all measures would be converted to a 10-point scoring system which provides a framework to universally compare different types of measures across different types of MIPS eligible clinicians. We noted that a similar point framework has been successfully implemented in several other CMS quality programs including the Hospital VBP Program.

    • The measure and activity performance standards would be published, where feasible, before the performance period begins, so that MIPS eligible clinicians can track their performance during the performance period. This transparency would make the information more actionable to MIPS eligible clinicians.

    • Unlike the PQRS or the EHR Incentive Program, we proposed that we generally would not include “all-or-nothing” reporting requirements for MIPS. The methodology would score measures and activities that meet certain standards defined in section II.E.5 of the proposed rule (81 FR 28181 through 28247) and this section of the final rule with comment period. However, section 1848(q)(5)(B)(i) of the Act provides that under the MIPS scoring methodology, MIPS eligible clinicians who fail to report on an applicable measure or activity that is required to be reported shall be treated as receiving the lowest possible score for the measure or activity. Therefore, MIPS eligible clinicians that fail to report specific measures or activities would receive zero points for each required measure or activity that they do not submit to MIPS.

    • The scoring system would ensure sufficient reliability and validity by only scoring the measures that meet certain standards (such as the required case minimum). The standards are described later in this section.

    • The scoring proposals provide incentives for MIPS eligible clinicians to invest and focus on certain measures and activities that meet high priority policy goals such as improving beneficiary health, improving care coordination through health information exchange, or encouraging APM Entity participation.

    • Performance at any level would receive points towards the performance category scores.

    We noted that we anticipated scoring in future years would continue to align and simplify. We requested comment on the characteristics of the proposed unified scoring system.

    We also proposed at § 414.1325 that MIPS eligible clinicians and groups may elect to submit information via multiple mechanisms; however, they must use the same identifier for all performance categories and they may only use one submission mechanism per performance category. For example, a MIPS eligible clinician could use one submission mechanism for sending quality measures and another for sending improvement activities data, but a MIPS eligible clinician could not use two submission mechanisms for a single performance category, such as submitting three quality measures via claims and three quality measures via registry. We did intend to allow some flexibility; for example, in rare situations where a MIPS eligible clinician submits data for a performance category via multiple submission mechanisms (for example, submits data for the quality performance category through both a registry and a QCDR), we would score all the options (such as scoring the quality performance category with data from the registry, and also scoring the quality performance category with data from the QCDR) and use the highest performance category score for the MIPS eligible clinician's final score. We would not, however, combine the submission mechanisms to calculate an aggregated performance category score.

    In carrying out MIPS, section 1848(q)(1)(E) of the Act requires the Secretary to encourage the use of QCDRs under section 1848(m)(3)(E) of the Act. In addition, section 1848(q)(5)(B)(ii) of the Act provides that under the methodology for assessing the total performance of each MIPS eligible clinician, the Secretary shall encourage MIPS eligible clinicians to report on applicable measures under the quality performance category through the use of CEHRT and QCDRs. To encourage the use of QCDRs, we proposed opportunities for QCDRs to report new and innovative quality measures. In addition, several improvement activities emphasize QCDR participation. Finally, we proposed under section II.E.5.a of the proposed rule (81 FR 28181) for QCDRs to be able to submit data on all MIPS performance categories. We believe these flexible options would allow MIPS eligible clinicians to meet the submission criteria for MIPS in a low burden manner, which in turn may positively affect their final score. We further believe these flexibilities encourage use of end-to-end electronic data extraction and submission where feasible today, and foster further development of methods that avoid manual data collection where automation is a valid, reliable option and that promote the goal of capturing data once and re-using it for multiple appropriate purposes.

    In addition, section 1848(q)(5)(D) of the Act lays out the requirements for incorporating performance improvement into the MIPS scoring methodology beginning with the second MIPS performance period, if data sufficient to measure improvement is available. Section 1848(q)(5)(D)(ii) of the Act also provides that achievement may be weighted higher than improvement. Stated generally, we consider achievement to mean how a MIPS eligible clinician performs relative to performance standards, and improvement to mean how a MIPS eligible clinician performs compared to the MIPS eligible clinician's own previous performance on measures and activities in a performance category. Improvement would not be scored for the transition year of MIPS, but we solicited comment on how best to incorporate improvement scoring for all performance categories.

    The following is a summary of the comments we received regarding our proposal for a unified scoring system.

    Comment: Some commenters expressed support for the unified scoring system and agreed with having a unified and simplified scoring system, but some believed the proposed scoring methodology for MIPS is confusing and requires more alignment across performance categories. Commenters noted that physicians will not be able to understand how CMS calculated their score and would not know if appeals to CMS would be needed in order to correct information or plan for the future. Several commenters requested one single score, or fewer than four separate performance category scores, rather than aggregating individual scores for the four performance categories. Others noted the need for feedback prior to scoring. Others recommended simplifying the scoring system by aligning it across performance categories, and one commenter expressed concern about the total number of measures and activities across the four performance categories adding complexity to the scoring.

    Response: Despite our efforts to create a transparent and standardized scoring system, we understand that some stakeholders may be concerned about the scoring complexity and may want more alignment across categories. We also understand stakeholders' requests for feedback prior to scoring. Several of our core objectives for MIPS are to promote program understanding and participation through customized communication, education, outreach and support, and to improve data and information sharing to provide accurate, timely, and actionable feedback to MIPS eligible clinicians. Prior to receiving a payment adjustment, MIPS eligible clinicians will receive timely confidential feedback on their program performance as discussed in section II.E.8.a of this final rule with comment period.

    We have simplified the overall scoring approach for MIPS eligible clinicians in the transition year. Under this scoring approach, MIPS eligible clinicians who report measures/activities with minimal levels of performance will not be subject to negative payment adjustments if their final score is at or above the performance threshold. We believe having scores for individual performance categories aligns with the statute; however, we have provided numerous examples within section II.E.6.a.(2)(g) of this final rule with comment period to provide transparency as to how we will calculate MIPS eligible clinicians' scores and help MIPS eligible clinicians to understand how to succeed in the program. Further, we will continue to provide additional materials to create a transparent and standardized scoring system.

    Comment: Commenters expressed concern that the unified scoring system may not allow consumers and payers to make meaningful comparisons across MIPS eligible clinicians. The commenters' reasons for concern include the varied reporting options and different score denominators.

    Response: We have taken a patient-centered approach toward implementing our unified scoring system, which does allow for special circumstances for certain types of practices such as non-patient facing professionals, as well as small practices, rural practices and those in HPSA geographic areas. We believe our approach balances the interests of patients and payers while also providing flexibility for the variety of MIPS eligible clinician practices and encourages more collaboration across practice types.

    Comment: Multiple commenters requested clarification on evaluating group performance within each of the four performance categories; specifically, whether it is CMS's intent to evaluate each individual within a group and somehow aggregate that performance into a composite group score or to evaluate the group as a single entity.

    Response: Evaluation of group practices and individual practices is discussed under each performance category in sections II.E.5.b., II.E.5.e., II.E.5.f., and II.E.5.g. of this final rule with comment period.

    Comment: One commenter requested that CMS explain the benefit of reporting via QCDR and why this method is emphasized in the proposed rule.

    Response: QCDRs have more flexibility to collect data from different data sources and to rapidly develop innovative measures that can be incorporated into MIPS. Therefore, we believe that QCDRs provide an opportunity for innovative measurement that is both relevant to MIPS eligible clinicians and beneficial to Medicare beneficiaries. In addition, section 1848(q)(1)(E) of the Act requires us to encourage the use of QCDRs.

    Comment: Some commenters supported the removal of “all-or-nothing” scoring. One commenter encouraged CMS to create more partial-scoring opportunities.

    Response: We appreciate the comments on the removal of “all-or-nothing” scoring. We will take these comments into consideration as we consider additional recommendations for partial credit in future rulemaking.

    Comment: One commenter expressed concern that CMS cannot measure physician “performance” accurately. The commenter cited multiple sources that supported this statement.

    Response: We recognize the challenges in measuring clinician performance and continue to work with stakeholders to address concerns.

    After consideration of these comments, we are finalizing all of our policies related to unified scoring as proposed, except we are modifying our proposed policy on scoring quality measures.

    We list below all policies we are finalizing related to our proposed unified scoring system.

    • For the quality and cost performance categories, all measures will be converted to a 10-point scoring system which provides a framework to universally compare different types of measures across different types of MIPS eligible clinicians.

    • The measure and activity performance standards will be published, where feasible, before the performance period begins, so that MIPS eligible clinicians can track their performance during the performance period.

    • MIPS eligible clinicians who fail to report specific measures or activities would receive zero points for each required measure or activity that they do not submit to MIPS.

    • The scoring policies provide incentives for MIPS eligible clinicians to invest and focus on certain measures and activities that meet high priority policy goals such as improving beneficiary health, improving care coordination through health information exchange, or encouraging APM Entity participation.

    • Performance at any level would receive points towards the performance category scores.

    We also are finalizing at § 414.1325 that MIPS eligible clinicians and groups may elect to submit information via multiple mechanisms; however, they must use the same identifier for all performance categories and they may only use one submission mechanism per performance category. For example, a MIPS eligible clinician could use one submission mechanism for sending quality measures and another for sending improvement activities data, but a MIPS eligible clinician could not use two submission mechanisms for a single performance category, such as submitting three quality measures via claims and three quality measures via registry. We intend to allow some flexibility; for example, in rare situations where a MIPS eligible clinician submits data for a performance category via multiple submission mechanisms (for example, submits data for the quality performance category through both a registry and a QCDR), we will score all the options (such as scoring the quality performance category with data from the registry, and also scoring the quality performance category with data from the QCDR) and use the highest performance category score for the MIPS eligible clinician's final score. We will not, however, combine the submission mechanisms to calculate an aggregated performance category score. The one exception to this policy is CAHPS for MIPS, which is submitted using a CMS-approved survey vendor. CAHPS for MIPS can be scored in conjunction with other submission mechanisms.
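
    The highest-score rule described above can be stated compactly. The following Python sketch is illustrative only and uses hypothetical names: each submission mechanism's data are scored separately for the performance category, and the highest resulting performance category score is used; the submissions are not combined.

        def category_score(scores_by_mechanism):
            """scores_by_mechanism: dict mapping a submission mechanism (for example,
            'registry' or 'qcdr') to the performance category score computed from that
            mechanism's data alone."""
            return max(scores_by_mechanism.values())

        # Example: quality data submitted through both a registry and a QCDR are scored
        # separately, and the higher category score is used toward the final score.
        print(category_score({"registry": 52.0, "qcdr": 61.5}))  # prints 61.5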

    With regard to the above policy, we note that some submission mechanisms allow for multiple measure types; for example, a QCDR could submit data on behalf of an eligible clinician for a mixture of MIPS eCQMs and non-MIPS measures. However, we recognize that the scoring of only one submission mechanism in the transition year may influence which measures a MIPS eligible clinician selects to submit for the performance period. For example, a MIPS eligible clinician or group may only be able to report a limited number of measures relevant to their practice through a given submission mechanism, and therefore may choose a different submission mechanism through which a more robust set of measures relevant to their practice is available. We are seeking comment on whether we should modify this policy to allow combined scoring on all measures submitted across multiple submission mechanisms within a performance category. Specifically, we are seeking comment on the following questions:

    • Would offering a combined performance category score across submission mechanisms encourage electronic reporting and the development of more measures that effectively use highly reliable, accurate clinical data routinely captured by CEHRT in the normal course of delivering safe and effective care? If so, are there particular approaches to the performance category score combination that would provide more encouragement than others?

    • What approach should be used to combine the scores for quality measures from multiple submission mechanisms into a single aggregate score for the quality performance category? For example, should CMS offer a weighted average score on quality measures submitted through two or more different mechanisms? Or take the highest scores for any submitted measure regardless of how the measure is submitted?

    • What steps should CMS and ONC consider taking to increase clinician and consumer confidence in the reliability of the technology used to extract, aggregate, and submit electronic quality measurement data to CMS?

    • What enhancements to submission mechanisms or scoring methodologies for future years might reinforce incentives to encourage electronic reporting and improve reliability and comparability of CQMs reported by different electronic mechanisms?

    We are modifying our proposed policy on scoring quality measures. Specifically, as discussed in section II.E.6.a.(2)(b) of this final rule with comment period, for the transition year, we are providing a global minimum floor of 3 points for all quality measures submitted. As discussed in section II.E.6.a.(2)(c) of the final rule with comment period, we are also modifying our proposed policy in which we would only score the measures that meet certain standards (such as required case minimum). For the transition year, we are automatically providing 3 points for quality measures that are submitted, regardless of whether they lack a benchmark or do not meet the case minimum or data completeness requirements. Finally, as discussed in section II.E.6.h of this final rule with comment period, we intend to propose options for scoring based on improvement through future rulemaking.
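
    A short illustrative sketch of the transition year quality measure floor described above follows; the function name is hypothetical. A required measure that is not submitted receives zero points; a submitted measure that lacks a benchmark or does not meet the case minimum or data completeness requirements automatically receives 3 points; and any other submitted measure receives no fewer than 3 points.

        def transition_year_quality_measure_points(submitted, benchmark_points=None):
            if not submitted:
                return 0   # required measure not submitted: zero points
            if benchmark_points is None:
                return 3   # no benchmark, case minimum, or data completeness: automatic 3 points
            return max(3, benchmark_points)  # global minimum floor of 3 points

        print(transition_year_quality_measure_points(True))       # prints 3
        print(transition_year_quality_measure_points(True, 8.7))  # prints 8.7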

    Various policies related to scoring the four performance categories are finalized at § 414.1380(b) and described in more detail in sections II.E.6.a.(2), II.E.6.a.(3), II.E.6.a.(4), and II.E.5.g.(6) of this final rule with comment period.

    (c) Baseline Period

    In other Medicare quality programs, such as the Hospital VBP Program, we have adopted a baseline period that occurs prior to the performance period for a program year to measure improvement and to establish performance standards. We view the MIPS Program as necessitating a similar baseline period for the quality performance category. We intend to establish a baseline period for each performance period for a MIPS payment year to measure improvement for the quality performance category and to enable us to calculate performance standards that we can establish and announce prior to the performance period. As with the Hospital VBP Program, we intend to adopt one baseline period for each MIPS payment year that is as close as possible in duration to the performance period specified for a MIPS payment year. In addition, evaluating performance compared to a baseline period may enable other payers to incorporate MIPS benchmarks into their programs. For each MIPS payment year, we proposed at section II.E.6.a.(1)(c) of the proposed rule (81 FR 28250) that the baseline period would be the 12-month calendar year that is 2 years prior to the performance period for the MIPS payment year. Therefore, for the first MIPS payment year (CY 2019 payment adjustments), for the quality performance category, we proposed that the baseline period would be CY 2015 which is 2 years prior to the proposed CY 2017 performance period. As discussed in section II.E.6.a.(2)(a) of the proposed rule (81 FR 28251), we proposed to use performance in the baseline period to set benchmarks for the quality performance category, with the exception of new measures for which we would set the benchmarks using performance in the performance period and an exception for CMS Web Interface reporters, which will use the benchmarks associated with Shared Savings Program. For the cost performance category, we proposed to set the benchmarks using performance in the performance period and not the baseline period, as discussed in section II.E.6.a.(3) of the proposed rule (81 FR 28259). For the cost performance category, we also made an alternative proposal to set the benchmarks using performance in the baseline period. We proposed to define the term “measure benchmark” for the quality and cost performance categories (81 FR 28250) as the level of performance that the MIPS eligible clinician will be assessed on for a performance period at the measures and activities level.

    The following is a summary of the comments we received regarding our proposal to define the baseline period.

    Comment: One commenter expressed concern that baseline scoring may be misaligned when using benchmarks from 1 year for the cost performance category and a different year for measures in the quality performance category. Multiple commenters believed all categories should use the same year to determine benchmarks. Some commenters requested that CMS measure MIPS eligible clinicians as close as possible to the performance period, ideally less than 2 years from the performance period. Others noted concern about the ability of a clinician to correct actions with 2-year-old data.

    Response: Ideally, we would like to have data sources for our benchmarks aligned across the quality and cost performance categories. However, we have purposefully chosen different periods for the quality and cost performance categories. We proposed to use the baseline period for benchmarks for the quality performance category so that MIPS eligible clinicians can know quality performance category benchmarks in advance; however, we believe there are disadvantages to benchmarking cost measures to a previous year. For example, development of a new technology or a change in payment policy could result in a significant change in typical cost from year to year. Therefore, for more accurate data, it is better to build cost benchmarks from performance period data than from baseline period data. We believe there is more value in advance notice for quality measures, so that MIPS eligible clinicians can benchmark themselves when historical data is available. In contrast, for the cost performance category, we believe it is more beneficial to base benchmarks on the performance period. After considering comments, we are finalizing that the baseline period will be the 12-month calendar year that is 2 years prior to the performance period for the MIPS payment year. We believe that data from 2 years prior is the most recent data we can use to develop benchmarks before the performance period.

    We will use performance in the baseline period to set benchmarks for the quality performance category, with the exception of new quality measures, or quality measures that lack historical data, for which we will set the benchmarks using performance in the performance period, and an exception for CMS Web Interface reporters, which will use the benchmarks associated with the Shared Savings Program. For the cost performance category, we will set the benchmarks using performance in the performance period and not the baseline period. We are defining the term “measure benchmark” for the quality and cost performance categories at § 414.1305 as the level of performance that the MIPS eligible clinician is assessed on for a specific performance period at the measures and activities level.

    (2) Scoring the Quality Performance Category

    In section II.E.5.b.(3) of the proposed rule, we proposed multiple ways that MIPS eligible clinicians may submit data for the quality performance category to MIPS; however, we proposed that the scoring methodology would be consistent regardless of how the data is submitted. In summary, we proposed at § 414.1380(b)(1) to assign 1-10 points to each measure based on how a MIPS eligible clinician's performance compares to benchmarks. Measures must have the required case minimum to be scored. We proposed that if a MIPS eligible clinician fails to submit a measure required under the quality performance category criteria, then the MIPS eligible clinician would receive zero points for that measure. We proposed that MIPS eligible clinicians would not receive zero points if the required measure is submitted (meeting the data completeness criteria as defined in section II.E.5.b.(3)(b) of the proposed rule (81 FR 28188)) but is unable to be scored for any of the reasons listed in section II.E.6.a.(2) of the proposed rule (81 FR 28250), such as not meeting the required case minimum or lacking a benchmark. We described in section II.E.6.a.(2)(d) of the proposed rule (81 FR 28254) examples of how points would be allocated and how to compute the overall quality performance category score under these scenarios. Bonus points would be available for reporting high priority measures, defined as outcome, appropriate use, efficiency, care coordination, patient safety, and patient experience measures.

    As discussed in section II.E.6.a.(2)(g) of the proposed rule (81 FR 28256), the quality performance category score would be the sum of all the points assigned for the scored measures required for the quality performance category plus the bonus points (subject to the cap) divided by the sum of total possible points. Examples of the calculations were provided in the proposed rule (81 FR 28256).
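    For illustration only, the following sketch (in Python) shows the arithmetic described above: the sum of assigned measure points plus capped bonus points, divided by the total possible points. The point values, bonus points, bonus cap, and number of required measures are hypothetical assumptions, not figures from this rule.

```python
# Illustrative sketch only: the point values, bonus points, bonus cap, and
# number of required measures below are hypothetical, not figures from this rule.

def quality_category_score(measure_points, bonus_points, bonus_cap, total_possible_points):
    """Sum of scored-measure points plus capped bonus, divided by total possible points."""
    capped_bonus = min(bonus_points, bonus_cap)
    return (sum(measure_points) + capped_bonus) / total_possible_points

# Example: six scored measures worth up to 10 points each (60 possible points),
# with 4 hypothetical bonus points against an assumed cap of 6.
points = [10.0, 8.4, 7.0, 3.0, 9.2, 6.5]
score = quality_category_score(points, bonus_points=4, bonus_cap=6, total_possible_points=60)
print(round(score, 4))  # 0.8017, i.e., roughly 80 percent of the possible quality points
```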

    In section II.E.6.b of the proposed rule (81 FR 28269), we discussed how we would score MIPS eligible clinicians who do not have any scored measures in the quality performance category. The details of the proposed scoring methodology for the quality performance category are described below.

    (a) Quality Measure Benchmarks

    For the quality performance category, we proposed at section II.E.6.a.(2)(a) of the proposed rule (81 FR 28251) that the performance standard is measure-specific benchmarks. Benchmarks would be determined based on performance on measures in the baseline period. For quality performance category measures for which there are baseline period data, we proposed to calculate an array of measure benchmarks based on performance during the baseline period, breaking baseline period measure performance into deciles. Then, a MIPS eligible clinician's actual measure performance during the performance period would be evaluated to determine the number of points that should be assigned based on where the actual measure performance falls within these baseline period benchmarks. If a measure does not have baseline period information (for example, new measures), or if the measure specifications for the baseline period differ substantially from the performance period (for example, when the measure requirements change due to updated clinical guidelines), then we proposed to determine the array of benchmarks based on performance on the measure in the performance period, breaking the actual performance on the measure into deciles. In addition, we proposed to create separate benchmarks for submission mechanisms that do not have comparable measure specifications. For example, several eCQMs have specifications that are different than the corresponding measure from registries. We proposed to develop separate benchmarks for EHR submission mechanisms, claims submission mechanisms, and QCDRs and qualified registry submission mechanisms.
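    As a rough illustration of the benchmarking approach described above, the sketch below breaks a set of baseline period performance rates into deciles and locates a performance period rate within those decile breaks. The performance rates are invented for demonstration and do not correspond to any actual measure.

```python
# Minimal sketch of building decile benchmarks from baseline period performance
# rates. The rates below are invented for illustration only.
import bisect
import statistics

baseline_rates = [12.0, 18.5, 25.0, 33.0, 38.0, 45.0, 52.0, 60.0, 66.0, 71.0,
                  75.0, 80.0, 83.0, 86.0, 88.0, 90.0, 92.0, 95.0, 97.0, 99.0]

# quantiles(..., n=10) returns the nine cut points that separate the data into deciles.
decile_breaks = statistics.quantiles(baseline_rates, n=10)

def decile_for(performance_rate):
    """Return the decile (1-10) into which a performance period rate falls."""
    return bisect.bisect_right(decile_breaks, performance_rate) + 1

print(decile_breaks)
print(decile_for(87.0))  # a rate of 87.0 falls in one of the upper deciles
```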

    For CMS Web Interface reporting, we proposed to use the benchmarks from the Shared Savings Program as described at https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/sharedsavingsprogram/Quality-Measures-Standards.html, which were finalized in previous rulemaking.21 We proposed to adopt the Shared Savings Program performance year benchmarks for measures that are reported through the CMS Web Interface for the MIPS performance period, but proposed to apply the MIPS method of assigning 1 to 10 points to each measure as an alternative to calculating separate MIPS benchmarks. Because the Shared Savings Program does not publicly post or use benchmarks below the 30th percentile, we proposed to assign all scores below the 30th percentile a value of 2 points, which is consistent with the mid-cluster approach we proposed for topped out measures. We believed using the same benchmarks for MIPS and the Shared Savings Program for the CMS Web Interface measures would be appropriate because, as is discussed in the proposed rule (81 FR 28237 through 28243), we proposed to use the MIPS benchmarks to score MIPS eligible clinicians in the Shared Savings Program and the Next Generation ACO Model on the quality performance category and believe it is important to not have conflicting benchmarks. We would post the MIPS CMS Web Interface benchmarks with the other MIPS benchmarks.

    21 Shared Savings Program quality performance benchmarks and scoring methodology regulations: Medicare Program; Medicare Shared Savings Program: Accountable Care Organizations; Final Rule, 76 FR 67802 (Nov. 2, 2011). Medicare Program; Revisions to Payment Policies under the Physician Fee Schedule, Clinical Laboratory Fee Schedule & Other Revisions to Part B for CY 2014; Final Rule, 78 FR 74230 (Dec. 10, 2013). Medicare Program; Revisions to Payment Policies under the Physician Fee Schedule, Clinical Laboratory Fee Schedule & Other Revisions to Part B for CY 2015; Final Rule, 79 FR 67907 (Nov. 13, 2014). Medicare Program; Revisions to Payment Policies under the Physician Fee Schedule, Clinical Laboratory Fee Schedule & Other Revisions to Part B for CY 2016; Final Rule, 80 FR 71263 (Nov. 16, 2015).

    As an alternative approach, we considered creating CMS Web Interface specific benchmarks for MIPS instead of using the Shared Savings Program benchmarks. This alternative approach for MIPS benchmarks would be restricted to CMS Web Interface reporters and would not include other MIPS data submission methods or other data sources which are currently used to create the Shared Savings Program benchmarks. This alternative would also apply the topped out cluster approach if any measures are topped out. While we see benefit in having the CMS Web Interface methodology match the other MIPS benchmarks, we are also concerned about the Shared Savings Program and the Next Generation ACO Model participants having conflicting benchmark data. We requested comments on building CMS Web Interface specific benchmarks.

    We proposed that all MIPS eligible clinicians, regardless of whether they report as an individual or group, and regardless of specialty, that submit data using the same submission mechanism would be included in the same benchmark. We proposed to unify the calculation of the benchmark by using the same approach as the VM of weighting the performance rate of each MIPS eligible clinician and group submitting data on the quality measure by the number of beneficiaries used to calculate the performance rate so that group performance is weighted appropriately (77 FR 69321 through 69322). We would also include data from APM Entity submissions in the benchmark but would not score APM Entities using the MIPS scoring methodology. For APM scoring, we refer to section II.E.5.h. of the proposed rule (81 FR 28234).

    To ensure that we have robust benchmarks, we proposed that each benchmark must have a minimum of 20 MIPS eligible clinicians who reported the measure meeting the data completeness requirement defined in section II.E.5.b.(3) of the proposed rule (81 FR 28185), as well as meeting the required case minimum criteria for scoring that is defined later in this section. We proposed a minimum of 20 because, as discussed below, our benchmarking methodology relies on assigning points based on decile distributions with decimals. A decile distribution requires at least 10 observations. We doubled the requirement to 20 so that we would be able to assign decimal point values and minimize cliffs between deciles. We did not want to increase the benchmark sample size requirement due to concerns that an increase could limit the number of measures with benchmarks.

    We also proposed that MIPS eligible clinicians who report measures with a performance rate of 0 percent would not be included in the benchmarks. In our initial analysis, we identified some measures that had a large cluster of eligible clinicians with a 0 percent performance rate. We were concerned that the 0 percent performance rate represents clinicians who are not actively engaging in that measurement activity. We did not want to inappropriately skew the distribution. We solicited comment on whether or not to include 0 percent performance in the benchmark.
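    The eligibility conditions described in the two preceding paragraphs can be summarized in a short sketch: a benchmark would be built only if at least 20 reporters meet the data completeness and case minimum requirements, and 0 percent performance rates would be excluded. The data structure and field names below are assumptions for illustration only.

```python
# Hypothetical sketch of the proposed benchmark eligibility screen: keep
# submissions that meet data completeness and the case minimum, drop 0 percent
# performance rates, and require at least 20 remaining reporters.
MIN_REPORTERS = 20

def benchmark_rates(submissions, case_minimum):
    rates = [s["rate"] for s in submissions
             if s["meets_data_completeness"]
             and s["case_count"] >= case_minimum
             and s["rate"] > 0.0]
    # If fewer than 20 eligible reporters remain, no benchmark is created.
    return rates if len(rates) >= MIN_REPORTERS else None
```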

    We proposed at § 414.1380(b)(1)(i) to base the benchmarks on performance in the baseline period when possible. We proposed to publish the numerical benchmarks, when possible, prior to the start of the performance period. In cases where we do not have comparable data from the baseline period, we proposed to use information from the performance period to establish benchmarks. While the benchmark methodology would be established in a final rule in advance of the performance period, we proposed that the actual numerical benchmarks would not be published until after the performance period for quality measures that do not have comparable data from the baseline period. The methodology for creating the benchmarks was discussed in the proposed rule (81 FR 28251).

    We considered not scoring measures that either are new to the MIPS program or do not have a historical benchmark based on performance in the baseline period. This policy would be consistent with the VM policy in which we do not score measures that have no benchmark (77 FR 69322). However, in the proposed rule (81 FR 28252), we expressed concern that such a policy could stifle reporting on innovative new measures because it would take several years for the measure to be incorporated into the performance category score. We also believed that any issues related to reporting a new measure would not disproportionately affect the relative performance between MIPS eligible clinicians.

    We also considered a variation on the scoring methodology that would provide a floor for a new MIPS measure. Under this variation, if a MIPS eligible clinician reports a new measure under the quality performance category, the MIPS eligible clinician would not score lower than 3 points for that measure. This would encourage reporting on new measures, but also prevent MIPS eligible clinicians from receiving the lowest scores for a new measure, while still measuring variable performance. Finally, we also considered lowering the weight of a new measure, so that new measures would contribute relatively less to the score compared to other measures. In the end, we did not propose the alternatives we considered because we wanted to encourage adoption and measured performance of new measures; however, we did request comment on these alternatives, including comments on what the lowest score should be for MIPS eligible clinicians who report a new measure under the quality performance category and protections against potential gaming related to reporting only new measures. We also sought comments on alternative methodologies for scoring new measures under the quality performance category, which would assure equity in scoring between the methodology for measures for which there is baseline period data and for new measures which do not have baseline period data available.

    Finally, we clarified that some PQRS reporting mechanisms have limited experience with all-payer data. For example, under PQRS, all-payer data was permitted only when reporting via registries for measure groups; reporting via registries for individual measures was restricted to Medicare only. Under MIPS, however, we proposed to have more robust data submissions, as described in section II.E.5.b.(3) of the proposed rule (81 FR 28188). We recognized that comparing all-payer performance to a benchmark that is built, in part, on Medicare data is a limitation and noted we would monitor the benchmarks to see if we need to develop separate benchmarks. We also noted that this data issue would resolve in a year or two, as new MIPS data becomes the historical benchmark data in future years.

    The following is a summary of the comments we received regarding our proposals for quality measure benchmarks.

    Comment: Commenters generally supported our proposed approach: some commenters supported the establishment of separate benchmarks for submission mechanisms that do not have comparable measure specifications, and another supported using national benchmarks and linear-based scoring in the MIPS performance scoring methodology.

    Response: We agree with commenters and are finalizing at § 414.1380(b)(1)(iii) the establishment of separate benchmarks for the following submission mechanisms: EHR submission options; QCDR and qualified registry submission options; claims submission options; CMS Web Interface submission options; CMS-approved survey vendor for CAHPS for MIPS submission options; and administrative claims submission options. We note that the administrative claims benchmarks are for measures derived from claims data, such as the readmission measure. As discussed below, the CMS Web Interface submission benchmarks will be the same as the Shared Savings Program benchmarks for the corresponding Shared Savings Program performance period. We note that assigning separate benchmarks in this manner creates opportunities for clinicians to achieve higher quality scores by selectively choosing submission mechanisms; as discussed in section II.E.5.a.(2) in this final rule with comment period, we intend to monitor for such activity and to report back on any findings from our monitoring in future rulemaking.

    Comment: Commenters requested that CMS provide each measure's benchmarks in advance, with one recommending that CMS do so in the final rule and in future proposed rules so that MIPS eligible clinicians know their target goals or, alternatively, that CMS hold a listening session for input on benchmarks for each measure. The commenters stated that they did not want to be held accountable for performance if benchmarks cannot be provided in advance. One commenter noted that it would be difficult to gauge performance and areas for improvement since benchmarks would not be released in time and real time feedback is needed.

    Response: We agree with commenters that quality benchmarks should be made public and should be known in advance when possible so that MIPS eligible clinicians can understand how they will be measured. We are finalizing that measure benchmarks are based on historical performance for the measures during a baseline period. Those benchmarks will be known in advance of the performance period. We are finalizing this approach with one exception. The CMS Web Interface will use benchmarks from the corresponding performance year of the Shared Savings Program and not the baseline year. Those benchmarks are also known in advance of the performance period.

    When no comparable data exists from the baseline period, we are finalizing that we will use information from the performance period (CY 2017 for the transition year, during which MIPS eligible clinicians may report for a minimum of any continuous 90-day period, as discussed in section II.E.4 of this final rule with comment period) to assess measure benchmarks. In this case, while the benchmark methodology is being finalized in this final rule with comment period, the numerical benchmarks will not be known in advance of the performance period. However, as discussed throughout this final rule with comment period, we have added protections for MIPS eligible clinicians against unexpectedly poor performance scores, particularly in the transition year.

    Comment: Some commenters did not support the use of 2015 data or other historical data to set the 2017 benchmarks, with one commenter stating that CMS would be using data from periods during which MIPS did not exist and requesting that CMS establish an adequate foundation for benchmarks based on MIPS data. One commenter recommended that CMS not set benchmarks or hold clinicians accountable for performance until it has established an adequate foundation based on MIPS data. Another emphasized using reliable and valid patient sample sizes or an adequate foundation of data to determine benchmarks, even if only for a limited number of measures.

    Response: In establishing the performance standards, we had to choose between two feasible alternatives: Either develop benchmarks based on historical data and provide the numerical benchmarks in advance of the performance period; or use more current data for benchmarks and not provide the numerical benchmarks in advance of the performance period. We believe there is more value in providing advance notice for quality performance category measures so that MIPS eligible clinicians can set a clear performance goal for these measures, provided that historical data is available. In many cases, MIPS quality measures are the same as those available under PQRS, so we believe that using PQRS data is appropriate for a MIPS benchmark. In contrast, we do not believe there is more value in providing advance notice for cost performance category measures since the claims data for the cost performance category can vary due to payment policies, payment rate adjustments, and other factors. Therefore, we believe having the cost performance category measures based on performance period data will be more beneficial to MIPS eligible clinicians given that it is based on more current data. For the cost performance category, we believe it is more beneficial to base benchmarks on the performance period.

    Comment: A few commenters opposed our benchmarking approach, with some opposing our proposal to separate benchmarks solely by submission mechanism given that medical groups vary by size, location, specialty and other factors which should be built into developing the benchmarks. Commenters recommended specialty-specific benchmarks, benchmarking by region, and benchmarks based on group size (for example, groups with 10-50 clinicians, 51-100 clinicians, 101-500 clinicians, 501-1,000 clinicians, and >1,000 clinicians). In other words, commenters did not believe in one overall benchmark but rather that groups should be compared only to other similar groups (for example, APM entities to APM entities, individuals to individuals, clinicians by specialty and groups to groups, small practices to small practices, or region by region).

    Response: We want the benchmarks to be as broad and inclusive as possible and to establish a single performance standard whenever the measure specifications are comparable. We finalized separate benchmarks by submission mechanism only when the differences in specifications make comparisons less valid. We do not believe differences in specialty, group size, and region create an inherent need for separate benchmarks as the specifications are comparable across each of these categories. Furthermore, we do not expect differences in location, practice size, and other characteristics to impact the quality of care provided. We also want to keep robust sample sizes in each benchmark, and stratifying a benchmark by different characteristics would risk fragmenting the sample size in such a manner that we do not have a valid benchmark for some measures.

    We estimated quality performance scores by practice size based on historical data and did not see a systematic difference in performance by practice size among MIPS eligible clinicians that submitted complete and reliable data that would require separate benchmarks. However, as we monitor the MIPS program, we will continue to evaluate whether we need to further refine and stratify the benchmarks.

    Comment: One commenter recommended that CMS should analyze the quality performance data by looking at Medicare and non‐Medicare populations separately, and should also examine whether stratifying the performance data by specialty code, site‐of‐service code, or both will result in more accurate measurement and fair adjustments for physicians who treat the sickest patients.

    Response: We want accurate and fair measurement in the MIPS program. We have incorporated measures that have gone through public review. In many cases, we believe the measure developers have considered scenarios where risk adjustment is required to consider mix of patient population and site-of-service and do not believe we need a separate universal policy to further stratify performance by patient mix, specialty, or site of service for all measures. As we move through the transition year, however, we will continue to evaluate the need for additional adjustments or stratification for informational purposes and would make any proposed adjustments through future rulemaking.

    Comment: One commenter expressed their belief that integrating data from MIPS eligible clinicians participating in MIPS APMs with data from MIPS eligible clinicians who do not participate in APMs will skew the universe of reported data toward better performance, as MIPS APM participants tend to be more advanced and well resourced, putting MIPS eligible clinicians who do not participate in APMs at a disadvantage in scoring. The commenter recommended segregating such data for purposes of setting MIPS benchmarks for 2019 payment adjustments.

    Response: As discussed above, we believe in having benchmark datasets that are as inclusive and robust as possible. We note that we are building benchmarks by comparable submission mechanism and not all submission mechanisms will have APM data; however, we believe it is important to include APM participants when comparable information is available because the benchmark represents the true distribution of performance. We do not want to establish separate, potentially lower, standards of care for clinicians who are not in APMs. In addition, as more MIPS eligible clinicians transition to APMs, we may not have sufficient volume to create benchmarks based on MIPS eligible clinicians alone.

    Comment: A few commenters believed CMS should not allow a “new” physician's quality measure performance to count against the practice under the Quality Payment Program if the physician has not been with that practice for more than 6 months. Another commenter recommended that CMS allow physicians who practice for less than 12 months to self-identify so that their scoring can take into account the physician's limited data.

    Response: We appreciate the commenters' feedback and will restrict the data for the benchmarks to MIPS eligible clinicians and, as discussed above, the benchmarks will include comparable APM data, including data from QPs and Partial QPs. We believe these steps will help ensure the validity and completeness of the benchmark data.

    Comment: Some commenters expressed concern regarding the comparability of measures from different EHR vendor systems. One commenter noted that data submitted from different EHR vendor systems may use different methodologies, as well as inconsistent numerators and denominators, and will therefore not be comparable across systems and clinicians. This commenter recommended that CMS work with ONC to standardize data submitted to Medicare across a number of vendor systems. Another commenter requested that CMS incorporate work by medical societies to implement guides to ensure eCQM calculations and benchmarks are accurate and that different EHRs are accurately capturing eCQMs. Another commenter cautioned that eCQMs are not uniformly calculated across EHRs, as several different administrative code sets are used. This commenter recommended that CMS create standards and mapping tools to facilitate working across these different codes, ensure consistency when EHR data is exchanged, and ensure eCQM calculations and benchmarks are accurate. The commenter also noted that EHRs vary in how accurately they capture eCQMs.

    Response: To date, there have been issues with EHR data accuracy and consistency. We have worked with ONC to address these issues through public feedback mechanisms, the availability of tools to support eCQM testing and value set uploads, and by encouraging vendors to consume the health quality measure format (HQMF) measure specifications directly. As these improvements reach all systems in use by providers, we expect to see improvements in eCQM consistency. We will continue to work with ONC to consider eliminating transitional code systems to further improve alignment of the eCQM data elements, and we will continue to engage with sites and stakeholder organizations to identify methods to further ensure consistency across sites and systems.

    Comment: Commenters generally supported our proposal to use the Shared Savings benchmarks for CMS Web Interface. One commenter supported our alternative approach of building our own benchmarks for CMS Web Interface measures.

    Response: We appreciate the commenters' support and are finalizing our proposal to use the Shared Savings Program benchmarks for the CMS Web Interface. However, as we discuss in more detail below, we are adding a floor of 3 points for each measure for the transition year. Therefore, any values that are below the 30th percentile will receive a score of 3 points.

    Comment: Some commenters agreed that 0 percent performance rates should be excluded from benchmark calculations. One commenter suggested including 0 percent performance rates in benchmark calculations but distinguishing data that was intentionally submitted from data that was unintentionally submitted through EHR reporting. Another commenter suggested rewarding clinicians that reported on a measure if more than 50 percent of MIPS eligible clinicians reported zero on that measure, noting that removing zeroes would artificially increase the benchmark for any given measure.

    Response: We appreciate that in some circumstances a 0 performance rate may be a valid score; however, we are also concerned about skewing the distribution with potentially inaccurate scores. We are finalizing the policy to exclude 0 percent scores from the benchmarks for the transition year. We will continue to evaluate the impact of 0 percent scores on benchmarks. However, as described below, we are adding a floor for the transition year of MIPS, which will limit the effect of this adjustment on MIPS eligible clinicians' scores.

    Comment: One commenter did not agree with our proposal to use the Value Modifier approach to weight the performance of individuals and groups by the number of beneficiaries to create a single set of benchmarks. The commenter was concerned about combining both individuals and groups into one set of benchmarks. The commenter recommended simplifying the performance standards and incorporating aspects of the Shared Savings Program and VM into this MIPS category.

    Response: As discussed above, we believe that both individuals and groups reporting through the same submission mechanism are comparable, as the measure specifications are similar. In the proposed rule, we proposed to combine the group and individual data into a single benchmark by using the VM approach of patient weighting. However, after further analysis, we do not believe this approach is appropriate for the MIPS program.

    The VM defines relative performance as statistical difference from the mean for a measure, and weights each clinician's performance rate by the number of beneficiaries to identify the average score for a measure, a single unit. However, unlike the VM, in MIPS, we are not defining relative performance by using a single point, but rather a percentile distribution of the reliable clinician summary performance scores. We have taken steps to ensure that each clinician or group score meets certain standards to promote reliability at the group or individual clinician level. For example, the group or individual reporter must meet certain case volume and data completeness standards to be included in the MIPS benchmark. In MIPS, weighting individual or group values by the number of patients is similar to cloning or replicating that individual or group score in the percentile distribution. In a distribution benchmark, weighting will not have an impact in the following cases: When the distribution of scores is highly compressed (low variance); the distribution of cases is highly compressed (such as when all practices have fairly similar numbers of cases); or when the number of practices is large relative to the typical number of eligible cases for any practice for the measure. However, the difference between unweighted and weighted benchmarks is more likely to have an impact when the number of eligible cases and corresponding performance scores vary widely across practices. The difference will be exacerbated if there are relatively few practices and/or if practices with especially high or low scores also have a disproportionately large number of cases. For example, assume a given benchmark has one large group and several smaller groups and individual reporters. The large group cares for 20 percent of the beneficiaries represented in the benchmark. If we weight the benchmark by patient weight, then another MIPS eligible clinician with a score just above or just below the large group's performance rate will have a score that is different by a point or two, not because of differences in performance but because of differences in the number of beneficiaries cared for by the group or individual MIPS eligible clinician.

    Therefore, we are not finalizing our proposal to patient weight the benchmarks. Instead, we will count each submission, either by individual or group, as a single data point for the benchmark. We believe this data is reliable and the revision simplifies the combination of group and individual performance.
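    To make the difference between the proposed patient-weighted benchmark and the finalized unweighted benchmark concrete, the following sketch compares a percentile distribution in which one large group's rate is replicated in proportion to its beneficiary count against one in which each submission counts once. All figures are invented for illustration.

```python
# Invented example contrasting the proposed patient-weighted benchmark (each
# submission replicated in proportion to its beneficiary count) with the
# finalized unweighted benchmark (each submission counted once).
import statistics

submissions = [
    {"rate": 55.0, "beneficiaries": 2000},  # one large group
    {"rate": 70.0, "beneficiaries": 150},
    {"rate": 80.0, "beneficiaries": 120},
    {"rate": 85.0, "beneficiaries": 100},
    {"rate": 90.0, "beneficiaries": 90},
]

unweighted = [s["rate"] for s in submissions]
weighted = [s["rate"] for s in submissions for _ in range(s["beneficiaries"])]

print(statistics.quantiles(unweighted, n=10))  # each reporter counts once
print(statistics.quantiles(weighted, n=10))    # the large group dominates the deciles
```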

    Comment: Some commenters did not agree with our proposal to use performance period data to set benchmarks in instances where the measure is a new measure or there is a change to an existing measure. Instead, one of these commenters recommended simply giving credit for reporting the measure. Another commenter recommended that new measures receive a score equal to the 90th percentile if the reporting rates are met. Another commenter supported not scoring new quality measures until 2 years after introduction. Another commenter recommended that MIPS eligible clinicians reporting new measures be held harmless from negative scoring.

    Response: To encourage meaningful measurement, we want to score all available measures for performance, including new measures. However, because new measures would not have a benchmark available prior to the start of the performance period, we are creating a 3-point new measure floor specifically for new measures and measures without a benchmark based on baseline period data. This floor would be available annually to any measure without a published benchmark. Generally, we would expect new measures to have the 3-point floor for the first 2 years until we get baseline data for that measure. This approach helps to ensure that MIPS eligible clinicians are protected from a poor performance score that they would not be able to anticipate. As we discuss in section II.E.6.a.(2)(b) below, we are also setting a global 3-point floor for all submitted measures during the transition year. We would like to note that the global 3-point floor for all measures is a policy for the transition year of MIPS. In contrast, the new measure 3-point floor for measures without a previously published benchmark, such as new measures, would be available in future years of MIPS and not just the transition year. We also note that the new measure 3-point floor for measures without a previously published benchmark is different from the treatment of class 2 measures, as defined later in section II.E.6.a.(2)(c) of this rule and summarized in Table 17, which lack a benchmark because we do not have a minimum of 20 MIPS eligible clinicians who reported the measure meeting the case minimum and data completeness requirements. The new measure 3-point floor allows MIPS eligible clinicians to be scored on performance, with the lowest possible score for a measure being 3 points and the highest possible score being 10 points, assuming the new measure has a benchmark and the MIPS eligible clinician has met the case minimum and data completeness criteria. However, the scoring for class 2 measures, as defined in Table 17, is not a floor but rather an automatic score of 3 points, in which MIPS eligible clinicians are not scored on performance and receive only 3 points for that measure.

    We considered giving a set number of points for submitting a new measure, rather than measuring performance. We do not think it is equitable to give the maximum performance score (a score equal to the 90th percentile or the top decile) when other eligible clinicians may receive fewer points based on performance.

    Comment: Many commenters expressed support for our alternative approach that if a MIPS eligible clinician reports a new measure under the quality performance category, the MIPS eligible clinician will not score lower than 3 points for that measure. One commenter agreed with the assessment that this would encourage clinicians to report new measures, prevent clinicians from gaming the system by reporting only on new measures to avoid being compared to a benchmark, and still incentivize better performance on the new measure. This commenter also expressed support for the alternative to weight new measures less than measures with existing benchmark data, stating that this will also accomplish the above goals. Two commenters recommended that CMS apply this minimum floor proposal both to the transition year in which the measure is available in MIPS and to the first time the eligible clinician reports on the measure. One commenter noted that this will encourage reporting on new measures and help mitigate potential unintended consequences.

    Response: We are finalizing the alternative approach for the scoring of new measures, or measures without a comparable historical benchmark, to have a floor of 3 points until baseline data can be utilized. We note that the floor only applies when the new measure does not have a benchmark based on baseline data and not the first time the eligible clinician reports on the measure in subsequent years.

    In addition, for the transition year (first year) only, we are also implementing a global floor of 3 points for all submitted quality measures, not only new measures. This floor, along with changes in the performance threshold, affords MIPS eligible clinicians the ability to learn about MIPS and be protected from a negative adjustment in the transition year for any level of performance.

    Comment: One commenter noted that, while ensuring that an eligible clinician reporting a new measure would not receive a score lower than three points may incentivize reporting of new measures, the commenter was concerned that doing so may artificially inflate the measure's benchmark, and adversely affect clinicians reporting the measure in year 2, during which time scoring would no longer be based on an inflated benchmark. This commenter recommended that CMS establish measure benchmarks based only on true measure performance instead of potentially inflated, incentivized performance.

    Response: We would like to note that the benchmarks are based on the performance rates for the measures, not on the assigned points. Therefore, the floor for new measures should not affect future benchmarks. Table 16 has an example of how the floor would work.

    Table 16—Example of Using Benchmarks for a Single Measure To Assign Points With a Floor of 3 Points

    Benchmark decile     | Sample quality measure benchmarks (%) | Possible points with 3-point floor | Possible points without 3-point floor
    Benchmark Decile 1   | 0.0-9.5                               | 3.0                                | 1.0-1.9
    Benchmark Decile 2   | 9.6-15.7                              | 3.0                                | 2.0-2.9
    Benchmark Decile 3   | 15.8-22.9                             | 3.0-3.9                            | 3.0-3.9
    Benchmark Decile 4   | 23.0-35.9                             | 4.0-4.9                            | 4.0-4.9
    Benchmark Decile 5   | 36.0-40.9                             | 5.0-5.9                            | 5.0-5.9
    Benchmark Decile 6   | 41.0-61.9                             | 6.0-6.9                            | 6.0-6.9
    Benchmark Decile 7   | 62.0-68.9                             | 7.0-7.9                            | 7.0-7.9
    Benchmark Decile 8   | 69.0-78.9                             | 8.0-8.9                            | 8.0-8.9
    Benchmark Decile 9   | 79.0-84.9                             | 9.0-9.9                            | 9.0-9.9
    Benchmark Decile 10  | 85.0-100                              | 10                                 | 10

    In this example, we still create an array of percentile distributions for benchmarks and decile breaks. However, where we would normally assign between 1.0 and 2.9 points for MIPS eligible clinicians with performance in the first or second deciles (in this example, performance between 0 and 15.7 percent), we will now assign 3.0 points. In future years, however, as baseline data becomes available for new measures, we would remove the floor and assign points less than 3, as illustrated above. For example, a performance rate of 9.6 percent (the start of the second decile) would receive 3.0 points with the floor and only 2.0 points without the floor. This methodology will not affect the scoring for MIPS eligible clinicians with performance in the third decile or higher. In addition, this methodology will not affect the calculation of future benchmarks. We do note, however, that if a MIPS eligible clinician consistently has poor performance, then by the time the baseline data can be used, the MIPS eligible clinician may receive fewer points because the floor has been removed.
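    A minimal sketch of the floor logic illustrated in Table 16 follows: points are assigned from the decile in which the performance rate falls and then raised to 3.0 if they would otherwise be lower. The decile breaks mirror the sample benchmark above; partial points within a decile are omitted to keep the example short.

```python
# Sketch of the 3-point floor illustrated in Table 16, using that table's
# sample decile breaks. Partial points within a decile are omitted for brevity;
# only the effect of the floor is shown.
import bisect

decile_breaks = [9.6, 15.8, 23.0, 36.0, 41.0, 62.0, 69.0, 79.0, 85.0]

def base_points(rate):
    """Decile number 1-10; the decile number is also the whole-point base for that decile."""
    return bisect.bisect_right(decile_breaks, rate) + 1

def points_with_floor(rate, floor=3.0):
    return max(base_points(rate), floor)

print(base_points(9.6), points_with_floor(9.6))    # 2 without the floor, 3.0 with it
print(base_points(50.0), points_with_floor(50.0))  # decile 6 is unaffected by the floor
```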

    After consideration of the comments on quality measure benchmarks, we are finalizing many policies as proposed. Specifically:

    • For quality measures for which baseline period data is available, we are establishing at § 414.1380(b)(1)(i) that measure benchmarks are based on historical performance for the measure during a baseline period. Each benchmark must have a minimum of 20 individual clinicians or groups who reported the measure, met the data completeness requirement and minimum case size criteria, and had performance greater than zero. We will restrict the benchmarks to data from MIPS eligible clinicians and, as discussed above, comparable APM data, including data from QPs and Partial QPs.

    We will publish the numerical baseline period benchmarks prior to the start of the performance period (or as soon as possible thereafter).

    • For quality measures for which there is no comparable data from the baseline period, we are establishing at § 414.1380(b)(1)(ii) that CMS will use information from the performance period to create measure benchmarks. We will publish the numerical performance period benchmarks after the end of the performance period. In section II.E.4 of this final rule with comment period, we are finalizing that for the transition year, the performance period will be a minimum of any continuous 90-day period within CY 2017. Therefore, for MIPS payment year 2019, we will use data submitted for performance in CY 2017, during which MIPS eligible clinicians may report for a minimum of any continuous 90-day period.

    • We are establishing at § 414.1380(b)(1)(iii) that separate benchmarks are used for the following submission mechanisms: EHR submission options; QCDR and qualified registry submission options; claims submission options; CMS Web Interface submission options; CMS-approved survey vendor for CAHPS for MIPS submission options; and administrative claims submission options. As discussed above, we are not stratifying benchmarks by other practice characteristics, such as practice size. For the reasons discussed above, we do not believe that there is a compelling rationale for such an approach, and we believe that stratifying could have unintended negative consequences for the stability of the benchmarks, equity across practices, and quality of care for beneficiaries. However, we continue to receive feedback that small practices should have a different benchmark, so we seek comment on any rationales for or against stratifying by practice size that we may not have considered.

    • We are establishing at § 414.1380(b)(1)(ii)(A) that the CMS Web Interface submission will use benchmarks from the corresponding reporting year of the Shared Savings Program. We will post the MIPS CMS Web Interface benchmarks in the same manner as the other MIPS benchmarks. We are not building CMS Web Interface-specific benchmarks for MIPS. We will apply the MIPS scoring methodology to each measure. Measures below the 30th percentile will be assigned a value of 3 points during the transition year to be consistent with the global floor established in this rule for other measures. We will revisit this global floor for future years.

    We are modifying our proposed policy with regard to patient weighting. Based on public comments, we are not finalizing our proposal to weight the performance rate of each MIPS eligible clinician and group submitting data on the quality measure by the number of beneficiaries used to calculate the performance rate. Instead, we will count each submission, either by an individual or group, as a single data point for the benchmark. We believe the original proposal could create potential unintended distortions in the benchmark. Therefore, we believe it is more appropriate to use a distribution of individual and group submissions that meet our criteria for reliable and valid data.

    We are also modifying our proposed policy for scoring new measures. Based on public comments, for the transition year and subsequent years of MIPS, we are adding protection against being unfairly penalized for poor performance on measures without benchmarks by finalizing a 3-point floor for new measures and measures without a benchmark. As discussed in more detail in the next section, for the transition year of MIPS we are also finalizing a 3-point floor for all submitted measures. We will revisit this policy in future years.

    (b) Assigning Points Based on Achievement

    We proposed in § 414.1380(b)(1)(x) of the proposed rule (81 FR 28251) to establish benchmarks using a percentile distribution, separated into deciles, because it translates measure-specific score distributions into a uniform distribution of MIPS eligible clinicians based on actual performance values. For each set of benchmarks, we proposed to calculate the decile breaks for measure performance and assign points for a measure based on the benchmark decile range in which the MIPS eligible clinician's performance rate on the measure falls. For example, MIPS eligible clinicians in the top decile would receive 10 points for the measure, and MIPS eligible clinicians in the next lower decile would receive points ranging from 9 to 9.9. We proposed to assign partial points to prevent performance cliffs for MIPS eligible clinicians near the decile breaks. The partial points would be assigned based on the percentile distribution.
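    The partial-point idea described above can be sketched as linear interpolation within a decile: a rate partway through a decile's range earns a proportional fraction of a point on top of the decile's base value. The interpolation formula below is an assumption about one way to implement the described behavior, not the exact formula used by CMS.

```python
# Hypothetical sketch of assigning partial points within a decile by linear
# interpolation. The exact interpolation used by CMS may differ; this only
# illustrates how partial points avoid cliffs at decile boundaries.

def partial_points(rate, decile_lower, decile_upper, decile):
    """Map a rate within [decile_lower, decile_upper) to decile + a fraction of a point."""
    if decile >= 10:
        return 10.0  # the top decile earns the full 10 points
    fraction = (rate - decile_lower) / (decile_upper - decile_lower)
    return round(decile + min(fraction, 0.9), 1)

# A rate halfway through decile 7 (62.0-68.9 in the sample benchmark) earns about 7.5 points.
print(partial_points(65.5, 62.0, 69.0, 7))
```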

    Table 17 of the proposed rule (81 FR 28252) illustrated an example of using decile points along with partial points to assign achievement points for a sample quality measure. We noted in the proposed rule (81 FR 28252) that any MIPS eligible clinician who reports some level of performance would receive a minimum of one point for reporting if the measure has the required case minimum, assuming the measure has a benchmark.

    We did not propose to base scoring on decile distributions for the same measure ranges as described in Table 17 of the proposed rule when performance is clustered at the high end (that is, “topped out” measures), as true variance cannot be assessed. MIPS eligible clinicians report on different measures and may elect to submit measures on which they expect to perform well. For MIPS eligible clinicians electing to report on measures where they expect to perform well, we anticipated many measures would have performance distributions clustered near the top. We proposed to identify “topped out” measures by using a definition similar to the definition used in the Hospital VBP Program: Truncated Coefficient of Variation 22 is less than 0.10 and the 75th and 90th percentiles are within 2 standard errors; 23 or median value for a process measure that is 95 percent or greater (80 FR 49550).24

    22 The 5 percent of MIPS eligible clinicians with the highest scores, and the 5 percent with lowest scores are removed before calculating the Coefficient of Variation.

    23 This is a test of whether the range of scores in the upper quartile is statistically meaningful.

    24 This last criterion is in addition to the HVBP definition.
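    A sketch of the topped-out screen described above, under the stated criteria: a truncated coefficient of variation below 0.10 with the 75th and 90th percentiles within 2 standard errors, or a median of 95 percent or greater for a process measure. The form of the standard error comparison shown here is a simplifying assumption for illustration, not the exact computation used by CMS.

```python
# Hypothetical sketch of the proposed topped-out screen. The standard error
# comparison for the 75th and 90th percentiles is a simplifying assumption,
# not the exact computation used by CMS.
import math
import statistics

def truncated_cv(rates):
    """Coefficient of variation after trimming the top and bottom 5 percent of scores."""
    rates = sorted(rates)
    trim = max(1, int(len(rates) * 0.05))
    trimmed = rates[trim:-trim]
    return statistics.stdev(trimmed) / statistics.mean(trimmed)

def is_topped_out(rates, is_process_measure):
    p75 = statistics.quantiles(rates, n=20)[14]   # 75th percentile
    p90 = statistics.quantiles(rates, n=10)[8]    # 90th percentile
    standard_error = statistics.stdev(rates) / math.sqrt(len(rates))  # assumed form
    clustered = truncated_cv(rates) < 0.10 and abs(p90 - p75) <= 2 * standard_error
    high_median = is_process_measure and statistics.median(rates) >= 95.0
    return clustered or high_median
```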

    Using quality measure data reported under the 2014 PQRS, we modeled the proposed benchmark methodology and found that approximately half of the measures proposed under the quality performance category are topped out. Several measures have a median score of 100 percent, which makes it difficult to assess the relative performance needed for the quality performance category score.

    However, we did not believe it would be appropriate to remove topped out measures at this time. As not all MIPS eligible clinicians would be required to report these measures under our proposals for the quality performance category in section II.E.5.b. of the proposed rule (81 FR 28184), it would be difficult to determine whether a measure is truly topped out or if only excellent performers are choosing to report the measure. We also believed removing such a large volume of measures would make it difficult for some specialties to have enough applicable measures to report. At the same time, we did not believe that the highest values on topped out measures convey the same meaning of relative quality performance as the highest values for measures that are not topped out. In other words, we did not believe that eligible clinicians electing to report topped out process measures should be able to receive the same maximum score as eligible clinicians electing to report preferred measures, such as outcome measures.

    Therefore, we proposed to modify the benchmark methodology for topped out measures. Rather than assigning up to 10 points per measure, we proposed to limit the maximum number of points a topped out measure can achieve based on how clustered the scores are. We proposed to identify clusters within topped out measures and would assign all MIPS eligible clinicians within the cluster the same value, which would be the number of points available at the midpoint of the cluster. That is, we proposed to take the midpoint of the highest and lowest scores that would pertain if the measure was not topped out and the values were not clustered. We proposed to only apply this methodology for benchmarks based on the baseline period. When we develop the benchmarks, we would identify the clusters and state the points that would be assigned when the measure performance rate is in a cluster. We proposed to notify MIPS eligible clinicians when those benchmarks are published with regard to which measures are topped out.

    We proposed this approach because we wanted to encourage MIPS eligible clinicians not to report topped out measures, but to instead choose other measures that are more meaningful. We also sought feedback on alternative approaches, including an alternative scoring methodology, to address topped out measures so that topped out measures do not disproportionately affect a MIPS eligible clinician's quality performance category score. Other alternatives could include placing a limit on the number of topped out measures MIPS eligible clinicians may submit or reducing the weight of topped out measures. We also considered whether we should apply a flat percentage in building the benchmarks, similar to the Shared Savings Program, where MIPS eligible clinicians are scored based on their performance rate as a percentage and not on a decile distribution, and we requested comment on how to apply such a methodology without providing an incentive to report topped out measures. Under the Shared Savings Program, 42 CFR 425.502, there are circumstances when benchmarks are set using flat percentages. For some measures, benchmarks are set using flat percentages when the 60th percentile was equal to or greater than 80.00 percent, effective beginning with the 2014 reporting year (78 FR 74759-74763). For other measures, benchmarks are set using flat percentages when the 90th percentile was equal to or greater than 95.00 percent, effective beginning in 2015 (79 FR 67925). Flat percentages allow those with high scores to earn maximum or near maximum quality points while allowing room for improvement and rewarding that improvement in subsequent years. Use of flat percentages also helps ensure those with high performance on a measure are not penalized as low performers. We also noted that we anticipate removing topped out measures over time, as we work to develop new quality measures that will eventually replace these topped out measures. We requested feedback on these proposals.
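    The flat-percentage alternative described above can be sketched as follows: rather than comparing a clinician to a decile distribution, points are assigned directly from the performance rate against fixed percentage cut points. The cut points below are invented to show the shape of the approach; they are not the Shared Savings Program's actual scales.

```python
# Illustrative sketch of the flat-percentage alternative: points come from fixed
# percentage cut points rather than a decile distribution. These cut points are
# invented; they are not the Shared Savings Program's actual scales.
FLAT_CUT_POINTS = [(90.0, 10), (80.0, 9), (70.0, 8), (60.0, 7),
                   (50.0, 6), (40.0, 5), (30.0, 4), (0.0, 3)]

def flat_percentage_points(performance_rate):
    for cut_point, points in FLAT_CUT_POINTS:
        if performance_rate >= cut_point:
            return points
    return 0

print(flat_percentage_points(92.5))  # 10 points
print(flat_percentage_points(55.0))  # 6 points
```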

    The following is a summary of the comments we received regarding our proposal to assign points based on achievement.

    Comment: Many commenters supported the use of the decile scoring method for non-topped-out measures, including the partial point allocation, but some cautioned that without stronger clarification, the scoring complexity would create considerable confusion among MIPS eligible clinicians. One commenter wanted to know how CMS would capture partial credit in the quality performance category. The commenter also wanted to know if there is a standardized grading scale used to determine where a clinician/practice might fall between 0-10 points.

    Response: We appreciate the support for the decile scoring. We are finalizing the decile scoring method for assigning points, but for the transition year, we are also adding a 3-point floor for all submitted measures, as well as for the readmission measure (if the readmission measure is applicable). This means that MIPS eligible clinicians will receive between 3 and 10 points per reported measure. We note that this scoring method allows partial credit because the MIPS eligible clinician can still achieve points even if the MIPS eligible clinician does not submit all the required measures. For example, if the MIPS eligible clinician has six applicable measures yet only submits two measures, then we will score the two submitted measures. However, the MIPS eligible clinician will receive a 0 for every required measure that is not submitted.
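    The partial-credit behavior described in this response can be sketched briefly: submitted measures are scored between 3 and 10 points in the transition year, and each required but unsubmitted measure contributes 0 points while still counting toward the denominator. All numbers below are illustrative.

```python
# Sketch of the transition-year partial-credit behavior described above: two of
# six required measures submitted. Submitted measures score between 3 and 10
# points; each unsubmitted required measure contributes 0 points but still
# counts toward the denominator. All numbers are illustrative.
submitted_points = {"measure_a": 8.4, "measure_b": 3.0}
required_measures = 6
points_per_measure = 10

numerator = sum(submitted_points.values())          # unsubmitted required measures add 0
denominator = required_measures * points_per_measure
print(f"{numerator}/{denominator} = {numerator / denominator:.2%}")
```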

    Comment: A few commenters requested that CMS not use quality-tiering in MIPS given that regardless of the investment in quality, most MIPS eligible clinicians will receive an average score.

    Response: We are not using the quality-tiering methodology in MIPS. We are shifting to the decile scoring system, and, unlike quality tiering, we expect performance to be along a continuum.

    Comment: Other commenters were concerned about the scoring criteria, which they believed would not offer guaranteed success just for reporting. Commenters stated that benchmarks and performance standards remain undefined and return on investment is uncertain and requested that CMS revise the quality scoring so that half of the quality score is granted to any practice that just attempts to report.

    Response: We would like to note that MACRA requires us to measure performance, not reporting. During this transition year, though, we believe it is important for MIPS eligible clinicians to learn to participate in MIPS, be rewarded for good performance, and be protected from being unfairly subjected to negative payment adjustments. Therefore, in addition to scoring measures on performance, we will give at least 3 points for each quality measure that is submitted under MIPS, as well as for the readmission measure (if the readmission measure is applicable). With the lowered performance threshold described in section II.E.7.c. of this final rule with comment period, this will ensure that MIPS eligible clinicians that submit quality data will receive at least a neutral payment adjustment or a small positive payment adjustment.

    Comment: A few commenters did not support the decile approach. One commenter proposed that CMS model quality scoring on the advancing care information performance category scoring with a target point total and the ability to exceed that total, and another commenter recommended using flat percentages. One commenter opposed using percentiles, deciles or any other rank-based statistics for performance ranking used for payment adjustments because it does not generate information on statistically significant performance at either end of the performance spectrum and hides real differences that could lead to effective quality improvement. The commenter also believed the proposed approach will always penalize a certain proportion of clinicians. This commenter recommended a methodology which uses some basis of statistical significance or classification based on the underlying spread of the distribution.

    Response: All scoring systems have limitations, but we believe the proposed scoring system is appropriate for MIPS. For measures for which there is baseline data, our scoring system bases the benchmarks on this data. This structure aligns with the HVBP and creates benchmarks that are achievable. In addition, we were striving for simplicity, and we believe that comparison to these benchmarks is straightforward for MIPS eligible clinicians to understand. This approach brings attention to measure performance and focuses on quality improvement. We did not propose the flat percentage option as not all measures are structured as a percentage. Finally, we elected not to base the benchmark distribution on statistical significance because those methods can be more difficult to explain, monitor, and track. We note also that relative performance is embedded in the MIPS payment adjustment, which is applied to the final score on a linear scale. We are finalizing at § 414.1380(b)(1)(ix) to score performance using a percentile distribution, separated by decile categories.

    Comment: One commenter encouraged CMS to incorporate health equity into a clinician's quality achievement score in future years.

    Response: We will consider this feedback in future rulemaking.

    Comment: One commenter requested clarification on how the CAHPS for MIPS survey would be scored. The commenter asked if CMS intended to create a single CAHPS for MIPS overall mean score roll-up or if CMS would score each summary survey measure (SSM) individually to create a CAHPS for MIPS average score.

    Response: Each SSM will have an individual benchmark. We will score each SSM individually and compare it against the benchmark to establish the number of points. The CAHPS score will be the average number of points across SSMs.
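    As an illustration only, the following minimal sketch shows how a CAHPS for MIPS score could be computed as the average of points earned on individually benchmarked SSMs; the SSM names and point values shown are hypothetical and are not drawn from this final rule with comment period.

        # Illustrative sketch only: the CAHPS for MIPS score is the average of
        # the points earned on each summary survey measure (SSM), where each SSM
        # is scored against its own benchmark. SSM names and point values below
        # are hypothetical examples, not program data.

        def cahps_for_mips_score(ssm_points):
            """Average the points earned across all scored SSMs."""
            if not ssm_points:
                raise ValueError("At least one scored SSM is required.")
            return sum(ssm_points.values()) / len(ssm_points)

        # Hypothetical points already assigned by comparing each SSM to its benchmark.
        example_ssm_points = {
            "getting_timely_care": 8.0,
            "provider_communication": 9.5,
            "care_coordination": 7.2,
        }

        print(round(cahps_for_mips_score(example_ssm_points), 1))  # 8.2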

    Comment: Many commenters supported retaining topped out measures and allowing topped out measures to be awarded the maximum number of points. Commenters emphasized that topped out measures allow more specialties to report and that the proposed lower point assignment for topped out measures would put clinicians with limited ability to report and track performance over time at a distinct disadvantage. For this reason, commenters recommended awarding equal points for topped out and non-topped out measures by maintaining the 10-point maximum value, at least in the transition year. Commenters also cited a lack of transparency in how topped out measures are identified, the existing complexity in the quality scoring approach, the fact that measures recognized as topped out nationally might not be topped out regionally or locally, and a belief that topped out measures are only reported by a small percentage of eligible physicians for any particular measure. Commenters recommended not removing topped out measures for at least 3 years, since that is the time needed to develop new measures to replace topped out measures and because some topped out measures are critical to clinical care; however, other commenters recommended removing topped out measures since such measures will not appropriately reward high performance. Another commenter requested a year's notice prior to removal.

    Response: We agree that MIPS eligible clinicians should understand which measures are topped out. Therefore, we are not going to modify scoring for topped out measures until the second year the measure has been identified as topped out. The first year that any measure can be identified as topped out is the transition year, that is, the CY 2017 performance period. Thus, we will not modify the benchmark methodology for any topped out measures for the CY 2017 performance period. We will modify the benchmark methodology for topped out measures beginning with the CY 2018 performance period, provided that it is the second year the measure has been identified as topped out. We seek comment on whether, for the second year a measure is identified as topped out, we should use a mid-cluster scoring approach, a flat rate percentage approach, or remove topped out measures at that time.

    Comment: Some commenters recommended that if topped out measures are to be scored differently, we should use the Shared Savings Program approach, not the Hospital VBP approach. One commenter suggested that CMS review these measures after the first performance period to re-evaluate topped out designations. One commenter noted that the methodology for distinguishing topped out measures is flawed since a narrow performance gap only means that performance is high for the cohort of reporting providers and does not reflect the performance of the rest of the population to whom the measure may be applicable. This commenter stated that many of the measures that CMS had deemed topped out were not implemented in PQRS long enough for robust data to have been collected to confirm that designation and thus requested that CMS remove the topped out designation.

    Response: As noted above, we are not creating a separate scoring system for topped out measures until the second year that the measure has been identified as topped out based on the baseline quality scores (for example, 2015 performance for the 2017 performance year). Our methodology for selecting topped out measures uses all information available to us. Because we offer the flexibility for most MIPS eligible clinicians to select the measures most relevant to their practice, we generally cannot assess the performance of clinicians on measures that the clinicians do not elect to submit. However, we can assess the performance of clinicians for the readmission measure which is not submitted but which is calculated from administrative claims data. We note that we are not removing topped out measures and that the designation can change if data collection practices and results change. We recognize that the MIPS scoring algorithm may not work as well for topped out measures; however, for the transition year, we have added protections in place to ensure that MIPS eligible clinicians who report at least one quality measure are protected from being unfairly subjected to a negative adjustment. We also intend to reduce the number of topped out measures in MIPS in future years.

    Comment: Commenters requested more transparency in how topped out measures were identified and stressed the importance of identifying topped out measures and the benchmarks for each before finalizing a separate scoring system for such measures. Some commenters recommended listing topped out measures in the final rule with comment period and defining the rationale for maintaining them, and stated that if advance notice is not possible, topped out measure points should not be reduced. One commenter recommended that we allow the public to provide feedback before designating a measure as topped out to explain why it might appear as such. Another commenter noted that insufficient data is available to determine whether a measure is truly topped out or whether only high performers might have chosen to report a given measure.

    Response: We agree that MIPS eligible clinicians should understand which measures are topped out. We will take these comments into consideration for future rulemaking. As discussed above, we are not going to modify scoring for topped out measures until the second year the measure has been identified as topped out.

    We plan to identify topped out measures for benchmarks based on the baseline period when we post the detailed measure specifications and the measure benchmarks prior to the start of the performance period. This will count as the first year a measure is identified as topped out. The second year the same measure is topped out, we will apply a topped out measure scoring standard, beginning with performance periods occurring in 2018. We note that, as reflected above, we are seeking comment on the topped out measure scoring standard. We also plan to identify topped out measures for benchmarks based on the performance period.

    Comment: Most commenters recommended not limiting the number of topped out measures clinicians can submit, with one commenter asking for clarification on whether reporting additional topped out measures would allow a clinician to reach the maximum quality performance category score. Another commenter supported limiting MIPS eligible clinicians to reporting no more than two topped out measures to avoid potential “gaming”.

    Response: For the transition year of MIPS, we are not going to limit the number of topped out measures a clinician can submit. Thus, reporting topped out measures could potentially allow a clinician to reach the maximum quality performance category score since the MIPS eligible clinician could receive 10 points for each topped out measure submitted. We will continue to monitor and evaluate the impact of topped out measures and should we deem it necessary, we would propose a limitation of how many topped out measures could be reported through future rulemaking.

    Comment: One commenter recommended that CMS reweight topped out measures so as not to impose an unavoidable penalty on specialists. Another commenter suggested CMS re-evaluate and consider expanding its criteria for topped out measures to ensure clinicians' relative quality performance is fairly and accurately tied to payment, while still ensuring that specialists have a sufficient number of measures to select from under MIPS.

    Response: We share the concerns that topped out measures may disproportionately affect different specialties. We plan to publicly post which measures are topped out so that commenters will be able to plan accordingly. In addition, for the transition year of MIPS, we are not modifying the scoring for topped out measures. Instead, scoring for topped out measures will be the same as scoring for all other measures. We will continue to monitor and evaluate the impact of topped out measures by various MIPS eligible clinician practice characteristics. We will propose any additional policy changes through future rulemaking. Further, we encourage stakeholders to create new measures that can be used in the MIPS program to replace any topped out measures.

    Comment: One commenter recommended removing topped out measures from the CMS Web Interface measures.

    Response: We are not proposing to remove topped out measures for MIPS in the transition year, and we do not believe it would be appropriate to remove topped out measures from the CMS Web Interface. The CMS Web Interface measures are used in MIPS and in APMs such as the Shared Savings Program. We have aligned policies where possible, including using the Shared Savings Program benchmarks for the CMS Web Interface measures. We believe any modifications to the CMS Web Interface measures should be coordinated with the Shared Savings Program and go through rulemaking.

    Comment: One commenter was concerned about our comment in the proposed rule that approximately half of the MIPS quality measures are topped out and that several have a median score of 100 percent.

    Response: We share the commenter's concerns that so many measures are topped out and show little variation in performance. It is unclear whether this result is truly due to a lack of variation in performance or whether clinicians are only submitting measures on which they perform well. We believe that MIPS eligible clinicians generally should have the flexibility to select measures most relevant to their practice, but one trade-off is that not all MIPS eligible clinicians report the same measures. Because removing such a large volume of measures would make it difficult for some specialties to have enough applicable measures to submit, we are not removing these measures from MIPS. As discussed above, we will identify these measures for year 1, but we will not modify the scoring of topped out measures until the second year they have been identified.

    Comment: One commenter recommended that CMS identify topped out measures as measures with a median performance rate over 95 percent because the definition is easier to understand. Another commenter requested further clarification on the definition of topped out measures.

    Response: We agree that, for process measures that are scored between 0 and 100 percent, using a median greater than 95 percent is a simple way to identify topped out measures. For process measures, we are modifying our proposal to identify topped out measures as those with a median performance rate of 95 percent or higher. For other measures, we are finalizing our proposal to identify topped out measures by using a definition similar to the definition used in the Hospital VBP Program: Truncated Coefficient of Variation is less than 0.10 and the 75th and 90th percentiles are within 2 standard errors.

    Comment: One commenter recommended that CMS use historical data to analyze whether allowing clinicians to choose an unrestricted combination of six quality measures out of hundreds of measures would lead to a topped out effect among final scores, and to devise an alternative MIPS measure selection methodology should it find that average final scores are universally inflated. The commenter also recommended that CMS remove topped out measures from the list of quality measures that MIPS eligible clinicians have to choose from, as measures that generate universally high performance scores fail to appropriately reward performance with higher payment.

    Response: We plan to continue evaluating the impact of topped out measures in the MIPS program. Because removing such a large volume of measures would make it difficult for some specialties to have enough applicable measures to report, we are not removing these measures from MIPS in year 1. As discussed above, we will identify these measures for year 1, but we will not modify the scoring of topped out measures until the second year they have been identified.

    After consideration of the comments, we are not finalizing all of our policies as proposed.

    We are establishing that the performance standard with respect to the quality performance category is measure-specific benchmarks. Specifically, we are finalizing at § 414.1380(b)(1) that, for the 2017 performance period, MIPS eligible clinicians receive three to ten achievement points for each scored quality measure in the quality performance category based on the MIPS eligible clinician's performance compared to measure benchmarks. A MIPS quality measure must have a measure benchmark to be scored based on performance. MIPS quality measures that do not have a benchmark will not be scored based on performance. Instead, these measures will receive 3 points for the 2017 performance period.

    We are finalizing at § 414.1380(b)(1)(ix), that measures submitted by MIPS eligible clinicians are scored using a percentile distribution, separated by decile categories. As discussed below, for MIPS payment year 2019, topped out quality measures are not scored differently than quality measures that are not considered topped out. At § 414.1380(b)(1)(x), we finalize that for each set of benchmarks, CMS calculates the decile breaks for measure performance and assigns points based on which benchmark decile range the MIPS eligible clinician's measure rate is between. At § 414.1380(b)(1)(xi) we assign partial points based on the percentile distribution. In § 414.1380(b)(1)(xii) MIPS eligible clinicians are required to submit measures consistent with § 414.1335.

    Based on public comments, we are finalizing a modification to our proposal for the benchmark methodology for topped out measures. Specifically, we will not modify the benchmark methodology for topped out measures for the first year that the measure has been identified as topped out. Rather, for the first year the measure has been identified as topped out we will score topped out measures in the same manner as other measures until the second year the measure has been identified as topped out. The first year that any measure can be identified as topped out is the transition year, that is, the CY 2017 performance period. Thus, we will not modify the benchmark methodology for any topped out measures for the CY 2017 performance period. We will modify the benchmark methodology for topped out measures beginning with the CY 2018 performance period, provided that it is the second year the measure has been identified as topped out. We seek comment on how topped out measures would be scored provided that it is the second year the measure has been identified as topped out. One option would be to score the measures using a mid-cluster approach. Under this approach, beginning with the CY 2018 performance period, we would limit the maximum number of points a topped out measure can achieve based on how clustered the scores are. We would identify clusters within topped out measures and assign all MIPS eligible clinicians within the cluster the same value, which will be the number of points available at the midpoint of the cluster. That is, we would take the midpoint of the highest and lowest scores that would pertain if the measure were not topped out and the values were not clustered. We would only apply this methodology for measures with benchmarks based on the baseline period. When we develop the benchmarks, we would identify the clusters and state the points that would be assigned when the measure performance rate is in a cluster. We would notify MIPS eligible clinicians when those benchmarks are published with regard to which measures are topped out. Another approach would be to remove topped out measures in the CY 2018 performance period, provided that it is the second year the measure has been identified as topped out. In this instance, we would not score these measures. Finally, a third approach would be to apply a flat percentage in building the benchmarks for topped out measures, similar to the Shared Savings Program, where MIPS eligible clinicians are scored on the performance rate rather than their place in the performance rate distribution. We request comment on how to apply such a methodology without providing an incentive to report topped out measures. Under the Shared Savings Program, 42 CFR 425.502, there are circumstances when benchmarks are set using flat percentages. For some measures, benchmarks are set using flat percentages when the 60th percentile was equal to or greater than 80.00 percent, effective beginning with the 2014 reporting year (78 FR 74759-74763). For other measures benchmarks are set using flat percentages when the 90th percentile was equal to or greater than 95.00 percent, effective beginning in 2015 (79 FR 67925). Flat percentages allow those with high scores to earn maximum or near maximum quality points while allowing room for improvement and rewarding that improvement in subsequent years. Use of flat percentages also helps ensure those with high performance on a measure are not penalized as low performers. We seek comment on each of these three options. 
    Finally, we also note that we anticipate removing topped out measures over time as we work to develop new quality measures that will eventually replace them. We seek comment on when topped out measures should be removed from MIPS.
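    For illustration only, the sketch below shows one way a flat percentage approach could translate a measure's performance rate directly into points, in contrast to scoring a clinician's position in the performance rate distribution; the band cut points and point values are hypothetical assumptions, not thresholds adopted or proposed in this final rule with comment period.

        # Illustrative sketch only: under a flat percentage approach, points are
        # tied to the performance rate itself rather than to the clinician's
        # place in the performance rate distribution. The bands below are
        # hypothetical assumptions used only to show the mechanics.

        HYPOTHETICAL_FLAT_BANDS = [
            (90.0, 10),  # a performance rate of 90 percent or higher earns the maximum 10 points
            (80.0, 8),
            (70.0, 6),
            (60.0, 5),
            (0.0, 3),    # any submitted rate earns at least the transition-year 3-point floor
        ]

        def flat_percentage_points(performance_rate):
            """Map a 0-100 performance rate to points using fixed bands."""
            for band_floor, points in HYPOTHETICAL_FLAT_BANDS:
                if performance_rate >= band_floor:
                    return points
            return 3

        print(flat_percentage_points(96.0))  # 10
        print(flat_percentage_points(72.5))  # 6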

    We are modifying our proposed approach to identify topped out measures. We had proposed to identify all topped out measures by using a definition similar to the definition used in the Hospital VBP Program: Truncated Coefficient of Variation 25 is less than 0.10 and the 75th and 90th percentiles are within 2 standard errors; 26 or median value for a process measure that is 95 percent or greater (80 FR 49550).27 However, for process measures, we are defining at § 414.1305 topped out process measures as those with a median performance rate of 95 percent or higher. For other measures, we are defining at § 414.1305 topped out non-process measures using a definition similar to the definition used in the Hospital VBP Program: Truncated Coefficient of Variation is less than 0.10 and the 75th and 90th percentiles are within 2 standard errors.

    25 The 5 percent of MIPS eligible clinicians with the highest scores, and the 5 percent with lowest scores are removed before calculating the Coefficient of Variation.

    26 This is a test of whether the range of scores in the upper quartile is statistically meaningful.

    27 This last criterion is in addition to the HVBP definition.
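    As a rough, non-authoritative illustration of the finalized definitions, the sketch below applies the process-measure test (median performance rate of 95 percent or higher) and the non-process test (truncated coefficient of variation below 0.10 with the 75th and 90th percentiles within 2 standard errors). The trimming convention and the standard error formula are assumptions made for illustration; this final rule with comment period does not prescribe an exact computation.

        # Illustrative sketch only. The 5 percent trimming convention and the
        # standard error formula are assumptions; the rule describes the
        # criteria at a higher level.
        import statistics

        def truncated(scores, trim=0.05):
            """Drop the top and bottom 5 percent of scores (see footnote 25)."""
            ordered = sorted(scores)
            k = int(len(ordered) * trim)
            return ordered[k:len(ordered) - k] if k else ordered

        def is_topped_out(scores, is_process_measure):
            if is_process_measure:
                # Process measures: median performance rate of 95 percent or higher.
                return statistics.median(scores) >= 95.0
            trimmed = truncated(scores)
            mean = statistics.mean(trimmed)
            stdev = statistics.stdev(trimmed)
            cv = stdev / mean if mean else float("inf")
            # Assumed standard error of the mean for the truncated distribution.
            se = stdev / (len(trimmed) ** 0.5)
            percentiles = statistics.quantiles(trimmed, n=100)
            p75, p90 = percentiles[74], percentiles[89]
            return cv < 0.10 and (p90 - p75) <= 2 * se

        print(is_topped_out([96, 97, 98, 99, 100, 95, 97, 98], is_process_measure=True))  # True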

    In addition, as discussed in section II.E.6.a.(2)(a) of this final rule with comment period, we will add a global 3-point floor for all submitted measures for the transition year by assigning the decile breaks for measure performance between 3 and 10 points. Adding this floor responds to public comments requesting protection against being unfairly penalized for low performance. Table 16 in section II.E.6.a.(2)(a) illustrates an example of using decile points along with the addition of the 3-point floor to assign achievement points for a sample quality measure. The methodology in this example could apply to measures where the benchmark is based on the baseline period or for new measures where the benchmark is based on the performance period, assuming the measures meet the case minimum requirements and have a benchmark. We will continue to apply the new measure 3-point floor for measures without baseline period benchmarks for performance years after the first transition year. As discussed in section II.E.6.a.(2)(g)(ii) of this final rule with comment period, CMS Web Interface measures below the 30th percentile will be assigned a value of 3 points during the transition year to be consistent with other submission mechanisms. For the transition year, the 3-point floor will apply for all submitted measures regardless of whether they meet the case minimum requirements or have a benchmark, with the exception of measures submitted through the CMS Web Interface, which must still meet the case minimum requirements and have a benchmark in order to be scored. All submitted measures, regardless of submission mechanism, must meet the case minimum requirements, data completeness requirements, and have a benchmark in order to be awarded more than 3 points. We will revisit this policy in future years.
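    To make the mechanics concrete, here is a minimal sketch of assigning achievement points from benchmark decile breaks with the transition-year 3-point floor; the decile break values are hypothetical stand-ins for a published benchmark, and the partial-point interpolation shown is one plausible reading of the approach illustrated in Table 16, not the official algorithm.

        # Illustrative sketch only. The decile break values below are
        # hypothetical; actual benchmarks are published per measure before the
        # performance period, and the partial-point interpolation is an
        # assumption modeled on the example in Table 16.
        import bisect

        # Hypothetical lower bounds of benchmark deciles 3 through 10.
        HYPOTHETICAL_DECILE_BREAKS = [20.0, 35.0, 48.0, 60.0, 71.0, 80.0, 88.0, 95.0]

        def achievement_points(rate, breaks=HYPOTHETICAL_DECILE_BREAKS):
            """Return 3 to 10 achievement points for a scored (class 1) measure."""
            idx = bisect.bisect_right(breaks, rate)  # number of decile breaks the rate meets or exceeds
            if idx == 0:
                return 3.0   # below the decile 3 break: the 3-point floor applies
            if idx == len(breaks):
                return 10.0  # at or above the decile 10 break
            lower, upper = breaks[idx - 1], breaks[idx]
            fraction = (rate - lower) / (upper - lower)  # assumed partial-point interpolation
            return round(3 + (idx - 1) + fraction, 1)

        print(achievement_points(12.0))  # 3.0
        print(achievement_points(63.0))  # 6.3
        print(achievement_points(97.0))  # 10.0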

    We provide some examples below of the total possible points that MIPS eligible clinicians could receive under the quality performance category under our revised methodology. As described in section II.E.5.b. of this rule, MIPS eligible clinicians are required to submit six measures or measures from a specialty measure set, and we would also score MIPS eligible clinicians on the all-cause hospital readmission measure for groups of 16 or more with sufficient case volume (200 cases). The total possible points for the quality performance category would be 70 points for groups of 16 or more clinicians (6 submitted measures × 10 points + 1 all-cause hospital readmission measure × 10 points = 70). Further, the total possible points for small practices of 15 or fewer clinicians and solo practitioners and MIPS individual reporters (or for groups with less than 200 cases for the readmission measure) would be 60 points (6 submitted measures × 10 points = 60) because the all-cause hospital readmissions measure would not be applicable.

    However, for groups reporting via the CMS Web Interface that have sufficient case volume for the readmission measure, the total possible points for the quality performance category would vary between 120-150 points as discussed in Table 24 in section II.E.6.a.(2)(g)(ii) of this rule. If all measures are reported, then the total possible points are 120 points: (11 measures × 10 points) + (1 all-cause hospital readmission measure × 10 points) = 120 for those groups with sufficient case volume (200 cases) to be measured on readmissions. We discuss in section II.E.6.a.(2)(g)(ii) why the total possible points vary based on whether measures without a benchmark are reported. For other CMS Web Interface groups without sufficient volume for the readmission measure, the readmission measure will not be scored, and the total possible points for the quality performance category would vary between 110-140 points, instead of 120-150, as discussed in section II.E.6.a.(2)(g)(ii).
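    The totals above follow the same simple arithmetic; the sketch below merely reproduces the counts described in this section (10 possible points per scored measure, plus the readmission measure where it applies) and is not an official calculator.

        # Illustrative sketch only: total possible quality performance category
        # points for the scenarios described above (10 points per measure).

        def total_possible_points(num_submitted_measures, readmission_applies):
            """10 points per submitted measure, plus 10 points for the all-cause
            hospital readmission measure when it applies."""
            return num_submitted_measures * 10 + (10 if readmission_applies else 0)

        print(total_possible_points(6, readmission_applies=True))    # 70: group of 16 or more with 200 or more cases
        print(total_possible_points(6, readmission_applies=False))   # 60: small practice or individual reporter
        print(total_possible_points(11, readmission_applies=True))   # 120: CMS Web Interface group reporting all measures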

    (c) Case Minimum Requirements and Measure Reliability and Validity

    We seek to ensure that MIPS eligible clinicians are measured reliably; therefore, we proposed at § 414.1380(b)(1)(iv) to use for the quality performance category measures the case minimum requirements for the quality measures used in the 2018 VM (see § 414.1265): 20 cases for all quality measures, with the exception of the all-cause hospital readmissions measure, which has a minimum of 200 cases. We referred readers to Table 46 of the CY 2016 PFS final rule (80 FR 71282), which summarized our analysis of the reliability of certain claims-based measures used for the 2016 VM payment adjustment. MIPS eligible clinicians that report measures with fewer than 20 cases (and the measure meets the data completeness criteria) would receive recognition for submitting the measure, but the measure would not be included for MIPS quality performance category scoring. Since the all-cause hospital readmissions measure does not meet the threshold for what we consider to be moderate reliability for solo practitioners and groups of less than ten MIPS eligible clinicians for purposes of the VM (see Table 46 of the CY 2016 PFS final rule, referenced above), for consistency, we proposed to not include the all-cause hospital readmissions measure in the calculation of the quality performance category for MIPS eligible clinicians who individually report, as well as solo practitioners or groups of two to nine MIPS eligible clinicians.

    We also proposed that if we identify issues or circumstances that would impact the reliability or validity of a measure score, we would also exclude those measures from scoring. For example, if we discover that there was an unforeseen data collection issue that would affect the integrity of the measure information, we would not include that measure in the quality performance category score. If a measure is excluded, we would recognize that the measure had been submitted and would not disadvantage the MIPS eligible clinicians by assigning them zero points for a non-reported measure.

    The following is a summary of the comments we received regarding our proposal to score measures with minimum case volume and validity.

    Comment: Several commenters were generally supportive of the 20 case minimum requirement.

    Response: We appreciate the support from these commenters and are finalizing our proposed approach of a 20 case minimum requirement for all measures except the all-cause hospital readmission measure. We are keeping the 200 case minimum for the all-cause readmission measure; however, because we are defining small groups as those with 15 or fewer clinicians, we are revising our proposal, which would have excluded only solo practices and groups with 2-9 clinicians from the readmission measure. Rather, for consistency, we will not apply the readmission measure to solo practices, small groups (groups with 15 or fewer clinicians), or MIPS individual reporters.

    Comment: One commenter noted that clinicians attempting to participate, even if they are unable to meet the minimum case requirements, should still be acknowledged for making the attempt, especially if they are showing year-over-year improvement.

    Response: We agree that MIPS eligible clinicians should receive acknowledgement for participating; however, we also have to balance this with the ability to accurately measure performance. For the transition year, we are modifying our proposed approach on how we will score submitted measures that are unreliable because, for example, they are below the case minimum requirements. These measures will not be scored based on performance against a benchmark, but will receive an automatic score of three points. We believe this policy will simplify quality scoring in that it ensures that every clinician that submits quality data will receive a quality score. This is particularly important in the transition year because, with a minimum 90-day performance period, we anticipate more MIPS eligible clinicians will submit measures below the case minimum requirements. We selected three points because we did not want to provide more credit for reporting a measure that cannot be reliably scored against a benchmark than for measures for which we can measure performance against a benchmark. In Table 17, we summarize two classes of measures: “class 1” are those measures for which performance can be reliably scored against a benchmark, and “class 2” are measures for which performance cannot be reliably scored against a benchmark. Additionally, we seek comment on whether we should remove non-outcome measures for which performance cannot reliably be scored against a benchmark for 3 years in a row (for example, measures that do not have 20 reporters with 20 cases that meet the data completeness standard). We believe it would be appropriate to remove outcome measures under a separate timeline, as we expect reporting of such measures to increase more slowly; further, we want to encourage the availability of outcome measures.

    Comment: One commenter wanted to know whether a MIPS eligible clinician will receive credit for reporting a measure even if the MIPS eligible clinician's measure data indicates that the measure activity was never performed. Another commenter supported the proposal to allow MIPS eligible clinicians to receive credit for any measures that they report, regardless of whether the MIPS eligible clinician meets the quality performance category submission criteria.

    Response: As summarized in Table 17, for the transition year, measures that are submitted with a 0 percent performance rate (indicating that the measure activity was never performed) will receive 3 points. Measures that are below the case minimum requirement, lack a benchmark (as discussed in section II.E.6.a.(2)(a)), or do not meet the data completeness requirements will also receive 3 points. However, we acknowledge that these policies do not reflect our goals for MIPS eligible clinicians' performance under this program. Rather, we aim for complete and accurate reporting that reflects meaningful efforts to improve the quality of care patients receive; we do not believe that a 0 percent performance rate or reporting of measures that do not meet data completeness requirements achieves that aim. As such, we intend to revisit these policies and apply more rigorous standards in future years.

    Comment: One commenter requested that CMS ensure that all claims measures meet a reliability threshold of 0.80 at the individual physician level.

    Response: We believe that measures with a reliability of 0.4 with a minimum attributed case size of 20 meet the standards for being included as quality measures within the MIPS program. We aim to measure quality performance for as many clinicians as possible, and limiting measures to reliability of 0.7 or 0.8 would result in fewer individual clinicians with quality performance category measures. In addition, a 0.4 reliability threshold ensures moderate reliability for most MIPS eligible clinicians or group practices that are being measured on quality.

    Comment: One commenter also opposed limiting the number of measures that MIPS eligible clinicians can submit that are not able to be scored due to not meeting the required case minimum, since certain specialties may not have sufficient measures to report due to the few that are applicable and available to them.

    Response: We will not be limiting the number of measures that MIPS eligible clinicians can submit that are below the case minimum requirement in the transition year. We may revisit this approach in future years.

    Comment: One commenter recommended that CMS finalize the proposal whereby physicians are not penalized in scoring when they report measures but do not have the required case minimum.

    Response: We are modifying our proposed approach. Under our proposed approach, measures that were below the case minimum requirement would not have been scored. Our revised approach is that, for the transition year, measures that do not meet the case minimum requirement, lack a benchmark, or do not meet the data completeness criteria will not be scored; instead, MIPS eligible clinicians will receive 3 points for submitting the measure.

    After consideration of the comments, we are finalizing case minimum policies for measures at § 414.1380(b)(1)(iv) and (v). For the quality performance category measures, we will use the following case minimum requirements: 20 cases for all quality measures, with the exception of the all-cause hospital readmissions measure, which has a minimum of 200 cases. We reiterate that we will only apply the all-cause readmission measure to groups of 16 or more MIPS eligible clinicians that meet the case minimum requirement.

    Based on public comments, we are revising our proposed policy for all measures, except CMS Web Interface measures and administrative claims-based measures, that are submitted but for which performance cannot be reliably measured because the measures do not meet the required case minimum, do not have a benchmark, or do not meet the data completeness requirement; such measures will receive a floor of 3 points. At § 414.1380(b)(1)(vii), for the transition year, we finalize that if the measure is submitted but is unable to be scored because it does not meet the required case minimum, does not have a benchmark, or does not meet the data completeness requirement, the measure will receive a score of 3 points.

    We are finalizing our proposed policy for CMS Web Interface measures that are submitted but for which performance cannot be reliably measured because the measures do not meet the required case minimum or do not have a benchmark. At § 414.1380(b)(1)(viii), we are finalizing that the MIPS eligible clinician will receive recognition for submitting such measures, but the measure will not be included for MIPS quality performance category scoring. CMS Web Interface measures that do not meet the data completeness requirement will receive a score of 0. We are also finalizing our proposed policy for administrative claims-based measures for which performance cannot be reliably measured because the measures do not meet the required case minimum or do not have a benchmark. For the transition year, this policy would only apply to the readmission measure since the only administrative claims-based quality measure is the readmission measure. However, this policy will apply to additional administrative claims-based measures that are added in future years. At § 414.1380(b)(1)(viii), we are finalizing that such measures will not be included in the MIPS eligible clinician's quality performance category score. We note that the data completeness requirement does not apply to administrative claims-based measures. Overall, at § 414.1380, we will provide points for all submitted measures, but only a subset of measures receive points based on performance against a benchmark. Table 17 summarizes our scoring rules and identifies two classes of measures for scoring purposes.28

    28 We classified the measures for simplicity in discussing results. Name of classification subject to change.

    Table 17—Quality Performance Category: Scoring Measures Based on Performance for Performance Period 2017

    Measure type: Class 1—Measure can be scored based on performance.
    Description: Measures that were submitted or calculated that met all of the following criteria:
  • (1) The measure has a benchmark; 29
  • (2) The measure has at least 20 cases; and
  • (3) The measure meets the data completeness standard (generally 50 percent).
    Scoring rules: Receive 3 to 10 points based on performance compared to the benchmark.

    Measure type: Class 2—Measure cannot be scored based on performance and is instead assigned a 3-point score.
    Description: Measures that were submitted but fail to meet one or more of the class 1 criteria; that is, measures that either:
  • (1) Do not have a benchmark,
  • (2) Do not have at least 20 cases, or
  • (3) Do not meet the data completeness criteria.
    Scoring rules: Receive 3 points.
    Note: This class 2 measure policy does not apply to CMS Web Interface measures and administrative claims-based measures.
    Generally, if we identify issues or circumstances that impact the reliability or validity of a class 1 measure score, we will recognize that the measure was submitted, but exclude that measure from scoring; instead, MIPS eligible clinicians will receive a flat 3 points for submitting the measure. However, if we identify issues or circumstances that impact the reliability or validity of a class 1 measure that is a CMS Web Interface or administrative claims-based measure, we will exclude the measure from scoring. For CMS Web Interface measures, as discussed in section II.E.6.a.(2)(g)(ii) of this final rule with comment period, we will recognize that the measure was submitted but will not score it; administrative claims-based measures likewise will not be scored. For the transition year, we note that the readmission measure is the only administrative claims-based quality measure. However, this policy will apply to additional administrative claims-based measures that are added in future years.

    29 Benchmarks require at least 20 reporters, each with at least 20 cases that meet the data completeness standard and a performance rate greater than 0 percent.
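    As a compact, illustrative restatement of Table 17, the sketch below classifies a submitted measure as class 1 or class 2 using the three criteria above; the data structure is a hypothetical convenience, and the separate rules for CMS Web Interface and administrative claims-based measures are noted but not modeled.

        # Illustrative restatement of Table 17 only. The SubmittedMeasure
        # structure is a hypothetical convenience; CMS Web Interface and
        # administrative claims-based measures follow separate rules that are
        # not modeled here.
        from dataclasses import dataclass

        @dataclass
        class SubmittedMeasure:
            has_benchmark: bool
            case_count: int
            meets_data_completeness: bool  # generally a 50 percent standard

        def classify(measure: SubmittedMeasure) -> str:
            """Class 1 measures are scored 3 to 10 points against the benchmark;
            class 2 measures receive a flat 3 points in the transition year."""
            if (measure.has_benchmark
                    and measure.case_count >= 20
                    and measure.meets_data_completeness):
                return "class 1"
            return "class 2"

        print(classify(SubmittedMeasure(True, 45, True)))    # class 1
        print(classify(SubmittedMeasure(False, 45, True)))   # class 2: no benchmark
        print(classify(SubmittedMeasure(True, 12, True)))    # class 2: below the 20 case minimum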

    We provide below examples of our new scoring approach. For simplicity, the examples explain not only how to calculate the quality performance category score, but also how the quality performance category score contributes to the final score as described in section II.E.6.b. of this final rule with comment period, assuming a quality performance category weight of 60 percent. We use the term weighted score to represent a performance category score that is adjusted for the performance category weight.

    If the MIPS eligible clinician, as a solo practitioner, scored 10 out of 10 on each of five measures submitted, one of which was an outcome measure, and had one measure that was below the required case minimum, the MIPS eligible clinician would receive the following weighted score for the quality performance category: (5 measures × 10 points) + (1 measure × 3 points) or 53 out of 60 possible points × 60 (weight of quality performance category) = 53 points toward the final score. Similarly, if the MIPS eligible clinician, as a solo practitioner, scored 10 out of 10 on each of five measures submitted, one of which was an outcome measure, but failed to submit a sixth measure even though there were applicable measures that could have been submitted, the MIPS eligible clinician would receive the following weighted score in the quality performance category: (5 measures × 10 points) + (1 measure × 0 points) or 50 out of 60 possible points × 60 (weight of quality performance category) = 50 points toward the final score.

    We also provide examples of instances where MIPS eligible clinicians either do not have 6 applicable measures or the applicable specialty set has less than six measures.

    For example, if a specialty set only has 3 measures or if a MIPS eligible clinician only has 3 applicable measures, then, in both instances, the total possible points for the MIPS eligible clinician is 30 points (3 measures × 10 points). If the MIPS eligible clinician scored 8 points on each of the 3 applicable measures submitted, one of which was an outcome measure, then the MIPS eligible clinician would receive the following weighted score in the quality performance category: (3 measures × 8 points) or 24 out of 30 possible points × 60 (weight of quality performance category) = 48 points toward the final score.
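    The worked examples in this subsection all follow the same weighted-score arithmetic; the minimal sketch below reproduces it (points earned, divided by total possible points, multiplied by the assumed 60-point quality weight) and is illustrative only.

        # Illustrative sketch of the weighted-score arithmetic used in the
        # examples above, assuming a 60 percent quality performance category
        # weight.

        def weighted_quality_score(measure_points, total_possible, quality_weight=60):
            """Points earned, scaled to the category's share of the final score."""
            return (sum(measure_points) / total_possible) * quality_weight

        # Five measures at 10 points each plus one measure below the case minimum (3 points).
        print(weighted_quality_score([10, 10, 10, 10, 10, 3], total_possible=60))  # 53.0

        # Five measures at 10 points each; a sixth applicable measure was not submitted (0 points).
        print(weighted_quality_score([10, 10, 10, 10, 10, 0], total_possible=60))  # 50.0

        # Specialty set with only three applicable measures, each scored at 8 points.
        print(weighted_quality_score([8, 8, 8], total_possible=30))                # 48.0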

    (d) Scoring for MIPS Eligible Clinicians That Do Not Meet Quality Performance Category Criteria

    Section II.E.5.b. of the proposed rule outlined our proposed quality performance category criteria for the different reporting mechanisms. The criteria vary by reporting mechanism, but generally we proposed to include a minimum of six measures with at least one cross-cutting measure (for patient facing MIPS eligible clinicians) (Table C of the proposed rule at 81 FR 28447) and an outcome measure if available. If an outcome measure is not available, then we proposed that the eligible clinician would report one other high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures) in lieu of an outcome measure. We proposed that MIPS eligible clinicians and groups would have to select their measures from either the list of all MIPS Measures in Table A of the Appendix in the proposed rule (81 FR 28399) or a set of specialty specific measures in Table E of the Appendix in the proposed rule (81 FR 28460). As discussed in section II.E.5.b.(3) of this final rule with comment period, we are not finalizing the requirement for a cross-cutting measure. As discussed in II.E.5.b.(6) of this final rule with comment period, we are also not including two of the three population measures in the scoring.

    We noted that there are some special scenarios for those MIPS eligible clinicians who select their measures from the Specialty Sets (Table E of the Appendix in the proposed rule at 81 FR 28460) as discussed in section II.E.5.b. of the proposed rule (81 FR 28186).

    For groups using the CMS Web Interface and MIPS APMs, we proposed to have different quality performance category criteria described in sections II.E.5.b. and II.E.5.h. of the proposed rule (81 FR 28187 and 81 FR 28234). Additionally, as described in section II.E.5.b of the proposed rule, we also proposed to score MIPS eligible clinicians on up to three population-based measures.

    Previously in PQRS, EPs had to meet all the criteria or be subject to a negative payment adjustment. However, we proposed that MIPS eligible clinicians receive credit for measures that they report, regardless of whether or not the MIPS eligible clinician meets the quality performance category submission criteria. Section 1848(q)(5)(B)(i) of the Act provides that under the MIPS scoring methodology, MIPS eligible clinicians who fail to report on an applicable measure or activity that is required to be reported shall be treated as receiving the lowest possible score for the measure or activity; therefore, for any MIPS eligible clinician who does not report a measure required to satisfy the quality performance category submission criteria, we proposed that the MIPS eligible clinician would receive zero points for that measure. For example, a MIPS eligible clinician who is able to report on six measures, yet reports on four measures, would receive two “zero” scores for the missing measures. However, we proposed that MIPS eligible clinicians who report a measure that does not meet the required case minimum would not be scored on the measure but would also not receive a “zero” score.

    We also noted that if MIPS eligible clinicians are able to submit measures that can be scored, we want to discourage them from continuing to submit the same measures year after year that cannot be scored due to not meeting the required case minimum. Rather, to the fullest extent possible, MIPS eligible clinicians should select measures that would meet the required case minimum. We sought comment on any safeguards we should implement in future years to minimize any gaming attempts. For example, if the measures that a MIPS eligible clinician submits for a performance period are not able to be scored due to not meeting the required case minimum, we sought comment on whether we should require these MIPS eligible clinicians to submit different measures with sufficient cases for the next performance period (to the extent other measures are applicable and available to them).

    We proposed that MIPS eligible clinicians who report a measure where there is no benchmark due to fewer than 20 MIPS eligible clinicians reporting on the measure would not be scored on the measure but would also not receive a “zero” score. Instead, these MIPS eligible clinicians would be scored according to the following example: A MIPS eligible clinician who submits six measures through a group of 10 or more clinicians, with one measure lacking a benchmark, would be scored on the five remaining measures and the three population-based measures based on administrative claims data.

    We stated our intent to develop a validation process to review and validate a MIPS eligible clinician's inability to report on the quality performance requirements as proposed in section II.E.5.b. of the proposed rule. We anticipate that this process would function similarly to the Measure Applicability Validation (MAV) process that occurred under PQRS, with a few exceptions. First, the MAV process under PQRS was a secondary process applied after an EP was determined to not be a satisfactory reporter. Under MIPS, we intend to build the process into our overall scoring approach to reduce the confusion and burden on MIPS eligible clinicians that a separate process would create. Second, as the requirements under PQRS are different than those proposed under MIPS, the process must be updated to account for different measures and different quality performance requirements. More information on the MAV process under PQRS can be found at http://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/PQRS/Downloads/2016_PQRS_MAV_ProcessforClaimsBasedReporting_030416.pdf. We requested comments on these proposals.

    The following is a summary of the comments we received regarding our proposal to score MIPS eligible clinicians that do not meet quality performance category criteria.

    Comment: Commenters recommended that we clarify the proposed process to identify whether groups have fewer than 6 applicable measures to report and wanted real time notification of whether they passed. One commenter requested clarification on how proposed specialty sets will be scored, given that many have less than the required number of measures and do not include a required outcome or high priority measure. A few commenters recommended reinstating the MAV process. A few commenters recommended that CMS should engage the public in developing the MAV process and provide the public with a formal opportunity to provide input into proposed clusters and the overall MAV algorithm. One commenter recommended that CMS consider both the availability of measures based on subspecialty or patient condition and also submission mechanism. The commenter was concerned that due to the requirement to use only one submission mechanism per performance category, a MIPS eligible clinician or group may be prevented from achieving all measure requirements. The commenter believed CMS should not penalize a clinician for failing to report a measure because it is unavailable via the submission mechanism selected. Another commenter requested that CMS compare the scores of primary care and specialty care clinicians and assess whether the difference is due to a lack of available measures.

    Response: The MIPS validation process will vary by submission mechanism. For claims and registry submissions, we plan to use the cluster algorithms from the current MAV process under PQRS to identify which measures an MIPS eligible clinician is able to report. For QCDRs, we do not intend to establish a validation process. We expect MIPS eligible clinicians that enroll in QCDRs have sufficient meaningful measures that the MIPS eligible clinician is able to report. For the EHR submissions, we know that MIPS eligible clinicians may not have six measures relevant within their EHR. If there are not sufficient EHR measures to meet the full specialty set requirements or meet the requirement to submit 6 measures, the MIPS eligible clinician should select a different submission mechanism in order to meet the quality performance category requirements of submitting measures in a specialty set or six applicable measures. MIPS eligible clinicians should work with their EHR vendors to incorporate applicable measures as feasible. As discussed in section II.E.6.a.(1) of this final rule with comment period, if a MIPS eligible clinician submits via multiple mechanisms we would calculate two quality performance category scores and take the highest score. For the CMS Web Interface, MIPS eligible clinicians are attributed beneficiaries on a defined population that is appropriate for the measures, so there is no need for additional validation. Given the number of choices for submitting quality data, we anticipate MIPS eligible clinicians will be able to find a submission mechanism that meets the MIPS submission requirements. We strongly encourage MIPS eligible clinicians to select the submission mechanism that has 6 measures that are available and appropriate to their specialty and practice type.

    Comment: Several commenters made recommendations on our request for comments on preventing gaming. Some commenters recommended an attestation or statement of explanation when a practice or provider chooses to submit a quality measure that does not meet the required case minimum. One commenter recommended that CMS require attestation from physicians who claim they are unable to report on quality performance requirements and that CMS provide very clear directions about the requirements in order to prevent confusion and inadvertent wrongdoing. Another commenter encouraged CMS to implement a strict validation and review process and to establish safeguards, such as a limit on the amount of measures that can be reported below the case minimum. One commenter requested clarification on whether CMS will allow clinicians to remain within their applicable measure set in such a scenario (that is, not force clinicians to report measures outside of their applicable measure set just to meet case minimum thresholds) and was concerned about the idea of prohibiting subsequent reporting on measures that did not meet case minimums. One commenter objected to our request for comments on how to prevent ‘gaming’ stating that for CMS to give such time and consideration to potential gaming of the system is insulting to America's physicians. The commenter believed that such focus on gaming leads to unnecessarily complicated programs. The commenter recommended that CMS acknowledge in the final rule with comment period that the vast majority of Medicare physicians are not intending to “game” the system or avoid meeting CMS program requirements and are instead attempting to learn about a new payment system that could go into effect in less than 6 months. The commenter also recommended that the resources currently earmarked for the purpose of identifying potential gaming should be directed towards helping MIPS eligible clinicians, from both large and small practices, understand the regulatory requirements, correctly report data, and identify areas and methods in which they can improve their scores.

    Response: For the transition year, we are encouraging participation in MIPS and will not be finalizing any policies to prevent gaming. We agree with the commenter in that we believe the vast majority of MIPS eligible clinicians do not intend to game the system. Rather, we believe that clinicians are interested in working with us to learn the details of the new payment system established under the Quality Payment Program and to provide high quality care to Medicare beneficiaries. We must ensure, however, that payment under this new system is based on valid and accurate measurement and scoring, and identify ways to prevent any potential gaming that could occur in the program. We will continue to monitor MIPS eligible clinician submissions and may propose additional policies through future rulemaking as appropriate.

    Comment: Commenters recommended that we hold EHR vendors accountable for EHR certification and measure availability and take this into account when scoring a MIPS eligible clinician on low case volume.

    Response: We currently require that EHR vendors be certified to a minimum of 9 eCQMs, as is required for reporting under the current PQRS and EHR Incentive Programs. In the 2015 EHR Incentive Programs final rule, CMS required EPs, eligible hospitals, and CAHs to use the most recent version of an eCQM for electronic reporting beginning in 2017 (80 FR 62893). We are maintaining this policy for the electronic reporting bonus under MIPS and encourage MIPS eligible clinicians to work with their EHR vendors to ensure they have the most recent version of the eCQM. CMS will not accept an older version of an eCQM for a submission for the MIPS program for the quality category or the end-to-end electronic reporting bonus within that category. Additionally, measures that are submitted below the required case minimum will receive 3 points but will not be scored on performance for the 2017 performance period.

    After consideration of the comments, we are finalizing at § 414.1380(b)(1)(vi) that MIPS eligible clinicians who fail to report a measure that is required to satisfy the quality performance category submission criteria will receive zero points for that measure. Further, we are finalizing implementation of a validation process for claims and registry submissions to validate whether MIPS eligible clinicians have six applicable and available measures and whether an outcome measure is available or, if an outcome measure is not available, whether another high priority measure is available.

    However, we are not finalizing our proposal that MIPS eligible clinicians who report a measure that does not meet the required case minimum, the data completeness criteria, or for which there is no benchmark due to fewer than 20 MIPS eligible clinicians reporting the measure, would not receive any points for submission and would not be scored on performance against a benchmark. Rather, as discussed in section II.E.6.a.(2)(c) of this final rule with comment period, for “class 2” measures, as defined in Table 17, that are submitted but unable to be scored, we will add a 3-point floor for all submitted measures for the transition year. That is, if a MIPS eligible clinician submits a “class 2” measure, as defined in Table 17, we will assign 3 points to the MIPS eligible clinician for submitting that measure for the transition year, regardless of whether the measure meets the data completeness requirement or required case minimum requirement or whether the measure has a benchmark. For example, a MIPS eligible clinician who is a solo practitioner could submit 6 measures as follows: 2 measures (one of which is an outcome measure) with high performance, scoring 10 out of 10 on each of these measures, 1 measure that lacks the minimum case size, 1 measure that lacks a benchmark, 1 measure that does not meet the data completeness requirement, and 1 measure with low performance. In this case, the MIPS eligible clinician would receive 32 out of 60 possible points in the quality performance category (2 measures × 10 points plus 4 measures × 3 points). We will revisit this policy in future years.

    (e) Incentives To Report High Priority Measures

    Consistent with other CMS value-based payment programs, we proposed that MIPS scoring policies would emphasize and focus on high priority measures that impact beneficiaries. These high priority measures are defined as outcome, appropriate use, patient safety, efficiency, patient experience and care coordination measures; see Tables A through D of the Appendix in the proposed rule (81 FR 28399-28460) for these measures. We proposed these measures as high priority measures given their critical importance to our goals of meaningful measurement and our measure development plan. We note that many of these measures are grounded in NQS domains. For patient safety, efficiency, patient experience and care coordination measures, we refer to the measures within the respective NQS domains and measure types. For outcomes measures, we include both outcomes measures and intermediate outcomes measures. For appropriate use measures, we have noted which measures fall within this category in Tables A through D and provided criteria for how we identified these measures in section II.E.5.b. of the proposed rule. For non-MIPS measures reported through QCDRs, we proposed to classify which measures are high priority during the measure review process.

    We proposed scoring adjustments to create incentives for MIPS eligible clinicians to submit high priority measures and to allow these measures to have more impact on the total quality performance category score.

    We proposed to create an incentive for MIPS eligible clinicians to voluntarily report additional high priority measures. We proposed to provide 2 bonus points for each outcome and patient experience measure and 1 bonus point for other high priority measures reported in addition to the one high priority measure (an outcome measure, but if one is not available, then another high priority measure) that would already be required under the proposed quality performance category criteria. For example, if a MIPS eligible clinician submitted 2 outcome measures, and two patient safety measures, the MIPS eligible clinician would receive 2 bonus points for the second outcome measure reported and 2 bonus points for the two patient safety measures. The MIPS eligible clinician would not receive any bonus points for the first outcome measure submitted since that is a required measure. We selected 2 bonus points for outcome measures given the statutory requirements under section 1848(q)(2)(C)(i) of the Act to emphasize outcome measures. We selected 2 bonus points for patient experience measures given the importance of patient experience measures to our measurement goals. We selected 1 bonus point for all other high priority measures given our measurement goals around each of those areas of measurement. We believe the number of bonus points provides extra credit for submitting the measure, yet would not mask poor performance on the measure. For example, a MIPS eligible clinician with poor performance receives only 3 points for performance for a particular high priority measure. The bonus points would increase the MIPS eligible clinician's points to 4 (or 5 if the measure is an outcome measure or patient experience measure), but that amount is far less than the 10 points a top performer would receive. We noted that population-based measures would not receive bonus points.

    We noted that a MIPS eligible clinician who submits a high priority measure but had a performance rate of 0 percent would not receive any bonus points. MIPS eligible clinicians would only receive bonus points if the performance rate is greater than zero. Bonus points are also available for measures that are not scored (not included in the top 6 measures for the quality performance category score) as long as the measure has the required case minimum and data completeness. We believe these qualities would allow us to include the measure in future benchmark development.

    Groups submitting data through the CMS Web Interface, including MIPS APMs that report through the CMS Web Interface, are required to submit a set of predetermined measures and are unable to submit additional measures (other than the CAHPS for MIPS survey). For that submission mechanism, we proposed to apply bonus points based on the finalized set of measures. We would assign two bonus points for each outcome measure (after the first required outcome measure) and for each patient experience measure. We would also assign one bonus point for each other high priority measure (patient safety, efficiency, appropriate use, care coordination). We believe MIPS eligible clinicians or groups should have the ability to receive bonus points for reporting high priority measures through all submission mechanisms, including the CMS Web Interface. In this final rule with comment period, we will publish how many bonus points the CMS Web Interface measure set would have available based on the final list of measures (see Table 21).

    We proposed to cap the bonus points for the high priority measures (outcome, appropriate use, patient safety, efficiency, patient experience, and care coordination measures) at 5 percent of the denominator of the quality performance category score. Tables 19 and 20 of the proposed rule (81 FR 28257-28258) illustrated examples of how to calculate the bonus cap. We also proposed an alternative approach of capping bonus points for high priority measures at 10 percent of the denominator of the quality performance category score. Our rationale for the 5 percent cap was that we do not want to mask poor performance by allowing a MIPS eligible clinician to perform poorly on a measure but still obtain a high quality performance category score by submitting numerous high priority measures in order to obtain bonus points; however, we were also concerned that 5 percent may not be enough incentive to encourage reporting. We requested comment on the appropriate threshold for this bonus cap.

    The following is a summary of the comments we received regarding our proposal to provide bonus points for high priority quality measures.

    Comment: Several commenters supported our proposal to award two bonus points for reporting additional outcome or patient experience measures and one bonus point for reporting any other high priority measure, indicating that rewarding bonus points would provide an additional incentive to report on measures which were of higher value to patients.

    Response: We appreciate the support of the commenters for our proposals. We are finalizing the proposal to assign two bonus points for reporting additional outcome or patient experience measures and one bonus point for reporting any other high priority measure.

    Comment: Some commenters recommended that outcome, patient experience, and other high priority measures not be required for reporting but should be awarded bonus points if they are reported, including the first high priority measure reported.

    Response: Our long term goal for the Quality Payment Program is to move reporting towards high priority measures. We believe that our proposal to require an outcome measure or another high priority measure if an outcome measure is not available presents a balanced approach that will encourage more reporting of these measures. We are concerned that the use of these measures would be much more limited and selective if reporting of one of these measures were not required.

    Comment: A number of commenters expressed concern with the proposal to award bonus points for the reporting of additional high priority measures because many specialties do not have sufficient outcome, patient experience or other high priority measures to receive bonus points. Some commenters expressed concern about the future development of outcome measures due to lack of available clinical evidence and poor risk adjustment.

    Response: By awarding bonus points for the reporting of additional high priority measures, we are encouraging a movement towards stronger development of measures that are aligned with our measurement goals. We encourage stakeholders who are concerned about a lack of high priority measures to consider development of these measures and submit them for future use within the program. In addition, our strategy for identifying and developing meaningful outcome measures is described in the MACRA quality measure development plan, authorized by section 102 of the MACRA (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). The plan describes how we plan to consider evidence-based research, risk adjustment, and other factors to develop better outcome measures.

    Comment: A commenter recommended that CMS identify a small number of high priority measures including patient-reported outcome measures that would be tested on a regional scale before being implemented nationally. This commenter recommended that these proposed high priority measures should be vetted with other stakeholders.

    Response: We believe that our proposed measure set provides flexibility for clinicians in determining which measures to report. All measures go through a review process that includes public comment as part of the rulemaking process, and most measures are reviewed by the NQF-convened MAP as part of CMS' pre-rulemaking process.

    Comment: A commenter recommended that CMS move toward establishing core sets of high priority measures by specialty or subspecialty. This would enable consumers and purchasers to make direct comparisons of similar clinicians with assurance that they are all being assessed against a consistent and standardized set of important quality indicators.

    Response: As part of this rule, we have finalized specialty measure sets that may simplify the measure selection process. We continue to encourage the development of outcome and other high priority measures that may be reported and relevant to all specialties of medicine.

    Comment: A commenter supported the concept of incentivizing clinicians to submit high priority measures given that they can be more challenging; however, this commenter sought clarification on which measures submitted by QCDRs would be considered high priority. This same commenter indicated that QCDRs should be allowed to determine the most appropriate classification for each of its measures, including which measures should be considered high priority, subject to the QCDR measure approval process.

    Response: We define high priority measures as those in the following categories: outcome, appropriate use, patient safety, efficiency, patient experience and care coordination measures. For non-MIPS measures reported through QCDRs, we proposed to classify which measures are high priority during the measure review process (81 FR 28186). If the measure is endorsed by NQF as an outcome measure, we will take that designation into consideration. If we decide to assign these domains to QCDR measures, we will add the high priority designation to QCDR measures accordingly. Although we may enlist the assistance and consultation of the QCDR in assessing high priority measures, we would still make the final high priority designation.

    Comment: One commenter requested clarity on measures which are identified as a high priority and noted that, based on past reporting statistics, certain high-priority measures may be classified as topped out. The commenter requested clarification on what this means for the MIPS eligible clinician's score.

    Response: Any high priority measure that is topped out will still be eligible for bonus points. We think incentives should remain to report high priority measures, even topped out measures, as additional reporting makes for a more comprehensive benchmark and can help confirm that the measure is truly topped out. Also, as discussed in section II.E.6.a.(2)(c) of this final rule with comment period, we are not implementing any special scoring for topped out measures in year 1 of MIPS. Thus, the score for that measure will not be reduced by our proposed mid-cluster approach for topped out measures in CY 2017. We will not modify the benchmark methodology for any topped out measures for the CY 2017 performance period. We will modify the benchmark methodology for topped out measures beginning with the CY 2018 performance period, provided that it is the second year the measure has been identified as topped out. We will propose options for scoring topped out measures through future rulemaking.

    Comment: One commenter supported our proposal to award 2 bonus points for outcome measures but recommended that only 1 bonus point be awarded for the reporting of patient experience measures.

    Response: We believe that patient experience measures align with our measurement goals and for that reason should be awarded the same number of bonus points as outcome measures.

    Comment: One commenter requested clarification as to whether a MIPS eligible clinician can earn bonus points if the MIPS eligible clinician does not report all 6 measures due to lack of available measures.

    Response: The MIPS eligible clinician can receive bonus points on all high priority measures submitted, after the first required high priority measure submitted, assuming these measures meet the minimum case size and data completeness requirements even if the MIPS eligible clinician did not report all 6 required measures due to lack of available measures.

    Comment: One commenter recommended that CMS pursue additional approaches to the quality performance category to advance health equity and reward MIPS eligible clinicians who promote health equity including: adding measures stratified by race and ethnicity or other disparity variable, and developing and adding a stand-alone health equity measure as a high priority measure for which clinicians can receive a bonus point.

    Response: Eliminating racial and ethnic disparities to achieve an equitable health care system is one of the four foundational principles listed in the CMS Quality Strategy. We refer readers to the MACRA quality measure development plan, authorized by section 102 of the MACRA (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). The plan outlines the many ways we look to identify, measure and reduce disparities. We will consider in future rulemaking the commenter's proposed options to advance health equity and reward MIPS eligible clinicians who promote health equity.

    After consideration of the comments, we are finalizing at § 414.1380(b)(1)(xiii) our proposal to award 2 bonus points for each outcome or patient experience measure and 1 bonus point for each other high priority measure that is reported in addition to the 1 high priority measure that is already required to be reported under the quality performance category submission criteria. We will revisit this policy in future years. High priority measures are defined as outcome, appropriate use, patient safety, efficiency, patient experience and care coordination measures, as identified in Tables A through D in the Appendix of this final rule with comment period. For the CMS Web Interface, we will apply bonus points based on the finalized set of measures reportable through that submission mechanism. MIPS eligible clinicians will only receive bonus points if they submit a high priority measure with a performance rate that is greater than zero, provided that the measure meets the case minimum and data completeness requirements. We believe that this will encourage stronger reporting of those measures that are more closely aligned to our measurement goals.

    The following is a summary of the comments we received regarding our proposal for establishing a cap on bonus points awarded for the reporting of additional high priority measures:

    Comment: Some commenters opposed our proposal to cap bonus points for high priority measures. Others recommended that the cap be increased from 5 percent of the denominator as proposed to 10 percent of the denominator as in our alternative option. Those who opposed the cap on bonus points at 5 percent of the denominator believed that the 5 percent cap was too low to encourage the reporting of high-priority measures. One commenter requested that CMS share a data analysis demonstrating the necessity for a cap. Others cautioned that quality measures and the available bonus points may be selected, not for the benefit of the clinician or patient, but only to obtain the bonus points, and that this defeats the purpose of true quality measurement for quality patient care.

    Response: After consideration of the comments, we believe increasing the cap on bonus points to 10 percent of the quality score denominator for high priority measures provides a strong incentive to report these measures while still providing a necessary safeguard to avoid masking poor performance. While our long term goals for the program are to move towards the use of outcome and other high priority measures as much as possible, we also acknowledge the important role that other measures play at this time. We remain concerned, however, that without a cap in place, or with a cap that is too high, we could incentivize the reporting of additional measures over a focus on performance in relevant clinical areas, and mask poor performance with higher bonus points. We understand commenters' concern that quality measures and the available bonus points may be selected, not for the benefit of the clinician or patient, but only to obtain the bonus points. We have identified high priority measures to encourage meaningful measurement in each of the high priority areas and believe MIPS eligible clinicians who report on these measures will continue to work to improve their performance in these areas accordingly. At the same time, we will continue to monitor reporting trends and revisit our policies on bonus points for high priority measures as the program develops in future years.

    Comment: Some commenters were concerned that at a 5 percent cap, CMS may be incentivizing the reporting of a high priority measure over high performance on another measure. Some commenters recommended that CMS defer awarding bonus points for high priority measures to reduce the complexity of the scoring methodology within the quality performance category.

    Response: We do not believe that raising the bonus cap to 10 percent will mask poor performance. Instead, we believe it will encourage additional reporting of these outcome and high priority measures. We note that we will not assign bonus points if an additional high priority measure is reported with a zero performance rate or if the reported measure does not meet the case minimum or data completeness requirements. We believe that this approach will avoid the issue that the commenters have identified. We will closely monitor reporting trends to ensure that this balance is maintained.

    Comment: One commenter recommended that we cap the bonus points that CMS Web Interface users can earn as the CMS Web Interface includes several high priority measures.

    Response: We believe the bonus points should be applied consistently across all submission mechanisms. Groups who report via the CMS Web Interface submit data on a pre-defined set of measures and do not have the ability to report on additional measures through another submission mechanism (other than the CAHPS for MIPS survey). We note that CMS Web Interface users are subject to the same 10 percent cap that all other MIPS eligible clinicians have, so CMS Web Interface users will not receive any additional credit compared to other MIPS eligible clinicians. We will closely monitor reporting trends to address the commenter's concern and to ensure that CMS Web Interface users do not receive an unfair advantage by having more high priority measures available to them than other MIPS eligible clinicians.

    After consideration of the comments, we are finalizing at § 414.1380(b)(1)(xiii) a modification to the proposed high priority measure cap. Specifically, we are increasing the cap for high priority measures from 5 percent to 10 percent of the denominator of the quality performance category (total possible points the MIPS eligible clinician could receive in the quality performance category) 30 for the first 2 years. We believe that this cap protects against rewarding reporting over performance while still encouraging reporting of the types of measures that will form the foundation of the future of the program. We plan to decrease this cap in future years.

    30 For example, the denominator for a MIPS eligible clinician who is a solo practitioner would be 60 points if the clinician has six applicable measures (6 measures × 10 points). If the MIPS eligible clinician, who is a solo practitioner, only has 5 applicable measures, then the denominator would be 50 points (5 measures × 10 points). A group of 16 or more would have a denominator of 70 points assuming the group had 6 applicable measures and enough cases to be scored on the readmission measure (7 measures × 10 points).
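
    The following illustrative sketch (not part of the regulatory text; the function name is our own) shows how the finalized 10 percent cap interacts with the denominators described in the footnote above.

        # Illustrative sketch of the denominator and the 10 percent bonus cap.
        POINTS_PER_MEASURE = 10
        HIGH_PRIORITY_CAP_RATE = 0.10  # finalized cap for the first 2 years

        def denominator_and_cap(applicable_measures):
            denominator = applicable_measures * POINTS_PER_MEASURE
            return denominator, HIGH_PRIORITY_CAP_RATE * denominator

        print(denominator_and_cap(6))  # solo practitioner, 6 measures -> (60, 6.0)
        print(denominator_and_cap(5))  # solo practitioner, 5 measures -> (50, 5.0)
        print(denominator_and_cap(7))  # group of 16+ scored on readmission -> (70, 7.0)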

    (f) Incentives To Use CEHRT To Support Quality Performance Category Submissions

    Section 1848(q)(5)(B)(ii) of the Act provides that under the methodology for assessing the total performance of each MIPS eligible clinician, the Secretary shall: (1) Encourage MIPS eligible clinicians to report on applicable measures under the quality performance category through the use of CEHRT and QCDRs; and (2) for a performance period for a year, for which a MIPS eligible clinician reports applicable measures under the quality performance category through the use of CEHRT, treat the MIPS eligible clinician as satisfying the CQMs reporting requirement under section 1848(o)(2)(A)(iii) of the Act for such year. To encourage the use of CEHRT for quality improvement and reporting on measures under the quality performance category, we proposed a scoring incentive to MIPS eligible clinicians who use their CEHRT systems to capture and report quality information.

    We proposed to allow one bonus point for each measure under the quality performance category score, up to a maximum of 5 percent of the denominator of the quality performance category score if:

    • The MIPS eligible clinician uses CEHRT to record the measure's demographic and clinical data elements in conformance to the standards relevant for the measure and submission pathway, including but not necessarily limited to the standards included in the CEHRT definition proposed in § 414.1305;

    • The MIPS eligible clinician exports and transmits measure data electronically to a third party using relevant standards or directly to us using a submission method as defined at § 414.1325; and

    • The third party intermediary (for example, a QCDR) uses automated software to aggregate measure data, calculate measures, perform any filtering of measurement data, and submit the data electronically to us using a submission method as defined at § 414.1325.

    These requirements are referred to as “end-to-end electronic reporting.”
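
    To summarize the proposed conditions, the following simplified Python sketch (added for illustration only; the parameter names are hypothetical and the logic is a plain-language restatement, not the regulatory standard) checks whether a submission pathway would have qualified for the proposed CEHRT bonus.

        # Illustrative sketch of the proposed "end-to-end electronic reporting" test.
        def qualifies_for_cehrt_bonus(captured_in_cehrt,
                                      transmitted_electronically,
                                      automated_calculation_and_submission,
                                      submission_mechanism):
            # Claims submissions were excluded because there is no mechanism to
            # verify that the information was pulled from an EHR.
            if submission_mechanism == "claims":
                return False
            return (captured_in_cehrt                          # recorded in CEHRT per relevant standards
                    and transmitted_electronically             # exported/transmitted electronically
                    and automated_calculation_and_submission)  # automated aggregation, calculation, submission

        print(qualifies_for_cehrt_bonus(True, True, True, "qcdr"))    # True
        print(qualifies_for_cehrt_bonus(True, False, True, "qcdr"))   # False (manual re-entry breaks the chain)
        print(qualifies_for_cehrt_bonus(True, True, True, "claims"))  # False (claims excluded)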

    We note that this bonus would be in addition to the bonus points for reporting high priority measures. MIPS eligible clinicians would be eligible for both this bonus option and the high priority bonus option with separate bonus caps for each option. We also proposed an alternative approach of capping bonus points for this option at 10 percent of the denominator of the quality performance category score. Our rationale for the 5 percent cap was that we do not want to mask poor performance by allowing a MIPS eligible clinician to perform poorly on a measure but still obtain a high quality performance category score; however, we were also concerned that 5 percent may not be enough incentive to encourage end-to-end electronic reporting. We sought comment on the appropriate threshold for this bonus cap. We proposed that the CEHRT bonus would be available to all submission mechanisms except claims submissions. This incentive would also be available for MIPS APMs reporting through the CMS Web Interface (except in cases where measures are entered manually into the CMS Web Interface). Specifically, MIPS eligible clinicians who report via qualified registries, QCDRs, EHR submission mechanisms, and CMS Web Interface in a manner that meets the end-to-end reporting requirements may receive one bonus point for each reported measure with a cap as described. We did not propose to allow this option for claims submission, because there is no mechanism for MIPS eligible clinicians to identify that the information was pulled using an EHR. This approach supports and encourages innovative approaches to measurement using the full array of standards ONC adopts, and the data elements MIPS eligible clinicians capture and exchange, to support patient care. Thus, approaches where a qualified registry or QCDR obtains data from a MIPS eligible clinician's CEHRT using any of the wide range of ONC-adopted standards and then uses automated electronic systems to perform aggregation, calculation, filtering, and reporting would qualify each such measure for the CEHRT bonus point. In addition, measures submitted using the EHR submission mechanism or the EHR submission mechanism through a third party would also qualify for the CEHRT bonus.

    We requested comment on this proposed approach.

    The following is a summary of the comments we received regarding our proposal to award CEHRT bonus points for end-to-end electronic submissions.

    Comment: Commenters questioned whether the 5 percent cap would provide a worthwhile incentive. One commenter noted that the potential bonus points are so diluted that physicians will not be motivated to navigate the additional complexity of earning a bonus point. Others supported the higher cap.

    Response: We agree with commenters that capping the available bonus at 5 percent would not provide a sufficient incentive to utilize CEHRT for reporting in the initial years of the program. Accordingly, we are finalizing our alternative option that a provider may receive bonus points up to 10 percent of the denominator of the quality performance category score for the first 2 years of the program. We intend to decrease this cap in future years through future notice and comment rulemaking.

    Comment: One commenter recommended giving 2 points, not 1, for the CEHRT incentive.

    Response: We agree with the commenter that the proposed bonus would not provide a sufficient incentive for MIPS eligible clinicians. Although we are not increasing the points per-measure that a clinician can earn by conducting electronic end-to-end reporting, we are finalizing our alternate option which would cap the bonus a clinician may earn at 10 percent instead of 5 percent of the denominator of the quality performance category score.

    Comment: A few commenters wanted bonus incentives for use of QCDRs. Currently, many QCDRs, including specialty registries, cannot obtain data from CEHRT or support the standards for data submission. The commenters believed that clinicians should still receive bonus points if they transfer data from an EHR into their own registry. One commenter recommended that CMS encourage EHRs to embrace interoperability so that data transfer can occur between EHR and QCDRs. Another commenter stated that CMS should also offer bonus points to clinicians who use a QCDR (regardless of its ties to CEHRT) since QCDRs in and of themselves represent robust electronic data submission for a growing number of clinicians.

    Response: We appreciate commenters' support for the use of QCDRs. Under the policy we are finalizing, MIPS eligible clinicians who capture their data using CEHRT and electronically export and transmit this data to a QCDR which uses automated software to aggregate measure data, calculate measures, perform any filtering of measurement data, and submit the data electronically via a submission method defined at § 414.1325, would be able to earn a bonus point. Any submission pathway that involves manual abstraction and re-entry of data elements that are captured and managed using certified health IT is not end-to-end electronic quality reporting and is not consistent with the goal of the bonus. It is, however, important to note that end-to-end electronic submission is a goal for which bonus points are available, and not a requirement to achieve maximum performance in the quality performance category.

    Comment: Some commenters supported the proposed bonus points for the use of certified EHR technology. One commenter agreed with the inclusion of bonus points to encourage reporting via QCDR and CEHRT, but was concerned that giving bonus points for reporting via the CMS Web Interface and via Qualified Registry would not encourage use of QCDRs and CEHRT, and that giving bonuses for all of these methods would function as a penalty for those who submit via claims. This commenter encouraged either only giving bonus points to CEHRT or QCDR-based submissions or attaching more bonus points to these mechanisms. Another commenter recommended that CMS encourage the continued uptake of CEHRT and QCDRs by awarding bonus points for use of those technologies and not by unfairly penalizing MIPS eligible clinicians that have not yet adopted them. One commenter appreciated the optional bonus points that can be awarded for the use of CEHRT, as this is foundational to the functionality needed for a quality program of this magnitude.

    Response: We appreciate commenters' support for the proposed bonus for use of CEHRT. We want to encourage increased usage of CEHRT and believe this functionality should be available for qualified registries and CMS Web Interface as well as EHR and QCDR submission.

    Comment: Commenters wanted clarification on how to determine which measures qualify for end-to-end electronic reporting, as measures reported through the CMS Web Interface and QCDR may or may not involve “end-to-end” electronic reporting. Commenters requested that CMS consider any measures coming from an electronic source to an electronic source, following relevant standards, as eligible for the electronic reporting bonus points. One commenter proposed clarifying our requirement for “end-to-end reporting” as follows: “in conformance to the standards relevant for the measure and submission pathway allows the manner in which the specific registry requires the data submission, such as data derived from an electronic source, which might not be CEHRT, and the destination is electronic.” One commenter noted that many clinicians will not have end-to-end electronic capability by 2018 for reasons outside of their control.

    Response: The end-to-end electronic reporting bonus point is not specific to certain CQMs, but would apply in any case where the submission pathway maintains fully electronic management and movement of patient demographic and clinical data once it is initially captured in the eligible clinician's certified health IT. Where a registry is calculating and submitting the Quality Payment Program-accepted measures on the MIPS eligible clinician's behalf, this means that: (1) The MIPS eligible clinician uses certified health IT to capture and electronically provide to the registry clinical data for the measures, using appropriate electronic means (for example, through secure access via API or by electronic submission of QRDA documents); and (2) the registry uses verifiable software to process the data, calculate, and report measure results to CMS (in CMS-specified electronic submission format). In order to qualify for a bonus point, submission via a QCDR or the CMS Web Interface would need to adhere to these principles. Any submission pathway that involves manual abstraction and re-entry of data elements that are captured and managed using certified health IT is not end-to-end electronic quality reporting and is not consistent with the goal of the bonus. We understand that not all clinicians may have end-to-end electronic capabilities immediately, and note that end-to-end electronic submission is a goal for which bonus points are available, and not a requirement to successfully participate in MIPS. We are finalizing policies that offer MIPS eligible clinicians substantial flexibility and sustain proven pathways for successful participation across all of the performance categories. As noted by the commenter, we have included some pathways to which the end-to-end electronic reporting bonus points may not apply in 2017. For example, if a MIPS eligible clinician submits electronic data to a registry, but the electronic data is not captured from certified health IT, or if a MIPS eligible clinician uses CEHRT to capture data, but then calculates measures using chart abstraction and submits the resulting measures to CMS, then the MIPS eligible clinician would not be eligible for the end-to-end electronic reporting bonus points. Those MIPS eligible clinicians who are already successfully reporting quality measures meaningful to their practice via one of these pathways may continue to do so, or may of course choose a different pathway, if they believe the different pathway will offer them a better avenue for success in MIPS.

    Comment: Several commenters requested that CMS create incentives to make CEHRT more flexible because many registries rely on both automated and manual data entries. Commenters were concerned that most EHRs do not support all the necessary data elements for advanced quality measures or analytics and require hybrid approaches to data collection, but that other electronic submissions have that data. The commenters believed that CMS should reward eligible clinicians for utilizing registries, leveraging electronic capture, reporting where it is feasible, and using alternative methods including manual data entry. One commenter wanted to incorporate use of an EHR with a registry system to minimize double reporting and documentation.

    Response: We are finalizing policies that offer MIPS eligible clinicians substantial flexibility and sustain proven pathways for successful participation. For purposes of the end-to-end electronic reporting bonus point, the pathway should maintain fully electronic management and movement of data once it is initially captured in the MIPS eligible clinician's health IT. Standards-based, interoperable methods for managing quality measurement data are essential for improving the value of measures to MIPS eligible clinicians while reducing these clinicians' data-handling burdens. We would expect the elements of a hybrid measure that use essential patient demographic and clinical data normally managed in CEHRT or other certified health IT for care delivery and documentation (for example, Common Clinical Data Set elements) could be made available to the registry using electronic means. Electronic means would include transmission in any Clinical Document Architecture format supported by the CEHRT, or an appropriately secure API.

    We recommend and encourage all registries to pursue standards-based, fully electronic methods for accurately extracting and importing data from other electronic sources, in addition to data supported by CEHRT and other ONC-Certified Health IT, as appropriate to their measures. However, we recognize that for some types of measures some supplementation of the data normally recorded in EHRs in the course of care may in the near future still require registries to continue alternate, including manual, means of harvesting the data elements not yet practically available using electronic means. In future years, we anticipate evolving data standards and data aggregation and management services infrastructure, including robust registries capable of seamlessly aggregating and analyzing data across multiple electronic types and sources, will eventually eliminate the burden of manual processes including abstraction.

    Comment: One commenter noted that utilizing the CMS Web Interface would involve abstraction and therefore not truly be completely electronic, and recommended that the bonus point for “end to end” quality measure submission be applied only when data is submitted from the CEHRT to CMS. Another commenter noted the proposed rule does not address whether data scrubbing is allowed when the MIPS eligible clinician is receiving bonus points for using these methods. The commenter believed data scrubbing is necessary to improve the accuracy of quality measures and recommends that CMS clarify that data scrubbing does not nullify bonus points for data submission.

    Response: We are finalizing our proposed policy that the CEHRT bonus would be available for groups using CMS Web Interface for measures submitted in a manner that meets the end-to-end reporting requirements. CMS Web Interface users may receive one bonus point for each reported measure with a cap of 10 percent of the denominator of the quality performance category. For CMS Web Interface users, we define end-to-end electronic reporting as cases where users upload data that has been electronically exported or extracted from EHRs, electronically calculated, and electronically formatted into a CMS-specified file that is then electronically uploaded via the Web Interface as opposed to cases where measures are entered manually into the CMS Web Interface.

    Any submission pathway that involves manual abstraction and re-entry of data elements that are captured and managed using certified health IT is not end-to-end electronic quality reporting and is not consistent with the goal of the bonus. Thus, the bonus points would not apply to measures entered manually into the CMS Web Interface, though those measures would be included in the MIPS eligible clinician's scoring for the performance category.

    We do not believe limiting the bonus points to the relatively small number of measures that we will be able to accept directly from CEHRT for the 2017 performance period would be the best way to recognize and encourage development of other standards-based, interoperable methods for managing quality measurement data. If a MIPS eligible clinician finds the measures most meaningful to their practice in a registry, and makes patient clinical and demographic data captured and managed using certified health IT available to the registry for use in calculating a measure, that is consistent with the goals of end-to-end electronic reporting: stimulating innovation in the use of standards to re-use data captured in the course of care and advancing more timely and affordable availability of meaningful measurement results to help drive continuous improvement.

    Comment: Others were concerned that limiting data sources to CEHRT alone would eliminate the potential for obtaining bonus points for many specialties and practice types. Commenters expressed concern that their electronic data sources cannot be certified or that financial constraints make these resources unavailable.

    Response: Bonus points apply both to measures that can be captured, calculated, and reported only using CEHRT and to measures for which only some of the data elements needed for the measure are currently supported by CEHRT. For purposes of the end-to-end electronic reporting bonus points, the pathways for those patient demographic and clinical data that are initially captured in the eligible clinician's certified health IT (including but not necessarily limited to those modules required to meet the CEHRT definition for MIPS) should maintain fully electronic management and movement from the clinician through measure submission to CMS. For example, where a registry is calculating and submitting MIPS-accepted measures that each use one or more data elements captured and managed for care delivery and documentation using certified health IT (such as, but not limited to, elements included in the Common Clinical Data Set), this means that: (1) The eligible clinician uses certified health IT to capture and electronically provide to the registry those clinical data using appropriate electronic means; and (2) the registry uses verifiable software to process the data, calculate, and report measure results to CMS using appropriate electronic means. Appropriate electronic means for getting data from the certified health IT to the registry would include secure access via API or by electronic submission of QRDA or other Clinical Document Architecture documents, and appropriate electronic means of measure submission from the registry to CMS would be the CMS-specified electronic submission format.

    Comment: One commenter disagreed with the decision to award bonus points to MIPS eligible clinicians who report using their CEHRT because their EHR vendor charges a high fee to compile the data and report the measures on the clinician's behalf rather than reporting directly from the EHR.

    Response: We appreciate the commenter's concerns. We believe the awarding of bonus points for use of CEHRT is important to incentivize solutions that ultimately reduce cost and burden for MIPS eligible clinicians. Our approach also encourages clinicians to consider a range of options to determine which health IT systems and submission mechanisms will provide the best value to their practice. We expect that over time, as the technology to support electronic reporting evolves and more options become available, the cost and administrative burden on participants leveraging these technologies will continue to decrease.

    Comment: One commenter wanted the CEHRT bonus to be available for claims-based reporting.

    Response: The CEHRT bonus is designed for submission of data captured utilizing CEHRT. We did not propose to allow this option for claims submission because there is no mechanism for MIPS eligible clinicians to identify the information included in the claims submission was pulled using CEHRT.

    Comment: One commenter was concerned that there are fewer EHR products available that can provide the reporting functionality necessary to carry out the MIPS requirements. One commenter noted that CEHRT standards fall short of providing QRDA or appropriate functionality without errors.

    Response: ONC's 2014 Edition and 2015 Edition Health IT Certification criteria 31 do align with the Quality Payment Program requirements. Specifically, the 2015 Edition, while not required for 2017, offers rigorous testing for more features and functionality than have prior editions of certification. Each developer will need to decide how best to support the needs of its users, but we expect that between now and 2018, when the MIPS requirements to use technology certified to the 2015 Edition will be in full effect, that more products will be certified to the 2015 Edition in order to support their users' needs for MIPS program participation. As CMS and ONC assess the impact of our policies and learn from the transition year of the Quality Payment Program (along with health IT vendors and MIPS eligible clinicians and groups) we will continue advancing health IT certification infrastructure and support in parallel to the needs of developers, clinicians, and other care providers to encourage the continued development, adoption and use of certified health IT including quality measurement standards to increase the availability of standards-based, interoperable data management and aggregation technology.

    31 45 CFR 170.314(c)(1) through (3) and 170.315(c)(1) through (3) and optionally (c)(4).

    After consideration of the comments, we are finalizing at § 414.1380(b)(1)(xiv) that one bonus point is available for each quality measure submitted with end-to-end electronic reporting under the criteria described in this section. We are modifying the CEHRT bonus cap. Specifically, we are increasing the cap for using CEHRT for end-to-end reporting from 5 percent to 10 percent of the denominator of the quality performance category (total possible points for the quality performance category) for the first 2 years. We intend to decrease this cap in future years through future notice and comment rulemaking. MIPS eligible clinicians will be eligible for both the CEHRT bonus option and the high priority bonus option with separate bonus caps for each option. The CEHRT bonus will be available to all submission mechanisms except claims submissions.

    Specifically, MIPS eligible clinicians who report via qualified registries, QCDRs, EHR submission mechanisms, and the CMS Web Interface in a manner that meets the end-to-end reporting requirements may receive one bonus point for each reported measure, with a cap as described. For CMS Web Interface users, we define end-to-end electronic reporting as cases where users upload data that has been electronically exported or extracted from EHRs, electronically calculated, and electronically formatted into a CMS-specified file that is then electronically uploaded via the Web Interface, as opposed to cases where measures are entered manually into the CMS Web Interface.

    Due to requests from many commenters that we provide more clarity around the various options for a MIPS eligible clinician to satisfy the “end-to-end electronic” requirements and to earn the CEHRT bonus points, we are providing additional explanation regarding the final policy.

    There are several key steps common across all of the submission pathways for end-to-end electronic reporting: (1) The collection of data at the point of care; (2) calculation of CQM performance as a numerator/denominator ratio; and (3) submission of the data to CMS using a standard format. ONC's certification regulations (45 CFR 170.315(c)(1) through (3) in the 2015 edition) have established several independent but complementary quality measurement capability criteria to which health IT modules can be certified because some health IT may not support all of the steps in the measurement process. For example, one application may support capturing the clinical data at the point of care (step 1), but not the calculation of measure results (step 2) or reporting of them to payers like CMS (step 3). Instead, that application may be built to export the measurement data in a standard format to another application that performs the calculation and reporting functions but may not itself provide initial data capture. Some health IT applications are capable of performing each step necessary from data capture to CMS submission.
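
    The following minimal sketch (illustrative only; the module and function names are hypothetical) models the three key steps as separate components, reflecting that capture, calculation, and submission may be performed by different certified health IT modules.

        # Illustrative sketch of the three key steps in end-to-end electronic reporting.
        class CaptureModule:          # for example, a module certified for data capture/export
            def export_data(self, encounters):
                return list(encounters)  # point-of-care data exported in a standard format

        class CalculationModule:      # for example, a module certified for import/calculate
            def calculate(self, data, meets_numerator):
                numerator = sum(1 for record in data if meets_numerator(record))
                return numerator, len(data)  # CQM result as a numerator/denominator ratio

        class SubmissionModule:       # for example, a module certified for report/submit
            def submit(self, numerator, denominator):
                return f"submitted {numerator}/{denominator} in a CMS-specified format"

        data = CaptureModule().export_data([{"met": True}, {"met": False}, {"met": True}])
        num, den = CalculationModule().calculate(data, lambda record: record["met"])
        print(SubmissionModule().submit(num, den))  # "submitted 2/3 in a CMS-specified format"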

    Although certification for each of these steps helps to ensure accurate calculation and reporting measures, our final policy seeks to offer MIPS eligible clinicians the opportunity to earn bonus points for a wider array of measurement pathways rather than the EHR submission method currently available only for eCQMs for which a health IT product, service, or registry could be certified under ONC's Health IT Certification Program as being in conformance with CMS-published specifications. At this time, we believe it is important to ensure incentives are tied to a wider array of submission pathways that facilitate automated, electronic reporting.

    However, we continue to believe that standards-based, interoperable methods for managing quality measurement data are essential for both improving the value of measures to eligible clinicians while reducing these clinicians' data-handling burdens.

    In a 2014 concept paper, Connecting Health and Care for the Nation: A 10-Year Vision to Achieve an Interoperable Health IT Infrastructure,32 ONC described how interoperability is necessary for a “learning health system” in which health information flows seamlessly and is available to the right people, at the right place, at the right time to better inform decision making to improve individual health, community health, and population health. The vision that ONC and CMS share for health IT in the learning health system is that it will integrate seamlessly with efficient, clinical care processes, while sustaining strong protections for the security and integrity of the data. Within that infrastructure, quality improvement support functions are increasingly expected to enable and rely upon the seamless aggregation, routine analysis, and automated electronic management of data needed to deliver meaningful, actionable feedback on clinician performance and treatment efficacy while minimizing data-related burdens on clinicians. As we implement, observe, and learn from the transition year of the Quality Payment Program, CMS and ONC will continue working in close partnership to enable ONC to continue advancing health IT certification infrastructure in parallel to the needs of clinicians, other providers, consumers, purchasers, and payers who will increasingly rely on standards-based, interoperable data management and aggregation technology to better measure and continuously improve safety, quality, and value of care.

    32 https://www.healthit.gov/sites/default/files/ONC10yearInteroperabilityConceptPaper.pdf.

    Table 18 summarizes at a high level several pathways we expect to be widely available to MIPS eligible clinicians in 2017 and 2018 for quality performance reporting, and indicates which of these pathways would earn bonus points for use of CEHRT to report quality measures electronically from capture to CMS (“end-to-end”).

    Table 18—Examples Illustrating How End-to-End Electronic Reporting Requirements Work

    (1) MIPS eligible clinician scenario: Uses health IT certified to § 170.314 or § 170.315(c)(1) through (3)—that is, the MIPS eligible clinician's system is certified capable of capturing, calculating, and reporting MIPS eCQMs.
        Actions taken: The MIPS eligible clinician uses their e-measure-certified health IT to submit MIPS eCQMs to CMS via the EHR data submission mechanism (described at 42 CFR 414.1325).
        Meets end-to-end reporting bonus: Yes.

    (2) MIPS eligible clinician scenario: Uses health IT certified to § 170.314 or § 170.315(c)(1) to capture data and export MIPS eCQM data electronically to a third-party intermediary.
        Actions taken: The third-party intermediary is certified to be in conformance with § 170.315(c)(2) through (3) (import data/calculate, report results) for each measure, and calculates and submits MIPS eCQMs.
        Meets end-to-end reporting bonus: Yes.

    (3) MIPS eligible clinician scenario: Uses health IT certified to § 170.314 or § 170.315(c)(1) to capture data and export a MIPS eCQM electronically to a QCDR.
        Actions taken: The QCDR uses automated, verifiable software to process data, calculate, and electronically report a MIPS eCQM to CMS consistent with CMS-vetted protocols.
        Meets end-to-end reporting bonus: Yes.

    (4) MIPS eligible clinician scenario: Uses certified health IT, including but not necessarily limited to that needed to satisfy the definition of CEHRT at § 414.1305, to capture demographic and clinical data and transmit it to a QCDR using an appropriate Clinical Document Architecture standard (such as QRDA or C-CDA).
        Actions taken: The QCDR uses automated, verifiable software to process data, calculate, and electronically report the MIPS-approved non-MIPS measures consistent with CMS-vetted protocols.
        Meets end-to-end reporting bonus: Yes.

    (5) MIPS eligible clinician scenario: Uses certified health IT, including but not necessarily limited to that needed to satisfy the definition of CEHRT at § 414.1305, to capture demographic and clinical data, and makes the data available to a third-party intermediary via a secure application programming interface (API).
        Actions taken: The third-party intermediary uses automated, verifiable software to process data, calculate, and electronically report the MIPS-approved non-MIPS measures consistent with CMS-vetted protocols.
        Meets end-to-end reporting bonus: Yes.

    (6) MIPS eligible clinician scenario: Uses certified health IT, including but not necessarily limited to that needed to satisfy the definition of CEHRT at § 414.1305, to capture demographic and clinical data and transmit it to a third-party intermediary using an appropriate standard or method (QRDA, C-CDA, API).
        Actions taken: The eligible clinician or group, or a third-party intermediary, uses automated, verifiable software to process data and calculate measures, but reports the MIPS-approved measures through manual entry, or manual manipulation of an uploaded file, into a CMS web portal.
        Meets end-to-end reporting bonus: No; manual entry interrupted the data flow and electronic calculation is not verified.

    (7) MIPS eligible clinician scenario: Uses certified health IT to support patient care and capture data but abstracts it manually into a web portal or abstraction-input app.
        Actions taken: The third-party intermediary uses automated, verifiable software to process data, calculate, and report the measure.
        Meets end-to-end reporting bonus: No; manual abstraction interrupted the data flow.

    In the first example in Table 18, for MIPS eCQMs, when a MIPS eligible clinician wishes to use CEHRT for the entire process of data capture to CMS submission, the health IT solution must be certified to § 170.315(c)(1) through (3) in order to achieve the bonus point.

    In the second and third examples, the MIPS eligible clinician has chosen to participate in a registry or QCDR and report eCQMs. This MIPS eligible clinician sends quality data electronically from CEHRT to the registry, and the registry calculates the measure results and eventually submits the eCQMs data to CMS on the eligible clinician's behalf. In the second case, the registry uses health IT that is certified to § 170.315(c)(2) through (3) in order for the MIPS eligible clinician to earn the bonus points for end-to-end electronic reporting. In the third case, the QCDR does not use health IT that is certified to a particular standard, but uses automated, verifiable software to process data, calculate and electronically report a MIPS eCQM to MIPS consistent with CMS-vetted protocols. In both of these cases, a MIPS eligible clinician or group would earn a bonus point for each measure submitted in this manner, up to a 10 percent cap.

    In both the fourth and fifth examples, the MIPS eligible clinician has chosen to participate in a QCDR and report on the MIPS-accepted non-MIPS (registry) measures. The MIPS eligible clinician uses CEHRT, and perhaps some additional certified health IT modules, in the normal course of clinical documentation and this certified health IT captures clinical data needed for the MIPS eligible clinician's selected registry measures. In both the fourth and fifth examples, the QCDR has satisfied the MIPS criteria, including obtaining CMS' approval of the non-MIPS measures this MIPS eligible clinician is using. In these cases, the QCDR processes the clinical data to calculate measure results and reports the MIPS-approved non-MIPS measures consistent with CMS-vetted protocols. The only difference between these two examples is how the data gets from the MIPS eligible clinician's certified health IT to the QCDR. In the fourth example, the MIPS eligible clinician's certified health IT transmits quality data documents to the registry in QRDA or other Clinical Document Architecture standard format. In the fifth example, the MIPS eligible clinician has made appropriate arrangements to grant the registry access to the quality measurement information via a secure application programming interface (API). We have presented both examples to emphasize that the MIPS eligible clinician would receive the bonus point under each scenario. Either the secure transmission of data within CDA documents or a secure API is an electronic method of managing and moving the quality measurement data to where it is needed.

    In the sixth example, the group, or a third party submitting data on their behalf, may use the CMS Web Interface to submit electronic data for quality measure submissions. However, such a submission would only be awarded the bonus for end-to-end reporting if the submission included uploading an electronic file without modification. This is to preserve the electronic flow of data end-to-end and provide a verifiable method to ensure that manual abstraction, manual calculation, or subsequent manual correction or manipulation of the measures using abstraction did not occur.

    Finally, in the last example, the MIPS eligible clinician initially captures data electronically, but manually abstracts the data for analysis and keys it into a web portal used by a registry. The registry then calculates and submits the measure results to CMS electronically. In this case, no bonus point would be given as the manual abstraction process interrupted the complete end-to-end electronic data flow.

    (g) Calculating the Quality Performance Category Score

    The next two subsections provide a detailed description of how the quality performance category score would be calculated under our finalized policies.

    (i) Calculating the Quality Performance Category Score for Non-APM Entity, Non-CMS Web Interface Reporters

    To calculate the quality performance category score, we proposed to sum the weighted points assigned for the measures required by the quality performance category criteria plus the bonus points and divide by the weighted sum of total possible points. (81 FR 28256)

    If a MIPS eligible clinician elects to report more than the minimum number of measures to meet the MIPS quality performance category criteria, then we would only include the scores for the measures with the highest number of assigned points. In the proposed rule (81 FR 28257), we provided an example for how this logic would work. The quality performance category score would be capped at 100 percent.

    We proposed that if a MIPS eligible clinician has met the quality performance category submission criteria for reporting quality information, but does not have any scored measures as discussed in section II.E.6.b.(2) of the proposed rule, then a quality performance category score would not be calculated. Refer to section II.E.6.a.2.d. of the proposed rule (81 FR 28254) for details on how we proposed to address scenarios where a quality performance category score is not calculated for a MIPS eligible clinician. We requested comment on our proposals to calculate the quality performance category score.

    The following is a summary of the comments we received on our proposals to calculate the quality performance category score.

    Comment: Several commenters expressed concern about the complexity of the scoring approach. One commenter recommended taking an average of the performance percentages as an alternative.

    Response: We have simplified our methodology for scoring the quality performance category. For example, during the transition year, we are adding a floor of 3 points for any submitted measure (class 1 or class 2 measures as defined in Table 17, as discussed in section II.E.6.a.(2)(c) of this final rule with comment period). This adjustment will minimize the number of measures that are not scored and stabilize the denominator of the MIPS quality performance category score. This also ensures that all MIPS eligible clinicians will have a quality performance category score. As discussed in the Web Interface scoring section in section II.E.6.a.(2)(g)(ii), we are not scoring measures that lack a benchmark or are below case minimum if the measure meets data completeness criteria.

    Comment: Several commenters supported our proposal to use the top six scored measures.

    Response: We appreciate the support and we are finalizing the proposal to score the top six scored measures for all submission mechanisms except CMS Web Interface. The required number of measures for CMS Web Interface is discussed in section II.E.5.b.(3)(a)(ii) of this final rule with comment period.

    Comment: One commenter disagreed with the ability to report more than 6 measures because not all groups had the same option to report additional measures given the availability of measures.

    Response: With the exception of the CMS Web Interface submission mechanism (other than the CAHPS for MIPS survey), groups are allowed to report additional measures. We note that groups, outside of the MIPS APM scoring standard, have the option to choose whether they will report via the CMS Web Interface or another submission mechanism. With regard to the availability of measures, we will continue to monitor trends to identify areas where further measure development is needed.

    After consideration of the comments, we are finalizing our policy at § 414.1380(b)(1)(xv) to calculate the quality performance category score as proposed. We will sum the points assigned for the measures required by the quality performance category criteria plus the bonus points and divide by the weighted sum of total possible points. The quality performance category score cannot exceed the total possible points for the quality performance category. If a MIPS eligible clinician elects to report more than the minimum number of measures to meet the MIPS quality performance category criteria, then we will only include the scores for the measures with the highest number of assigned points, once the first outcome measure is scored, or if an outcome measure is not available, once another high priority measure is scored.

    We are finalizing our proposal that if a MIPS eligible clinician does not have any scored measures, then a quality performance category score will not be calculated. However, we also note that during the transition year, with the implementation of the 3-point floor for class 2 measures as described in Table 17, all non-CMS Web Interface MIPS eligible clinicians who submit some quality data will have a quality performance category score in year 1 of MIPS. The MIPS eligible clinician will receive the following (an illustrative sketch of the calculation appears after this list):

    • 3 points for submitted measures that do not meet the minimum case size, do not have a benchmark or do not meet data completeness criteria, even if the measure is reported with a 0 percent performance rate.

    • 3 points or more for submitted or calculated measures that meet the minimum case size, have a benchmark and meet data completeness criteria, even if the measure is reported with a 0 percent performance rate.
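
    As a purely illustrative aid, the finalized calculation and the transition-year 3-point floor described above can be sketched as follows. The Python function names (measure_points, quality_category_score) are hypothetical, the sketch assumes the required measures (including the first outcome or high priority measure) have already been selected, and it does not represent an official CMS algorithm.

```python
# Illustrative sketch only; function names are hypothetical and do not
# represent an official CMS implementation.

def measure_points(achievement_points, meets_case_min, has_benchmark, meets_completeness):
    """Transition-year points for one submitted (non-CMS Web Interface) measure.

    Any submitted measure receives at least the 3-point floor; a measure that
    meets the case minimum, has a benchmark, and meets data completeness can
    earn 3 to 10 points based on performance against the benchmark deciles.
    """
    if meets_case_min and has_benchmark and meets_completeness:
        return max(3.0, min(10.0, achievement_points))
    return 3.0  # class 2 measures receive the floor of 3 points


def quality_category_score(measure_scores, bonus_points, total_possible, category_weight=60):
    """Sum the measure points plus bonus points, divide by the total possible
    points, cap at 100 percent, and apply the quality category weight."""
    ratio = min(1.0, (sum(measure_scores) + bonus_points) / total_possible)
    return ratio * category_weight
```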

    However, as we will illustrate below, because we have changed the performance standards, submission criteria, and other scoring elements, we believe the scoring system will be simpler to understand and that it will reduce burden on MIPS eligible clinicians trying to achieve a higher quality performance category score. Thus, based on public comments, we are adjusting multiple parts of our proposed scoring approach to ensure that we do not unfairly penalize MIPS eligible clinicians who have not had time to prepare adequately to succeed under our proposed approach while still rewarding high performers.

    For example, we are no longer requiring a cross-cutting measure for patient-facing MIPS eligible clinicians, as discussed in section II.E.5.(b)(3) of this final rule with comment period. Additionally, we are no longer requiring two of the three population health measures and are only requiring the all-cause hospital readmission measure for groups of 16 or more clinicians, instead of our proposed approach of groups of 10 or more, assuming the case minimum of 200 cases has been met, as discussed in section II.E.5.b.(6) of this final rule with comment period. If the case minimum of 200 cases has not been met, we will not score this measure. Thus, the MIPS eligible clinician will not receive a zero for this measure; rather, this measure will not apply to the MIPS eligible clinician's quality performance category score.

    We also note that if a group of 16 or more does not report any quality performance category data but submits information in other performance categories, the group would be scored on the all-cause readmission measure (assuming the group meets the readmission measure minimum case size requirements) even though it did not submit any other quality performance category measures. If a group of 16 or more did not report any information in any of the performance categories, then the readmission measure would not be scored.

    We are now capping both the high priority bonus and the CEHRT bonus at 10 percent instead of 5 percent of the denominator of the quality performance category score. Further, all measures reported can now be scored with a floor of 3 points even if the measure is below the case minimum, lacks a benchmark or is below the completeness requirement. We believe that all of these modified elements, when combined, will significantly increase participation in the MIPS, will reduce burden and confusion on MIPS eligible clinicians and will allow MIPS eligible clinicians to gain experience under the MIPS while penalties are smaller in nature.

    For example, consider a MIPS eligible clinician in a group of 20 practitioners that reports as a group and reports 4 quality measures instead of the required 6 measures. Of the 4 measures submitted, which include an outcome measure, each measure has a low performance rate. The clinician is also scored on an additional measure, the all-cause hospital readmission measure, but has a poor performance rate on this measure as well. Under this revised scoring approach, we allow all MIPS eligible clinicians who submit quality measures to receive a 3-point floor per measure in the quality performance category. Under this scenario, the MIPS eligible clinician receives the 3-point floor for each of the 4 submitted measures and the all-cause hospital readmission measure. The MIPS eligible clinician's quality performance category weighted score is calculated as follows: (5 measures × 3 points each)/70 total possible points × 60 (quality performance category weight) = 12.9 points toward the final score.

    In another example, a MIPS eligible clinician who is a solo practitioner reports all 6 measures, including an outcome measure, although all are below the required case minimum. The eligible clinician receives a floor of 3 points for all 6 measures in the quality performance category even though the measures are below the 20 case size minimum. Under this scenario, the MIPS eligible clinician's quality performance category weighted score is calculated as follows: (6 measures × 3 points each)/60 total possible points × 60 (quality performance category weight), or 18/60 × 60 = 18 points toward the final score. We note that we did not include the all-cause hospital readmission measure in the above quality performance category calculation because, due to reliability concerns, it is not applicable to solo practitioners, MIPS individual reporters, or groups of 15 or fewer clinicians.
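
    The two transition-year examples above reduce to simple arithmetic; the snippet below is illustrative only and reproduces the quoted 12.9-point and 18-point results.

```python
# Group of 20: 4 submitted measures plus the all-cause readmission measure,
# each at the 3-point floor, against a 70-point denominator (7 measures x 10).
group_example = min(1.0, (5 * 3) / 70) * 60
print(round(group_example, 1))  # 12.9 points toward the final score

# Solo practitioner: 6 measures below the case minimum, 60-point denominator.
solo_example = min(1.0, (6 * 3) / 60) * 60
print(round(solo_example, 1))  # 18.0 points toward the final score
```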

    In another example, a MIPS eligible clinician is in a group of 25 that reports as a group via registry 3 process measures, 1 outcome measure, 1 other high priority measure (for example, patient safety), and 1 process measure that is below the case minimum requirement. Two of the process measures and one outcome measure qualify for the CEHRT bonus. Measures that do not meet the required case minimum, do not have a benchmark, or fall below the data completeness requirement will be given 3 points. We emphasize that these measures are treated differently than a required measure that is not reported. Any required measure that is not reported would receive a score of zero points and be considered a scored measure. Table 19 illustrates the example.

    Table 19—Quality Performance Category Example With High Priority and CEHRT Bonus Points

    Measure | Measure type | Number of cases | Points based on performance | Total possible points | Quality bonus points for high priority | Quality bonus points for CEHRT
    Measure 1 | Outcome Measure using CEHRT | 20 | 4.1 | 10 | 0 (required) | 1
    Measure 2 | Process using CEHRT | 21 | 9.3 | 10 | N/A | 1
    Measure 3 | Process using CEHRT | 22 | 10 | 10 | N/A | 1
    Measure 4 | Process | 50 | 10 | 10 | N/A | N/A
    Measure 5 | High Priority (Patient Safety) | 43 | 8.5 | 10 | 1 | N/A
    Measure 6 | Process below case minimum | 10 | 3 | 10 | N/A | N/A
    All-Cause Hospital Readmission | Claims | 205 | 5 | 10 | N/A | N/A
    Total Points All Measures | | N/A | 49.9 | 70 | 1 | 3

    The total possible points for the MIPS eligible clinician is 70 points. The eligible clinician has 49.9 points based on performance. The MIPS eligible clinician also qualifies for 1 bonus point for reporting an additional high priority patient safety measure and 3 bonus points for end-to-end electronic reporting of quality measures. The bonus points for high priority measures and CEHRT reporting are subject to two separate caps, which are each 10 percent of 70 possible points or 7 points. The quality performance category score for this MIPS eligible clinician is (49.9 points + 4 bonus points = 53.9)/70 total possible points × 60 (quality performance category weight) = 46.2 points towards the final score. The quality performance category score would be capped at 100 percent.
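
    As an illustrative check of the Table 19 example, the two separate bonus caps and the 46.2-point result can be reproduced as follows; the variable names are hypothetical and this is not an official CMS calculation.

```python
# Table 19 example: 49.9 achievement points, 1 high priority bonus point,
# 3 CEHRT bonus points, 70 total possible points, bonus caps of 10 percent each.
total_possible = 70
bonus_cap = 0.10 * total_possible            # 7 points per bonus category

high_priority_bonus = min(1, bonus_cap)      # 1 point, under the cap
cehrt_bonus = min(3, bonus_cap)              # 3 points, under the cap

ratio = min(1.0, (49.9 + high_priority_bonus + cehrt_bonus) / total_possible)
print(round(ratio * 60, 1))                  # 46.2 points toward the final score
```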

    The example in Table 20 illustrates how to calculate the bonus cap for the high priority measure bonus and the CEHRT bonus. In this scenario, the MIPS eligible clinician is a solo practitioner who has submitted 6 measures, as an individual, all above the case minimum requirement. Since the MIPS eligible clinician is a solo practitioner, the all-cause hospital readmission measure does not apply. The MIPS eligible clinician successfully submitted six quality measures using end-to-end electronic reporting and, therefore, qualifies for the CEHRT bonus of one point for each of those measures. In addition to CEHRT bonus points, the MIPS eligible clinician reported 4 outcome measures (6 bonus points), a patient experience measure (2 bonus points) and a care coordination measure (1 bonus point) for 9 total high priority bonus points. The MIPS eligible clinician receives 2 bonus points for each of the second, third and fourth outcome measures, given that no bonus points are given for the first required measure. However, the number of high priority measure bonus points (9 points) is over the cap (which is 10 percent of 60 possible points, or 6 points), and the number of CEHRT bonus points (6 points) is at the cap (which is also 10 percent of 60 possible points, or 6 points). The quality performance category score for this MIPS eligible clinician is (50.8 + 6 CEHRT bonus points + 6 high priority bonus points)/60 points = 62.8/60, which is capped at 60 points, or a 100 percent score. Note, in section II.E.5.b.(2) of this final rule with comment period, we proposed to weight the quality performance category at 60 percent of the MIPS final score, so a 100 percent quality performance category score would account for 60 percent of the final score.

    Table 20—Quality Performance Category Bonus Cap Example

    Measure | Measure type | Points based on performance | Total possible points | Quality bonus points for high priority | Quality bonus points for CEHRT
    Measure 1 | Outcome Measure using CEHRT | 4.1 | 10 | 0 * | 1
    Measure 2 | Outcome Measure using CEHRT | 9.3 | 10 | 2 | 1
    Measure 3 | Patient Experience using CEHRT | 10 | 10 | 2 | 1
    Measure 4 | High Priority (Care Coordination) measure using CEHRT | 10 | 10 | 1 | 1
    Measure 5 | Outcome measure using CEHRT | 9 | 10 | 2 | 1
    Measure 6 | Outcome measure using CEHRT | 8.4 | 10 | 2 | 1
    Total | | 50.8 | 60 | 9 | 6
    Cap applied to bonus categories (10% × total possible points to calculate the high priority bonus cap and 10% × total possible points to calculate the CEHRT bonus cap) | | | | 6 | 6
    Total with high priority and CEHRT bonus ** | | | 60 | |
    * Required.
    ** Given we cap the quality performance category score at 60.
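
    A similar illustrative check for the Table 20 bonus cap example, showing the 9 high priority bonus points reduced to the 6-point cap and the overall score capped at 100 percent of the 60 possible points; the names are hypothetical.

```python
# Table 20 example: 50.8 achievement points, 9 high priority bonus points,
# 6 CEHRT bonus points, 60 total possible points.
total_possible = 60
bonus_cap = 0.10 * total_possible            # 6 points per bonus category

high_priority_bonus = min(9, bonus_cap)      # reduced from 9 to the 6-point cap
cehrt_bonus = min(6, bonus_cap)              # already at the cap

ratio = min(1.0, (50.8 + high_priority_bonus + cehrt_bonus) / total_possible)
print(round(ratio * 60, 1))                  # 60.0 -- capped at 100 percent
```
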
    (ii) Calculating the Quality Performance Category for CMS Web Interface Reporters

    CMS Web Interface reporters have different quality performance category submission criteria; therefore, we proposed to modify our scoring logic slightly to accommodate this submission mechanism. CMS Web Interface users report on the entire set of measures specified for that mechanism. Therefore, rather than scoring the top six reported measures, we proposed to score all measures. If a group does not meet the reporting requirements for one of the measures, then the group would receive 0 points for that measure. We note that since groups reporting through the CMS Web Interface are required to report on all measures, and since some of those measures are “high priority,” these groups would always have some bonus points for the quality performance category score if all the measures are reported. That is, the group would either report on less than all CMS Web Interface measures, in which case the group would receive zeroes for unreported measures, or the group would report on all measures, in which case the group would automatically be eligible for bonus points. The other proposals for scoring discussed in section II.E.6.a.2.(g)(i) of the proposed rule, including bonus points, would still apply for CMS Web Interface. We requested comment on this proposal.

    The following is a summary of the comments we received regarding our proposal to score CMS Web Interface.

    Comment: Some commenters requested that we apply the policy of scoring only the six highest scoring measures to the CMS Web Interface.

    Response: For other submission mechanisms, MIPS eligible clinicians are required to report 6 measures; therefore, we score 6 required measures. In contrast, in the transition year, CMS Web Interface reporters are required to report 13 individual measures and a 2-component diabetes composite measure, and we believe it is appropriate to score all the required measures. However, we note that 3 of those measures do not have a benchmark in the Shared Savings Program; therefore, we will only score the measures with a benchmark. For the transition year, the measures with a benchmark include 10 individual measures and the 2-component diabetes composite measure, for a total of 11 measures with benchmarks. In other words, CMS Web Interface reporters are required to report on 14 measures but are scored in the transition year on only the 11 that have a benchmark. Therefore, we believe the number of measures scored for CMS Web Interface reporters (11 measures with benchmarks) is comparable to other reporting mechanisms (6 measures). In addition, we think this policy of not scoring measures without a benchmark is consistent with the Shared Savings Program and NextGen ACO programs, which do not measure performance on selected measures. Table 21 shows the CMS Web Interface measures and indicates which have benchmarks and which are high priority measures that would be eligible for bonus points. The first required outcome measure would not receive bonus points. For the two-component diabetes composite measure, both components of the measure would need to be submitted to qualify as a high priority measure.

    Table 21—Finalized Quality Measures Available for MIPS Web Interface Reporting in 2017
    (For each measure, the entry shows the count, NQF/Q #, ACO #, measure title and description, and whether a 2017 Shared Savings Program benchmark exists. High priority measures are noted with an asterisk (*).)

    1. NQF/Q # 0059/001; ACO-27. 2-Component Diabetes Composite Measure: Diabetes: Hemoglobin A1c (HbA1c) Poor Control (>9%): Percentage of patients 18-75 years of age with diabetes who had hemoglobin A1c > 9.0% during the measurement period.* Benchmark: Yes, diabetes composite benchmark only.
       NQF/Q # 0055/117; ACO-41. Diabetes: Eye Exam: Percentage of patients 18-75 years of age with diabetes who had a retinal or dilated eye exam by an eye care professional during the measurement period or a negative retinal or dilated eye exam (no evidence of retinopathy) in the 12 months prior to the measurement period.
    2. NQF/Q # 0097/046; ACO-12. Medication Reconciliation Post-Discharge: The percentage of discharges from any inpatient facility (e.g., hospital, skilled nursing facility, or rehabilitation facility) for patients 18 years of age and older seen within 30 days following discharge in the office by the physician, prescribing practitioner, registered nurse, or clinical pharmacist providing on-going care for whom the discharge medication list was reconciled with the current medication list in the outpatient medical record. This measure is reported as three rates stratified by age group: Reporting Criteria 1: 18-64 years of age; Reporting Criteria 2: 65 years and older; Total Rate: All patients 18 years of age and older.* Benchmark: No.
    3. NQF/Q # 0041/110; ACO-14. Preventive Care and Screening: Influenza Immunization: Percentage of patients aged 6 months and older seen for a visit between October 1 and March 31 who received an influenza immunization OR who reported previous receipt of an influenza immunization. Benchmark: Yes.
    4. NQF/Q # 0043/111; ACO-15. Pneumonia Vaccination Status for Older Adults: Percentage of patients 65 years of age and older who have ever received a pneumococcal vaccine. Benchmark: Yes.
    5. NQF/Q # 2372/112; ACO-20. Breast Cancer Screening: Percentage of women 50 through 74 years of age who had a mammogram to screen for breast cancer. Benchmark: Yes.
    6. NQF/Q # 0034/113; ACO-19. Colorectal Cancer Screening: Percentage of patients 50-75 years of age who had appropriate screening for colorectal cancer. Benchmark: Yes.
    7. NQF/Q # 0421/128; ACO-16. Preventive Care and Screening: Body Mass Index (BMI) Screening and Follow-Up Plan: Percentage of patients aged 18 years and older with a BMI documented during the current encounter or during the previous six months AND with a BMI outside of normal parameters, a follow-up plan is documented during the encounter or during the previous six months of the current encounter. Normal Parameters: Age 18-64 years BMI ≥ 18.5 and < 25 kg/m2. Benchmark: Yes.
    8. NQF/Q # 0418/134; ACO-18. Preventive Care and Screening: Screening for Depression and Follow-Up Plan: Percentage of patients aged 12 years and older screened for depression on the date of the encounter using an age appropriate standardized depression screening tool AND if positive, a follow-up plan is documented on the date of the positive screen. Benchmark: Yes.
    9. NQF/Q # 0068/204; ACO-30. Ischemic Vascular Disease (IVD): Use of Aspirin or Another Antiplatelet: Percentage of patients 18 years of age and older who were diagnosed with acute myocardial infarction (AMI), coronary artery bypass graft (CABG) or percutaneous coronary interventions (PCI) in the 12 months prior to the measurement period, or who had an active diagnosis of ischemic vascular disease (IVD) during the measurement period and who had documentation of use of aspirin or another antiplatelet during the measurement period. Benchmark: Yes.
    10. NQF/Q # 0028/226; ACO-17. Preventive Care and Screening: Tobacco Use: Screening and Cessation Intervention: Percentage of patients aged 18 years and older who were screened for tobacco use one or more times within 24 months AND who received cessation counseling intervention if identified as a tobacco user. Benchmark: Yes.
    11. NQF/Q # 0018/236; ACO-28. Controlling High Blood Pressure: Percentage of patients 18-85 years of age who had a diagnosis of hypertension and whose blood pressure was adequately controlled (<140/90 mmHg) during the measurement period.* Benchmark: Yes.
    12. NQF/Q # 0101/318; ACO-13. Falls: Screening for Fall Risk: Percentage of patients 65 years of age and older who were screened for future fall risk at least once during the measurement period.* Benchmark: Yes.
    13. NQF/Q # 0710/370; ACO-40. Depression Remission at Twelve Months: Patients age 18 and older with major depression or dysthymia and an initial Patient Health Questionnaire (PHQ-9) score greater than nine who demonstrate remission at twelve months (+/- 30 days) after an index visit, defined as a PHQ-9 score less than five. This measure applies to both patients with newly diagnosed and existing depression whose current PHQ-9 score indicates a need for treatment.* Benchmark: No.
    14. NQF/Q # N/A/438; ACO-42. Statin Therapy for the Prevention and Treatment of Cardiovascular Disease: Percentage of the following patients (all considered at high risk of cardiovascular events) who were prescribed or were on statin therapy during the measurement period: adults aged ≥ 21 years who were previously diagnosed with or currently have an active diagnosis of clinical atherosclerotic cardiovascular disease (ASCVD); OR adults aged ≥ 21 years with a fasting or direct low-density lipoprotein cholesterol (LDL-C) level ≥ 190 mg/dL; OR adults aged 40-75 years with a diagnosis of diabetes with a fasting or direct LDL-C level of 70-189 mg/dL. Benchmark: No.
    Note: High priority measures are noted with an asterisk (*).

    Comment: One commenter opposed the approach in which groups not able to report on all measures would receive a score of zero for omitting measures, as it limits the use of this technology.

    Response: Section 1848(q)(5)(B)(i) of the Act requires us to give the lowest possible score to a MIPS eligible clinician that fails to report a required measure or activity. As all measures in the CMS Web Interface are required to be submitted, we have to score zeros for those who do not report.

    Comment: One commenter recommended that CMS give extra points when specialists utilizing the CMS Web Interface participate in specialty registries.

    Response: We offer CMS Web Interface users the ability to receive bonus points for reporting more than one high priority measure and for end-to-end electronic reporting. We did not propose to offer bonuses for participation in specialty registries. We do not think it is appropriate to offer a special bonus for one particular submission mechanism; however, if we revisit the issue of new bonus point categories in the future, we would do so through proposed rulemaking in future years.

    After considering all comments, we are finalizing our policy as proposed with regard to scoring CMS Web Interface measures for all elements except for the following scenarios.

    We also highlight that unless otherwise noted, this section on CMS Web Interface scoring will not apply to clinicians participating in an APM Entity scored through the APM scoring standard. APM Entity group reporting and scoring for MIPS eligible clinicians participating in MIPS APMs are summarized in section II.E.5.h. of this final rule with comment period. All eligible clinicians that participate in APMs are considered MIPS eligible clinicians unless and until they are determined to be either QPs or Partial QPs who elect not to report under MIPS, and are excluded from MIPS, or unless another MIPS exclusion applies.

    We are finalizing the following modifications to our CMS Web Interface scoring policies. First, we will provide a global floor of 3 points for all CMS Web Interface measures submitted in the transition year, even for measures with a 0 percent performance rate, assuming that these measures have met the data completeness criteria, have a benchmark, and meet the case minimum requirements. Measures with performance below the 30th percentile will be assigned a value of 3 points during the transition year to be consistent with the floor established in this rule for other measures and because the Shared Savings Program does not publish benchmarks below the 30th percentile. We will reassess scoring for measures below the 30th percentile in future years. Table 22 illustrates how the decile score works for Shared Savings Program benchmarks. For example, a performance rate of 9.6 percent (below the 30th percentile) would receive 3.0 points. This methodology will not affect the scoring for MIPS eligible clinicians with performance in the third decile or higher. In addition, this methodology will not affect the calculation of future benchmarks.

    Table 22—Example of Using Shared Savings Program Benchmarks * for a Single Measure To Assign Points With a Global Floor of 3 Points

    Benchmark decile | Sample quality measure benchmarks for Web Interface (%) | Possible points with 3-point floor
    Benchmark Deciles 1-3 (starts at 0 and ends before the 30th percentile) | N/A | 3.0
    Benchmark Decile 4 (starts at the 30th percentile) | 23.0-35.9 | 4.0-4.9
    Benchmark Decile 5 | 36.0-40.9 | 5.0-5.9
    Benchmark Decile 6 | 41.0-61.9 | 6.0-6.9
    Benchmark Decile 7 | 62.0-68.9 | 7.0-7.9
    Benchmark Decile 8 | 69.0-78.9 | 8.0-8.9
    Benchmark Decile 9 | 79.0-84.9 | 9.0-9.9
    Benchmark Decile 10 | 85.0-100 | 10
    * Data is illustrative and does not represent an actual Shared Savings Program benchmark.
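
    As an illustration of the decile-to-points mapping in Table 22, the sketch below assigns points for a single CMS Web Interface measure using the illustrative benchmark ranges above. The within-decile interpolation is an assumption for demonstration only, and the function name is hypothetical.

```python
# Illustrative benchmark deciles from Table 22: (lower %, upper %, base points).
ILLUSTRATIVE_DECILES = [
    (23.0, 35.9, 4.0),
    (36.0, 40.9, 5.0),
    (41.0, 61.9, 6.0),
    (62.0, 68.9, 7.0),
    (69.0, 78.9, 8.0),
    (79.0, 84.9, 9.0),
]

def web_interface_points(performance_rate):
    """Map a performance rate (%) to points with the global 3-point floor.

    Rates below the 30th percentile (the start of decile 4) receive 3.0 points
    and the top decile receives 10 points. Partial points within a decile are
    shown as a simple linear interpolation, which is an assumption; gaps
    between the published illustrative ranges are not handled.
    """
    if performance_rate >= 85.0:
        return 10.0
    for lower, upper, base in ILLUSTRATIVE_DECILES:
        if lower <= performance_rate <= upper:
            return base + 0.9 * (performance_rate - lower) / (upper - lower)
    return 3.0  # below the 30th percentile

print(web_interface_points(9.6))   # 3.0, as in the example above
print(web_interface_points(90.0))  # 10.0
```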

    We will not score CMS Web Interface measures that do not meet the case minimum requirement or that lack a benchmark, provided the measure is submitted. We believe that this policy is appropriate because, unlike non-CMS Web Interface users, who can report additional measures beyond the required six to ensure that there are sufficient measures to be scored on performance, CMS Web Interface users are limited to reporting the 14 measures (13 individual measures and the 2-component diabetes composite measure) listed in Table 21. Given that these CMS Web Interface users cannot report additional measures in instances where a measure does not have a benchmark or is below the case minimum, we have decided not to score these measures.

    However, measures that are not reported and measures reported below the data completeness requirements will receive a score of zero. We have decided to give a zero to measures that are below the data completeness requirements for CMS Web Interface users because we believe that these users generally have more experience in reporting measures than non-CMS Web Interface users and therefore should not have any challenges in meeting the data completeness criteria. Table 23 summarizes the scoring approach for Web Interface and non-Web Interface measures.

    Table 23—Comparison of Scoring Approach of Web Interface and Non-Web Interface Measures

    Data completeness, with/without case minimum criteria met/benchmark | Range of possible scores per measure for non-CMS Web Interface users | Range of possible scores per measure for CMS Web Interface users
    No measures reported, regardless of whether case minimum criteria are met | 0 | 0
    No measures reported, regardless of whether there is a benchmark | 0 | 0
    Partial data (below data completeness criteria requirement) without case minimum criteria met, regardless of whether the measure is at 0% performance rate or not | 3 | 0
    Partial data (below data completeness criteria requirement) without a benchmark, regardless of whether the measure is at 0% performance rate or not | 3 | 0
    Complete data (data completeness criteria met) without case minimum criteria met, regardless of whether the measure is at 0% performance rate or not | 3 | Null: The measure will not be scored.
    Complete data (data completeness criteria met) without a benchmark, regardless of whether the measure is at 0% performance rate or not | 3 | Null: The measure will not be scored.
    Complete data (data completeness criteria met) with case minimum criteria met, the measure has a benchmark, and the measure is at 0% performance rate | 3 | 3
    Complete data (data completeness criteria met) with case minimum criteria met, the measure has a benchmark, and the performance rate is greater than 0% ** | 3-10 | 3-10 *
    * SSP benchmarks start at the 30th percentile.
    ** Given the global 3-point floor for low performance, a measure that would have received 1 point or 2 points will now receive a score of 3 points.
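
    The rules summarized in Table 23 can also be read as a small decision function. The sketch below is one possible rendering of the transition-year policy (hypothetical names); a return value of None indicates a CMS Web Interface measure that is simply not scored.

```python
def transition_year_measure_score(reported, complete, meets_case_min, has_benchmark,
                                  web_interface, performance_points=None):
    """Return the score (or the floor) for one measure, following Table 23.

    None means the measure is not scored (a Web Interface measure with complete
    data but no benchmark or insufficient cases); performance_points is the
    3-10 point achievement score when the measure is fully scoreable.
    """
    if not reported:
        return 0  # required measures that are not reported receive zero
    if not complete:
        return 0 if web_interface else 3
    if not (meets_case_min and has_benchmark):
        return None if web_interface else 3
    # Complete data, case minimum met, benchmark exists: the 3-point floor applies.
    return max(3, performance_points if performance_points is not None else 3)
```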

    We provide in Table 24 examples of this scoring approach. For each unreported measure that lacks a benchmark, a zero is added to the numerator and 10 points are added to the denominator. Although these measures are normally not scored, because they were not reported the group is penalized with a lower quality performance category score. For each unreported measure that has a benchmark, a zero is added to the numerator but no points are added to the denominator, since these measures are normally scored and the denominator is therefore static. We are finalizing the policy to score measures with benchmarks because CMS Web Interface reporters have to report on more than 6 measures, so we believe we have a comparable number of measures compared to other reporting mechanisms. In addition, we believe this policy of not scoring measures without a benchmark is consistent with the Shared Savings Program and NextGen ACO programs, which do not measure performance on selected measures.

    Table 24—Scoring Examples: Groups Reporting via Web Interface With the Readmission Measure *

    Example | Reported 14 measures (yes/no) | Number of measures not reported | Number of measures not scored ** | Quality performance category numerator/denominator (assume all measures reported received 10 points and the score for the readmission measure * is 3 points) | Quality performance category score: numerator/denominator × 60 (weight of quality performance category) = points toward the final score
    Reported 14 measures | Yes | N/A | 3 | (11 measures × 10 points + 1 measure × 3 points)/120 | 113/120 × 60 = 56.5
    Reported 11 measures; did not report 3 measures without a benchmark | No | 3 measures lacking a benchmark | 0 | (11 measures × 10 points + 1 measure × 3 points)/150 | 113/150 × 60 = 45.2
    Reported 13 measures; did not report 1 measure with a benchmark | No | 1 measure with a benchmark | 3 | (10 measures × 10 points + 1 measure × 3 points)/120 | 103/120 × 60 = 51.5
    * For CMS Web Interface groups without sufficient volume for the readmission measure (below the 200 case minimum), as well as Shared Savings Program and NextGen ACOs, the readmission measure will not be scored.
    ** Measures are not scored if the measure is reported but the case minimum criteria are not met or if the measure lacks a benchmark.
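
    The three scenarios in Table 24 can be reproduced with the numerator/denominator bookkeeping described above. The sketch below is illustrative only; the per-measure point values follow the table's assumptions and the function name is hypothetical.

```python
def web_interface_category_score(scored_points, unreported_with_benchmark,
                                 unreported_without_benchmark, weight=60):
    """Numerator/denominator bookkeeping for CMS Web Interface groups.

    scored_points lists the points for measures that are actually scored
    (including the readmission measure, where applicable). An unreported
    measure with a benchmark adds zero to the numerator only (it is already
    reflected in the denominator); an unreported measure without a benchmark
    adds zero to the numerator and 10 points to the denominator.
    """
    numerator = sum(scored_points)
    denominator = (len(scored_points) + unreported_with_benchmark
                   + unreported_without_benchmark) * 10
    return numerator / denominator * weight

# Row 1: 11 scored measures at 10 points plus the readmission measure at 3 points.
print(round(web_interface_category_score([10] * 11 + [3], 0, 0), 1))  # 56.5
# Row 2: same scored measures, but the 3 benchmark-less measures were not reported.
print(round(web_interface_category_score([10] * 11 + [3], 0, 3), 1))  # 45.2
# Row 3: 10 scored measures plus readmission; 1 measure with a benchmark not reported.
print(round(web_interface_category_score([10] * 10 + [3], 1, 0), 1))  # 51.5
```
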
    (h) Measuring Improvement

    Section 1848(q)(3)(B) of the Act requires the Secretary, in establishing performance standards for measures and activities for the MIPS performance categories, to consider: historical performance standards; improvement; and the opportunity for continued improvement. In addition, under section 1848(q)(5)(D) of the Act, beginning with the second year of the MIPS, if data sufficient to measure improvement are available, the final score methodology shall take into account improvement of the MIPS eligible clinician in calculating the performance score for the quality and cost performance categories and may take into account improvement for the improvement activities and advancing care information performance categories.

    We solicited public comments on potential ways to incorporate improvement into the scoring methodology moving forward. We were especially interested in feedback on the following three options, with the assumption that eligible clinicians would report the same measures year-to-year (where possible). We were also interested in feedback on how to score improvement given that a MIPS eligible clinician can change measures and submission mechanisms from year-to-year. In addition, a MIPS eligible clinician can elect to report as an individual or a member of a group and that election can vary from year to year. Finally, we sought feedback on whether to score improvement where MIPS eligible clinicians do not have the required case minimum for measures to be scored.

    Option 1: In the proposed rule, we presented an option in which we could adopt the approach for assessing improvement currently used for the HVBP, where we assign from 1-10 points for achievement and from 1-9 points for improvement for each measure. We would compare the achievement and improvement points for each measure in the quality performance category and score whichever is greater. Specifically, we would determine two scores for a MIPS eligible clinician at the measure level for the quality performance category. First, we would assess the MIPS eligible clinician's achievement score, which measures how the MIPS eligible clinician performed compared to benchmark performance scores for each applicable measure in the quality performance category. Second, we would assess the MIPS eligible clinician's improvement score, which measures how much a MIPS eligible clinician has improved compared to the MIPS eligible clinician's own previous performance during a baseline period for each applicable measure in the quality performance category. Under this methodology, we would compare the achievement and improvement scores for each measure and only use whichever is greater, but only those eligible clinicians with the top achievement would be able to receive the maximum number of points. If a MIPS eligible clinician's practice was not open during the baseline period but was open during the performance period, points would be awarded based on achievement only for that performance period. For a more detailed description of the Hospital VBP Program methodology, we refer readers to §§ 412.160 and 412.165.

    Option 2: In the proposed rule, we presented an option where we could adopt the approach for assessing improvement currently used in the Shared Savings Program, where MIPS eligible clinicians or groups would receive a certain number of bonus points for the quality performance category for improvement, although the total points received for the performance may not exceed the maximum total points for the performance category in the absence of the quality improvement points. Under this methodology, we would score individual measures and determine the corresponding number of points that may be earned based on the MIPS eligible clinician's performance. We would add the points earned for the individual measures within the quality performance category and divide by the total points available for the performance category to determine the quality performance category score. MIPS eligible clinicians that demonstrate quality improvement on established quality measures from year-to-year would be eligible for up to 4 bonus points for the quality performance category. Bonus points would be awarded based on a MIPS eligible clinician's net improvement in measures within the quality performance category, which would be calculated by determining the total number of significantly improved measures and subtracting the total number of significantly declined measures. Up to 4 bonus points would be awarded based on a comparison of the MIPS eligible clinician's net improvement in performance on the measures to the total number of individual measures in the quality performance category. When bonus points are added to points earned for the quality measures in the quality performance category, the total points received for the quality performance category may not exceed the maximum total points for the performance category in the absence of the quality improvement points. For a more detailed description of the Shared Savings Program methodology, we refer readers to § 425.502, as well as CY 2015 PFS final rule with comment period (79 FR 67928—67931) for a discussion of how CMS will determine whether the improvement or decline is significant.

    Option 3: In the proposed rule, we presented an option where we could adopt the approach similar to that for assessing improvement for the Medicare Advantage 5-star rating methodology. Under this approach, we would identify an overall “improvement measure score” by comparing the underlying numeric data for measures from the prior year with the data from measures for the performance period. To obtain an “improvement measure score” MIPS eligible clinicians would need to have data for both years in at least half of the required measures for the quality performance category. The numerator for the overall “improvement measure” would be the net improvement, which is a sum of the number of significantly improved measures minus the number of significantly declined measures. The denominator is the number of measures eligible for improvement since to qualify for use in the “improvement measure” calculation, a measure must exist in both years and not have had a significant change in its specification. This “improvement measure” would be included in the quality performance category. We recognize that high performing MIPS eligible clinicians may have less room for improvement and consequently may have lower scores on the overall “improvement measure”. Therefore, under this option we would apply the following rule, which is similar to how the Medicare Advantage 5-star rating methodology treats highly rated plans within the Medicare Star Quality Rating System, in connection with the improvement measure to avoid penalizing consistently high-performing eligible clinicians: We would calculate a MIPS eligible clinician's score with the “improvement measure” and without, and use the MIPS eligible clinician's best score. We requested comments on these proposals.
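
    None of the three improvement options is being finalized in this rule, but as a purely illustrative sketch of the Option 3 arithmetic described above, the net improvement calculation might look like the following; the names are hypothetical and the determination of statistically significant improvement or decline is omitted.

```python
def improvement_measure_score(significantly_improved, significantly_declined,
                              eligible_measures):
    """Option 3 sketch: net improvement over the measures eligible for comparison.

    eligible_measures counts measures that exist in both years without a
    significant change in specification; how 'significant' improvement or
    decline is determined is not shown here.
    """
    if eligible_measures == 0:
        return None  # insufficient data to measure improvement
    return (significantly_improved - significantly_declined) / eligible_measures


def best_of(score_without_improvement, score_with_improvement):
    """High performers would receive whichever score is higher."""
    return max(score_without_improvement, score_with_improvement)
```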

    Comment: Numerous commenters wrote in support of Options 1, 2, and 3, with the majority supporting Option 1. Those who supported Option 1 recognized that this approach presents challenges if the clinician changes measures from year to year or changes between group and individual reporting. One commenter was concerned about improvement points for year 2, where a highly performing clinic would not be able to receive as many points as another, lower performing clinic even though both had improved. One commenter expressed concern with how CMS intends to measure and score quality improvement in the years following the first performance period; in particular, this commenter sought clarity on scoring process measures versus outcome measures and requested that specific examples of how each measure will be scored be included in the final rule with comment period. One commenter requested that CMS release an RFI outlining the three options in detail before finalizing any proposal. Another commenter recommended postponing measuring improvement and instead focusing on a successful MIPS launch. Another commenter cautioned that no methodology should be finalized without testing and significant outreach to, and input from, the medical community to ensure clinicians understand and trust what they are being scored on. One commenter recommended that CMS determine the feasibility of each of the 3 proposed strategies, expressing concern that, due to the fluidity of physician groups, a payment adjustment applied 2 years later may never reach the physicians who earned it because a physician may have left the group, and that a physician who achieves success and then moves to a lower performing group would be penalized. This commenter recommended not committing to a single approach for incorporating improvement into MIPS scores.

    Response: We thank commenters for their feedback. We are not finalizing any policies related to improvement in this rule, but will consider comments for future rulemaking.

    Comment: One commenter recommended measuring improvement in advancing care information and cost. One commenter suggested that all Shared Savings Program participants for which CY 2015 was their first year of ACO participation be able to choose the timeline that becomes the baseline for their performance improvement score, because these providers were only being evaluated on reporting and not performance, and using CY 2015 as the baseline would be misleading. This commenter strongly believed that CMS should work on securing a successful launch of the program and encouraging participation before it begins to evaluate future improvement. One commenter supported CMS' proposals to reward improvement.

    Response: We are open to measuring improvement for all performance categories. We are not finalizing any policies related to improvement in this rule, but will consider comments for future rulemaking.

    Comment: One commenter expressed concern that practices that are high performers may be penalized because they do not have the opportunity for large increases in performance.

    Response: We note that we are required to measure achievement, and in addition to measuring achievement, may measure improvement in Year 2, if data sufficient to measure improvement is available. MIPS eligible clinicians will not be penalized if they are high performers.

    We appreciate the comments regarding the three proposed options to score improvement; however, we are not proposing an approach for scoring improvement at this time. We will consider these comments and outline a proposal in future rulemaking.

    (3) Scoring the Cost Performance Category

    As we described in the proposed rule (81 FR 28259), we proposed to align scoring across the MIPS performance categories. For the cost performance category, we proposed to score the cost measures similarly to the quality performance category. Specifically, we proposed at § 414.1380(b)(2) to assign one to ten points to each cost measure based on a MIPS eligible clinician's performance compared to a benchmark (81 FR 28260). However, we proposed that for the cost performance category (unlike the quality performance category), the benchmark would be based on the performance period, rather than the baseline period. The details of the scoring for cost measures are described below.

    (a) Cost Measure Benchmarks

    For the cost performance category, we proposed at § 414.1380(b)(2) that the performance standard is measure-specific benchmarks (81 FR 28259). We would calculate an array of measure benchmarks based on performance. Then, a MIPS eligible clinician's actual performance on the cost measure during the performance period would be evaluated to determine the number of points that should be assigned based on where the clinician's actual performance falls within these benchmarks.

    We proposed at § 414.1380(b)(2) to create benchmarks for the cost measures based on the performance period (81 FR 28260). Changes in payment policies, including changes in relative value units, and changes that affect how hospitals, clinicians and other health care providers are paid under Medicare Parts A and B, can make it challenging to compare performance on cost measures in a performance period with a historical baseline period. In addition, for the Hospital VBP Program and the VM, we use the performance period to establish the benchmarks for scoring Hospital VBP Program's efficiency measures and the VM's cost measures (80 FR 49562, 80 FR 71280). We proposed that if we use the performance period, we would publish the benchmark methodology in a final rule, but would not be able to publish the actual numerical benchmarks in advance of the performance period. We stated we believe that it is important for MIPS eligible clinicians to know in advance how they might be scored so we would continue to provide performance feedback with information on the MIPS eligible clinician's relative performance.

    We considered an alternative under which we would base the cost performance category measure benchmarks on a baseline period rather than on the proposed performance period (81 FR 28259). This option would further align the cost performance category benchmark methodology with the quality performance category benchmark methodology. This option would also allow us to publish the numerical benchmarks before the performance period ends; however, we believe the benefits of earlier published benchmarks are more limited for cost measures. MIPS eligible clinicians would not be able to track their daily progress because they would not have all the necessary information to determine the attribution, price standardization, and other adjustments to the measures. We believe the relative performance that we provide through performance feedback would provide MIPS eligible clinicians the information they need to track performance and to learn about their resource utilization. In addition, we believe that using benchmarks based in the performance period is a better approach than using benchmarks based in the baseline period because different payment policies could apply during the baseline period than during the performance period, which could affect the cost of care for patients treated by MIPS eligible clinicians. We would also have to identify the baseline benchmark and trend it forward so that the dollars in the baseline period are comparable to the performance period, whereas we would not have to make a trending adjustment for benchmarks based on the performance period. For these reasons, we elected to propose to base the benchmarks on the performance period rather than the baseline period.

    We proposed to create a single set of benchmarks for each measure specified for the cost performance category. We proposed that all MIPS eligible clinicians that are attributed sufficient cases for the measure would be included in the same benchmark. In addition, we proposed that a minimum of 20 MIPS eligible clinicians or groups must be attributed the case minimum in order to develop the benchmark. If a measure does not have enough MIPS eligible clinicians or groups that are attributed enough cases to create a benchmark, then we proposed not including that measure in the scoring for the cost performance category.

    We requested comment on the proposal to establish cost measure benchmarks based on the performance period as well as the alternative proposal.

    The following is a summary of the comments we received regarding our proposals on the benchmarking of cost measures:

    Comment: Several commenters supported our proposal to benchmark cost measures on the performance period, noting that clinicians do not have control of the payment rate for individual services and could be subject to inappropriate adjustments to payments if a previous year was used as a benchmark.

    Response: We agree with commenters and will be finalizing our proposal at § 414.1380(b)(2)(i) to establish cost measure benchmarks based on the performance period. As discussed further below, cost measures must have a benchmark to be scored.

    Comment: A number of commenters opposed our proposal to benchmark cost measures on the basis of the performance period and instead supported our alternative proposal to benchmark cost measures on the basis of a previous year. These commenters supported the alternative benchmarking proposal because they believed it would support alignment with the benchmarking period used for quality scoring, allow clinicians to be aware of cost targets in advance, and be more consistent with the approach used in the Medicare Shared Savings Program. A few commenters recommended using regional trend factors, similar to the Shared Savings Program, to update historical data. Some commenters suggested a benchmark period that was less than a year.

    Response: For quality measurement, we believe that providing a benchmark from previous years provides a helpful target that can support the overall goal of improvement. However, we believe that cost measures have important differences that make using a previous year as a benchmark period problematic, such as changes in Medicare payment policies over time and the development of new therapies and technology. We will continue to provide feedback to clinicians on the cost of care associated with cost measures to which they would have patients attributed and believe that this will be helpful information as they address potential improvements to make in future years. Because we are using performance period data, not historical data, we do not require a trend factor to update the benchmark. We believe that benchmarking to a period of less than 1 year could reduce the reliability of our measures. By benchmarking to the current performance period, we are not making clinicians responsible for differences in costs of care that occur as a result of changes in payment policy over time.

    Comment: Some commenters opposed our proposal to establish a single national benchmark for each cost measure and instead recommended that clinicians only be compared to those that practice in the same specialty, subspecialty, or region of the country, or that have similar practice sizes or mixes of patients.

    Response: The measures used within the cost performance category are constructed to identify the differences in patients as much as possible as opposed to the different specialties of the individual clinicians. We considered the option of peer compatibility grouping during the development of the VM. At that time, we found that there were difficulties in defining which groups were similar enough to be considered peers. We believe that this difficulty is increased by attributing patients to individual clinicians as identified by TIN/NPI rather than TINs as in the VM. We will continue to use a specialty adjustment for the total per capita cost measure to accommodate the different circumstances by which patients are often treated by specialists but will not otherwise adjust or limit comparison based on the specialty of the clinician. In section II.E.5.e.(3) of this final rule with comment period, we provide additional responses on comparing cost measures based on other characteristics based on practice size or the types of patients served.

    We also believe that it is appropriate to have a national versus regional benchmark. The cost measures are price standardized to remove geographic adjustments such as wage indices and cost of living adjustments, so that measures would reflect the same payment rate for a particular service regardless of the region in which it is provided. Other CMS performance programs such as VM and HVBP use national benchmarks and we believe it is appropriate to continue that policy for MIPS. After considering the comments, we are finalizing our proposals at § 414.1380(b)(2) to establish a single benchmark for each cost measure and to base those benchmarks on the performance period. We are finalizing the methodology proposed at § 414.1380(b)(2) to assign one to ten points to each cost measure attributed to the MIPS eligible clinician based on the MIPS eligible clinician's performance compared to the measure benchmark. Because we are basing the benchmarks on the performance period, we will not be able to publish the actual numerical benchmarks in advance of the performance period, as indicated in the proposed rule (81 FR 28259).

    While we understand there are some opportunities associated with benchmarking to a previous year, we believe they are outweighed by the disadvantages. This is particularly true as we continue to develop episode-based measures, for which the development of a new technology or a change in payment policy could result in a significant change in the typical cost of care from year to year. This could potentially result in the majority of clinicians being found to perform well above or well below the benchmark, even if they did not change their practice patterns in relation to their peers. While we did not receive any comments on our proposal to only develop a benchmark for a measure if a minimum of 20 MIPS eligible clinicians or groups are attributed the case minimum, we are finalizing that proposal, incorporating the changes made to the attribution methodology used for cost measures discussed in section II.E.5.e.(3) of this final rule with comment period. We will develop a benchmark for a measure only if at least 20 groups (for those MIPS eligible clinicians participating in MIPS as a group practice) or TIN/NPI combinations (for those MIPS eligible clinicians participating in MIPS as individuals) can be attributed the case minimum for the measure. We are also finalizing our proposal that if a benchmark is not developed, the measure is not scored or included in the performance category.

    (b) Assigning Points Based on Achievement

    For each set of benchmarks, we proposed to calculate the decile breaks based on measure performance during the performance period and assign points for a measure based on the benchmark decile range within which the MIPS eligible clinician's performance on the measure falls. We proposed that for cost measures, lower costs represent better performance. In other words, MIPS eligible clinicians in the top decile would have the lowest cost of care. We proposed to use a methodology generally consistent with the methodology proposed for the quality performance category. We refer readers to Tables 21 and 22 of the proposed rule (81 FR 28260 through 28261) for details on assigning points based on decile distribution. We requested comments on the methodology for assigning points based on performance period deciles for the cost performance category and solicited comments on alternative methodologies for assigning points for performance under this performance category for future rulemaking.

    For clarity, we have reproduced Table 21 from the proposed rule in Table 25. Table 25 illustrates an example of using decile points along with partial points to assign achievement points for a sample cost measure.

    Table 25—Example of Using Benchmarks for One Sample Measure To Assign Points

    Decile | Average cost | Possible points
    Benchmark Decile 1 | $100,000 or more | 1.0-1.9
    Benchmark Decile 2 | $75,893-$99,999 | 2.0-2.9
    Benchmark Decile 3 | $69,003-$75,892 | 3.0-3.9
    Benchmark Decile 4 | $56,009-$69,002 | 4.0-4.9
    Benchmark Decile 5 | $50,300-$56,008 | 5.0-5.9
    Benchmark Decile 6 | $34,544-$50,299 | 6.0-6.9
    Benchmark Decile 7 | $27,900-$34,543 | 7.0-7.9
    Benchmark Decile 8 | $21,656-$27,899 | 8.0-8.9
    Benchmark Decile 9 | $15,001-$21,655 | 9.0-9.9
    Benchmark Decile 10 | $1,000-$15,000 | 10
    Note: The numbers provided in this table are for illustrative purposes only.
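
    Using the illustrative dollar ranges in Table 25 (where a lower average cost means better performance), the point assignment for a single cost measure can be sketched as follows. The within-decile interpolation for partial points is an assumption, and the names are hypothetical.

```python
# Illustrative cost deciles from Table 25 (lower average cost earns more points).
# Each tuple: (low dollar bound, high dollar bound, base points for the decile).
ILLUSTRATIVE_COST_DECILES = [
    (15_001, 21_655, 9.0),
    (21_656, 27_899, 8.0),
    (27_900, 34_543, 7.0),
    (34_544, 50_299, 6.0),
    (50_300, 56_008, 5.0),
    (56_009, 69_002, 4.0),
    (69_003, 75_892, 3.0),
    (75_893, 99_999, 2.0),
]

def cost_measure_points(average_cost):
    """Assign 1-10 achievement points for one cost measure.

    The dollar ranges are the illustrative figures from Table 25; partial
    points within a decile are shown as linear interpolation toward the lower
    (cheaper) end of the range, which is an assumption for illustration only.
    """
    if average_cost <= 15_000:        # benchmark decile 10
        return 10.0
    if average_cost >= 100_000:       # benchmark decile 1 (open-ended range)
        return 1.0
    for low, high, base in ILLUSTRATIVE_COST_DECILES:
        if low <= average_cost <= high:
            return base + 0.9 * (high - average_cost) / (high - low)
    return 1.0  # fall-through for dollar values between the published ranges

print(round(cost_measure_points(60_000), 1))  # roughly 4.6 points, in decile 4
```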

    The following is a summary of the comments we received regarding our proposal to assign points for a measure based on performance period deciles for the cost performance category.

    Comment: A commenter expressed concern with the use of the decile scoring system for the cost performance category, noting that the wide variation in spending demonstrated in Table 21 of the proposed rule indicated that the cost measures are not properly risk adjusted. Another commenter expressed concern that the decile approach was not reliable.

    Response: We noted that Table 21 in the proposed rule was provided for illustrative purposes only and was not created on the basis of any particular data analysis. We believe that the decile approach is appropriate to measure relative performance for the cost performance category and is consistent with the approach taken for the quality performance category of MIPS.

    Comment: Some commenters recommended that the cost performance category be scored on both achievement and improvement. Commenters indicated that MACRA requires improvement to be considered in calculating this performance category.

    Response: Section 1848(q)(5)(D) of the Act requires us to consider both achievement and improvement in assessing the cost performance category beginning with the second year of MIPS if data sufficient to measure improvement is available. We will discuss how to incorporate improvement in future rulemaking.

    After considering the comments, we are finalizing our proposal to assign 1 to 10 achievement points for each measure based on which benchmark decile range the MIPS eligible clinician's performance on the measure is between.

    (c) Case Minimum Requirements

    We seek to ensure that MIPS eligible clinicians are measured reliably; therefore, we proposed in section II.E.5.e.(3) of the proposed rule (81 FR 28198) to establish a 20 case minimum for each cost measure. We noted that this would include the MSPB measure. In the CY 2016 PFS final rule, we finalized a policy that increases the required case minimum for MSPB from 20 to 125 cases (80 FR 71295 through 71296). As discussed further in section II.E.5.e.(3)(a)(ii) of this final rule with comment period, after considering the comments and reviewing additional data sources, we finalized a higher case minimum of 35 for a MIPS eligible clinician or group to be attributed the MSPB cost measure. This newly established case minimum of 35 will ensure that the measure meets our reliability threshold for both groups and individual clinicians. We finalized a case minimum of 20 for all other cost measures and finalized at § 414.1380(b)(2)(ii) that MIPS eligible clinicians and groups must meet the minimum case volume specified by CMS to be scored on a cost measure for the cost performance category.
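
    A minimal rendering of the finalized case minimum policy (35 cases for the MSPB measure, 20 cases for other cost measures) might look like the following; the measure identifier strings are placeholders, not official CMS codes.

```python
# Finalized cost measure case minimums; "MSPB" is a placeholder identifier.
COST_MEASURE_CASE_MINIMUMS = {"MSPB": 35}  # all other cost measures: 20

def is_scored_on_cost_measure(measure_id, attributed_cases):
    """A clinician or group is scored on a cost measure only if the attributed
    case count meets that measure's minimum case volume."""
    return attributed_cases >= COST_MEASURE_CASE_MINIMUMS.get(measure_id, 20)
```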

    (d) Calculating the Cost Performance Category Score

    To calculate the cost performance category score, we proposed at § 414.1380(b)(2)(iii) to average all the scores of all the cost measures attributed to the MIPS eligible clinician. All measures in the cost performance category as described in section II.E.5.e. of the proposed rule would be weighted equally. If a MIPS eligible clinician has only one cost measure with a required case minimum to be scored, we proposed to score that measure accordingly, and the MIPS eligible clinician's cost performance category score would consist of the score for that one measure. We noted that MIPS eligible clinicians cannot receive a zero score for any cost measure for failure to submit the measure since none of the cost performance category measures are submitted by MIPS eligible clinicians. Rather, these measures are attributed to MIPS eligible clinicians through claims data. However, if a MIPS eligible clinician is not attributed any cost measures (for example, because the case minimum requirements have not been met for any measure or there is not a sufficient number of MIPS eligible clinicians to create a benchmark for any measure), then a cost performance category score would not be calculated. Refer to section II.E.6.b.(2) of this final rule with comment period for details on how we address scenarios where a performance category score is not calculated for a MIPS eligible clinician. MIPS eligible clinicians would receive performance feedback as required under section 1848(q)(12) of the Act and discussed in section II.E.8.a. of this final rule with comment period. Over time, performance feedback may include a list of attributed cases for each measure by MIPS eligible clinician. We requested comment on our proposals to calculate the cost performance category score.
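
    As a purely illustrative sketch of the finalized cost performance category calculation, the equally weighted average and the case where no cost measure is attributed can be expressed as follows (hypothetical names).

```python
def cost_category_score(attributed_measure_scores):
    """Average the 1-10 point scores of all attributed cost measures.

    Measures below the case minimum or without a benchmark are not attributed
    or scored; if no measures are attributed, no category score is calculated
    (returned here as None), and that scenario is handled as described in
    section II.E.6.b.(2) of this final rule with comment period.
    """
    if not attributed_measure_scores:
        return None
    return sum(attributed_measure_scores) / len(attributed_measure_scores)

# Example: two attributed cost measures scored at 7.5 and 9.0 points -> 8.25
print(cost_category_score([7.5, 9.0]))
```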

    Table 22 of the proposed rule illustrated a sample scoring methodology for a limited set of measures (81 FR 28261). Measures that do not meet the required case minimum are not used for scoring. Unlike for the quality performance category score, we did not propose bonus points as part of the cost performance category score. The following is a summary of the comments we received regarding our proposed calculation of the cost performance category score:

    Comment: One commenter opposed our proposal to weight all cost measures equally, indicating that the total per capita cost measure should be weighted more heavily due to a lack of experience with other measures. Some commenters suggested that cost measures be weighted on the basis of the volume of attributed patients for each of the individual measures that are scored, rather than weighted equally regardless of patient volume.

    Response: We are making two important changes to the cost performance category that are relevant to these comments. First, we are reducing the number of cost measures from the proposed rule to include only those that have previously been used in the VM or the 2014 sQRUR. Second, we are reducing the weight of the cost performance category to zero in the MIPS final score for the 2019 MIPS payment year to allow clinicians and groups to better understand the different attribution and scoring approach used in this category as compared with the approach to cost measures for the VM. Given that we are reducing the weight of the category to zero, we do not believe it is necessary for the 2019 MIPS payment year to create differential weighting for individual measures, whether by weighting measures based on an individual clinician's or group's patient volume or charges, or by establishing a static weight that always weights a particular measure higher or lower for all clinicians or groups. We encourage clinicians to review performance feedback to become more familiar with the measures and the scoring for this category. We will continue to review the cost performance category and consider changes as we develop and include additional cost measures in the future.

    Comment: Some commenters opposed our proposal to include all measures for which a clinician or group meets the case minimum in calculating a cost performance score and recommended that scoring be limited to a certain number of measures. Some commenters expressed concern that cost for a particular patient could be captured within multiple measures and encouraged CMS to only use the measures with the highest scores.

    Response: Our goal in the cost performance category of MIPS is to include as broad a collection of measures as possible to measure costs for many different patients. Some clinicians or groups may have a larger number of cost measures attributed to them, particularly as we continue to develop new episode-based measures, but we believe that this larger number of attributed measures reflects a breadth of care provided by a clinician or group. Given that there is no additional reporting burden associated with cost measures, we do not believe it is appropriate to limit the number of measures that apply once the case minimums are met.

    We also understand that there are cases in which an individual clinician or group might have the same individual patient costs attributed for multiple cost measures. However, we do not believe that this justifies limiting the number of measures in the cost performance category score for a particular clinician or group. In the quality performance category, if a clinician submits more measures than required, we will only include those with the highest score in the performance category score. We do this in part to encourage quality reporting on new and diverse measures. Because cost measures do not require reporting, we do not believe this rationale applies for the cost performance category. We will use all cost measures that meet the case minimums in calculating the cost performance category score, as long as those measures have also met our standards for the minimum number of attributed clinicians or groups needed to calculate a benchmark.

    After consideration of the comments, we are finalizing our proposal at § 414.1380(b)(2)(iii) that a MIPS eligible clinician's cost performance category score is the equally-weighted average of all scored cost measures. We are also finalizing our proposal to not calculate a cost performance category score if a clinician or group is not attributed any cost measures, because the clinician or group has not met the case minimum requirements for any of the cost measures or a benchmark has not been created for any of the cost measures that would otherwise be attributed to the clinician or group. As described in section II.E.5.e.(2) of this final rule with comment period, we are finalizing a 0 percent weight for the cost performance category for the transition year of MIPS and a 10 percent weight for the 2020 MIPS payment year. For the 2021 MIPS payment year and beyond, the cost performance category will be weighted at 30 percent. This reduced weighting provides an opportunity for MIPS eligible clinicians to become familiar with the scoring in the cost performance category of MIPS.
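    As an illustration of the finalized calculation (a simplified sketch rather than CMS's production methodology), the snippet below takes the 1 to 10 achievement points assigned to each attributed and scored cost measure, averages them with equal weights, and returns no score when no cost measures are attributed; the function and variable names are assumptions for illustration, and the handling of partial points within a decile is omitted.

```python
from typing import Optional

def cost_category_score(achievement_points: list[float]) -> Optional[float]:
    """Equally-weighted average of the 1-10 achievement points assigned to
    each scored cost measure; returns None when no cost measure is attributed,
    in which case no cost performance category score is calculated."""
    if not achievement_points:
        return None
    return sum(achievement_points) / len(achievement_points)

# Example: two attributed measures scored at 7.4 and 5.0 achievement points
# yield a cost performance category score of 6.2 (out of 10 possible points).
print(cost_category_score([7.4, 5.0]))  # 6.2
print(cost_category_score([]))          # None
```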

    (4) Scoring the Improvement Activities Performance Category

    Section 1848(q)(5)(C) of the Act outlines specific scoring rules for the improvement activities performance category. Section 1848(q)(5)(C)(i) of the Act provides that a MIPS eligible clinician who is in a practice that is a certified patient-centered medical home or comparable specialty practice for a performance period shall receive the highest potential score for the improvement activities performance category for such period. Section 1848(q)(5)(C)(ii) of the Act provides that MIPS eligible clinicians participating in an APM for a performance period shall earn a minimum score of one-half of the highest potential score for the improvement activities performance category for such period. We refer readers to section II.E.5.h. of this final rule with comment period for a description of the APM scoring standard for MIPS APMs. Section 1848(q)(5)(C)(iii) of the Act states that MIPS eligible clinicians are not required to perform activities in each subcategory or participate in an APM to receive the highest possible score for the improvement activities performance category. Based on these criteria, we proposed a scoring methodology that assigns points for the improvement activities performance category based on certified patient-centered medical home participation and the improvement activities reported by the MIPS eligible clinician. A MIPS eligible clinician's performance would be evaluated by comparing the reported improvement activities to the highest possible score.

    (a) Assigning Points to Reported Improvement Activities

    Improvement activities is a new performance category that has not been implemented in our previous programs. Therefore, in the transition year, we cannot assess how well the MIPS eligible clinician has performed on an activity against data from a baseline year. We can only assess whether the MIPS eligible clinician has participated sufficiently to receive credit in the improvement activities performance category. Therefore, we proposed at § 414.1380(b)(3) to assign points for each reported activity within two categories: medium-weighted and high-weighted activities (81 FR 28261). Medium-weighted activities are worth 10 points. High-weighted activities are worth 20 points. Table 26 under section II.E.6.a.(4)(a) of this final rule with comment period lists all of the improvement activities that are high-weighted. All other activities not listed as high-weighted activities are considered medium-weighted activities. Table H in the Appendix of this final rule with comment period provides the Improvement Activities Inventory of all activities, both medium-weighted and high-weighted. Consistent with our unified scoring system principles, MIPS eligible clinicians would know in advance how many potential points they could receive for each improvement activity.

    Activities are proposed to be weighted as high based on the extent to which they align with activities that support the certified patient-centered medical home, since that is the standard under section 1848(q)(5)(C)(i) of the Act for achieving the highest potential score for the improvement activities performance category, as well as with our priorities for transforming clinical practice. Additionally, activities that require performance of multiple actions, such as participation in the Transforming Clinical Practice Initiative, participation in a MIPS eligible clinician's state Medicaid program, or an activity identified as a public health priority (such as emphasis on anticoagulation management or utilization of prescription drug monitoring programs) are justifiably weighted as high. We solicited comment on which activities should receive a high weight as opposed to a medium weight.

    We also considered an approach of equal weighting for all improvement activities. We solicited comment on a multi-tier weighting approach such as low, medium and high activity categories for future years of MIPS.

    The following is a summary of the comments we received regarding our proposal on assigning points to reported improvement activities.

    Comment: A number of commenters requested a reduction in the number of activities or a reduction in the reporting threshold from 60 to 30 points to meet 100 percent of scoring for this performance category, citing reporting burden and the limited amount of time that clinicians will have to prepare to begin reporting improvement activities for this new performance category. Some commenters requested a requirement of a maximum of three activities and other commenters suggested four activities.

    Response: After consideration of the comments, we are modifying our proposal to reduce the number of activities so that no more than four medium-weighted activities, no more than two high-weighted activities, or an equivalent combination (that is, one high-weighted and two medium-weighted activities) is required in order to achieve the highest possible improvement activities performance category score. The comments we received support this modification, as commenters expressed concerns about the limited amount of time MIPS eligible clinicians will have to start preparing for these activities and also about the burden associated with reporting additional activities.

    After consideration of the comments, we are finalizing our proposals at § 414.1380(b)(3) to assign points for improvement activities according to two weightings: medium-weighted and high-weighted activities. Each medium-weighted activity is worth 10 points toward the total category score, and each high-weighted activity is worth 20 points toward the total category score of 40 points. These points are doubled for small practices, rural practices, practices located in geographic health professional shortage areas (HPSAs), and non-patient facing MIPS eligible clinicians. We refer readers to section II.E.6.a.(4)(d) of this final rule with comment period for further detail on improvement activities scoring.
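    The finalized point values above can be summarized in a minimal, illustrative sketch (ours, not regulatory text; the function name and flag are assumptions): each reported activity contributes 10 or 20 points depending on its weighting, and that contribution is doubled for small practices, rural practices, practices in geographic HPSAs, and non-patient facing MIPS eligible clinicians.

```python
def activity_points(weighting: str, special_status: bool) -> int:
    """Points contributed by a single improvement activity.

    weighting: "medium" (10 points) or "high" (20 points).
    special_status: True for small practices, practices in rural areas or
    geographic HPSAs, and non-patient facing MIPS eligible clinicians,
    whose per-activity points are doubled.
    """
    base = {"medium": 10, "high": 20}[weighting]
    return base * 2 if special_status else base

# A high-weighted activity is worth 20 points generally and 40 points for a
# small, rural, geographic HPSA, or non-patient facing clinician.
print(activity_points("high", special_status=False))  # 20
print(activity_points("high", special_status=True))   # 40
```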

    We are finalizing Table 23 of the proposed rule (81 FR 28263) with the following modifications. First, we are adding clarifying language for one of the existing PDMP activities that is assigned the highest points for an activity (20 points). Second, we are revising the description of one existing activity under the Emergency Response and Preparedness subcategory that is also assigned the highest points for an activity (20 points) and changing the period for this activity to be performed from a minimum of 6 months to 60 days, which is better aligned with the new overall performance period for the Quality Payment Program of a 90-day reporting period. Third, we are changing one existing activity in the Population Management subcategory from medium-weighted to high-weighted, assigning it the highest points for an activity (20 points), to incentivize caring for these vulnerable populations. These modifications are reflected in Table 26, which lists the improvement activities that are assigned the highest points for an activity (high-weighted activities are double-weighted to 40 points for MIPS eligible clinicians that are small practices, practices located in rural areas or geographic HPSAs, or non-patient facing MIPS eligible clinicians, and are worth 20 points for all other MIPS eligible clinicians). Table H in the Appendix to this final rule with comment period provides the Improvement Activities Inventory of all activities, both medium-weighted and high-weighted.

    Table 26—Finalized Improvement Activities Assigned the Highest Points (each entry lists the activity's subcategory, followed by the activity description)

    Expanded Practice Access: Provide 24/7 access to MIPS eligible clinicians, eligible groups, or care teams for advice about urgent and emergent care (e.g., eligible clinician and care team access to medical record, cross-coverage with access to medical record, or protocol-driven nurse line with access to medical record) that could include one or more of the following:
  • Expanded hours in evenings and weekends with access to the patient medical record (for example, coordinate with small practices to provide alternate hour office visits and urgent care);
  • Use of alternatives to increase access to care team by MIPS eligible clinicians and groups, such as e-visits, phone visits, group visits, home visits and alternate locations (for example, senior centers and assisted living centers); and/or
  • Provision of same-day or next-day access to a consistent MIPS eligible clinician, group or care team when needed for urgent care or transition management.

    Population Management: Participation in a systematic anticoagulation program (coagulation clinic, patient self-reporting program, patient self-management program) for 60 percent of practice patients in the transition year and 75 percent of practice patients in year 2 who receive anti-coagulation medications (warfarin or other coagulation cascade inhibitors).

    Population Management: MIPS eligible clinicians and MIPS eligible clinician groups who prescribe oral Vitamin K antagonist therapy (warfarin) must attest that, in the first performance period, 60 percent or more of their ambulatory care patients receiving warfarin are being managed by one or more of these improvement activities:
  • Patients are being managed by an anticoagulant management service that involves systematic and coordinated care, incorporating comprehensive patient education, systematic INR testing, tracking, follow-up, and patient communication of results and dosing decisions;
  • Patients are being managed according to validated electronic decision support and clinical management tools that involve systematic and coordinated care, incorporating comprehensive patient education, systematic INR testing, tracking, follow-up, and patient communication of results and dosing decisions;
  • For rural or remote patients, patients are managed using remote monitoring or telehealth options that involve systematic and coordinated care, incorporating comprehensive patient education, systematic INR testing, tracking, follow-up, and patient communication of results and dosing decisions; and/or
  • For patients who demonstrate motivation, competency, and adherence, patients are managed using either a patient self-testing (PST) or patient self-management (PSM) program.
    The performance threshold will increase to 75 percent for the second performance period and onward. Clinicians would attest that 60 percent for the transition year, or 75 percent in future years, of their ambulatory care patients receiving warfarin participated in an anticoagulation management program for at least 90 days during the performance period.

    Population Management: For outpatient Medicare beneficiaries with diabetes who are prescribed antidiabetic agents (for example, insulin, sulfonylureas), MIPS eligible clinicians and MIPS eligible clinician groups must attest to having, for the first performance period, at least 60 percent of medical records with documentation of an individualized glycemic treatment goal that:
  • (a) Takes into account patient-specific factors, including, at least, age, comorbidities, and risk for hypoglycemia; and
  • (b) Is reassessed at least annually.
    The performance threshold will increase to 75 percent for the second performance period and onward. Clinicians would attest that 60 percent for the transition year, or 75 percent in future years, of their medical records that document individualized glycemic treatment represent patients who are being treated for at least 90 days during the performance period.

    Population Management: Participating in a Rural Health Clinic (RHC), Indian Health Service (IHS), or Federally Qualified Health Center in ongoing engagement activities that contribute to more formal quality reporting, and that include receiving quality data back for broader quality improvement and benchmarking improvement which will ultimately benefit patients. Participation in Indian Health Service, as an improvement activity, requires MIPS eligible clinicians and groups to deliver care to federally recognized American Indian and Alaska Native populations in the U.S. and, in the course of that care, implement continuous clinical practice improvement, including reporting data on quality of services being provided and receiving feedback to make improvements over time.

    Population Management: Use of a Qualified Clinical Data Registry to generate regular performance feedback that summarizes local practice patterns and treatment outcomes, including for vulnerable populations.

    Care Coordination: Participation in the CMS Transforming Clinical Practice Initiative.

    Beneficiary Engagement: Collection and follow-up on patient experience and satisfaction data on beneficiary engagement, including development of improvement plan.

    Patient Safety and Practice Assessment: Clinicians would attest that, for 60 percent in the transition year, or 75 percent in the second year, of Controlled Substance Schedule II (CSII) opioid prescriptions that last for longer than 3 days, a prescription drug monitoring program was consulted prior to issuance of the prescription.

    Patient Safety and Practice Assessment: Participation in the Consumer Assessment of Healthcare Providers and Systems Survey or other supplemental questionnaire items (e.g., Cultural Competence or Health Information Technology supplemental item sets).

    Achieving Health Equity: Seeing new and follow-up Medicaid patients in a timely manner, including individuals dually eligible for Medicaid and Medicare.

    Emergency Response and Preparedness: Participation in domestic or international humanitarian volunteer work. Activities that simply involve registration are not sufficient. MIPS eligible clinicians and groups attest to domestic or international humanitarian volunteer work for a period of a continuous 60 days or greater.

    Integrated Behavioral and Mental Health: Integration facilitation and promotion of the colocation of mental health and substance use disorder services in primary and/or non-primary clinical care settings.

    Integrated Behavioral and Mental Health: Offer integrated behavioral health services to support patients with behavioral health needs, dementia, and poorly controlled chronic conditions that could include one or more of the following:
  • Use evidence-based treatment protocols and treatment to goal where appropriate;
  • Use evidence-based screening and case finding strategies to identify individuals at risk and in need of services;
  • Ensure regular communication and coordinated workflows between eligible clinicians in primary care and behavioral health;
  • Conduct regular case reviews for at-risk or unstable patients and those who are not responding to treatment;
  • Use of a registry or other certified health information technology functionality to support active care management and outreach to patients in treatment; and/or
  • Integrate behavioral health and medical care plans and facilitate integration through co-location of services when feasible.

    (b) Improvement Activities Performance Category Highest Potential Score

    Although there is likely to be variability in the level at which each MIPS eligible clinician may perform improvement activities, we currently do not have a standard way of measuring that variability. In future years, we plan to capture data to begin to develop a baseline for measuring improvement in performing improvement activities. Because we cannot measure variable performance within an improvement activity at this time, we proposed at § 414.1380(b)(3)(v) to compare the points associated with the reported activities against the highest potential score (81 FR 28265). We proposed the highest potential score to be 60 points for the transition year performance period based on the following rationale.

    Based on discussions with several high performing organizations, we believed that MIPS eligible clinicians would be able to report on as many as six activities of medium weight. Examples of these organizations include one that led a major redesign of patient workflow after Hurricane Katrina, implementing clinical practice improvements to ensure patients receive faster treatment in the event of future disasters, ranked nationally in six adult specialties and high-performing in six adult specialties; 33 a second that was recognized by a leading medical association that achieved: 6.7 percent 30-day all cause readmissions, 42 percent fewer ED visits with implementation of a 60-day intensive home care program, costs of 15-28 percent below regional average and significant improvement in patient surveys from CAHPS; 34 and a third recognized as a leader in rural health with the highest award for excellence from the National Rural Primary Care Association.

    33 U.S. News and World Report 2015-2016 Best Hospitals Ranking. Retrieved from https://www.ochsner.org/patients-visitors/about-us/outcomes-and-honors/us-news-and-world-report.

    34 California Association of Physicians Groups in Medicare Advantage (2014). Retrieved from http://www.ehcca.com/presentations/capgma1/cohen_b2.pdf.

    We also believed that a top performing small practice or practice in a rural area or geographic HPSA, or a non-patient facing MIPS eligible clinician would be able to report on at least two activities. In consideration of special circumstances for these small practices, as well as practices located in rural areas and in HPSAs or non-patient facing MIPS eligible clinicians, we proposed that the weight for any activity selected would be 30 points. For any MIPS eligible clinician, the maximum total points achievable in this performance category is 60 points. Based on the above rationale, we believed it was reasonable to expect all MIPS eligible clinicians to be able to report improvement activities, and as such, a MIPS eligible clinician reporting no improvement activities would receive a zero score for the improvement activities performance category. We believed this proposal would allow us to capture variation in reporting the improvement activities performance category.

    Section 414.1355(a) of the proposed rule presented the CMS Study on Improvement Activities and Measurement (81 FR 28214). Given the burden for participants completing the year-long study and the value of collectively examining innovation and practice activities to improve clinical quality data submissions and further reduce time requirements for eligible clinicians and groups to report, we proposed that MIPS eligible clinicians and groups that successfully participate and submit data to fulfill study requirements would receive the highest potential score of 60 points for the improvement activities performance category.

    The following is a summary of the comments we received regarding our proposal on the methodology for achieving the highest score.

    Comment: Commenters supported considerations for small, rural, HPSA and non-patient facing MIPS eligible clinicians, but recommended that CMS allow these entities to report on two medium-weighted improvement activities or one high-weighted improvement activity in order to achieve 100 percent of the total possible score, and to report on one medium-weighted improvement activity to achieve 50 percent of the total possible score.

    Response: As discussed in section II.E.5.f.(2) of this final rule with comment period, we are reducing the number of activities for these types of clinicians. Rather than selecting any two activities, these practices may select either two medium-weighted activities, or one high-weighted activity, to achieve the highest score.

    Comment: Other commenters recommended that CMS use a uniform weighting for all the activities, and that scoring for this category be aligned with the other performance categories.

    Response: We justify the high weighting of specific activities based on our priorities for specific programs and activities and on alignment with activities that would be performed by a clinician in a certified patient-centered medical home or comparable specialty practice. In assigning a high weighting, we focused on activities that promote CMS public health priorities and support the patient-centered medical home. We are retaining the two weights, medium and high, for activities.

    Comment: Commenters also requested general clarification about how credit for meeting improvement activities participation requirements will be determined, and questioned how groups will be scored.

    Response: Scoring is based on the number of different weighted activities selected from the broad list in Table H in the Appendix to this final rule with comment period. As discussed in section II.E.6.a.(4)(a) of this final rule with comment period, small practices, practices located in rural areas or geographic health professional shortage areas, and non-patient facing MIPS eligible clinicians receive 20 points by selecting one medium-weighted activity and receive 40 points by selecting two medium-weighted activities, or alternatively may select one high-weighted activity to receive 40 points. If a MIPS eligible clinician, other than a MIPS APM or APM participant, does not select any activity, the clinician will receive zero points in the improvement activities performance category.

    All other MIPS eligible clinicians, other than those in a MIPS APM, will receive 10 points by selecting one medium-weighted activity (a medium-weighted activity is double-weighted for small practices, practices located in rural areas and geographic HPSAs, and non-patient facing MIPS eligible clinicians); 20 points by selecting two medium-weighted activities; 30 points by selecting three medium-weighted activities; and 40 points by selecting four medium-weighted activities. An APM participant, other than one in a MIPS APM, only needs to select two medium-weighted activities or one high-weighted activity to add to the automatic score of at least one-half of the highest score. Alternatively, these same MIPS eligible clinicians may receive 20 points by selecting one high-weighted activity (a high-weighted activity is double-weighted for small practices, practices located in rural areas and geographic HPSAs, and non-patient facing MIPS eligible clinicians), or 40 points by selecting two high-weighted activities. With the exception of small practices, practices in rural areas and geographic HPSAs, and non-patient facing MIPS eligible clinicians, a combination of one medium-weighted activity and one high-weighted activity would achieve 30 points, and two medium-weighted and one high-weighted activity would achieve 40 points. MIPS eligible clinicians or groups, other than APM participants, who do not select any activity would receive zero points.

    Comment: Commenters recommended that practices participating in APMs should receive more than 50 percent of the total possible score and recommended that participants receive up to 100 percent of the total possible score. One commenter recommended that alternatively, activity reporting be allowed at the APM entity level to reduce reporting burden.

    Response: We are finalizing our proposal that APM participants will receive at least one-half of the highest possible score. However, we recognize that participating in an APM requires significant effort from practices and eligible clinicians, and with that in mind, we are revising the improvement activities performance category scoring policy for MIPS APMs. To develop the improvement activities score assigned to all MIPS APMs, CMS will compare the requirements of the specific APM with the list of activities in the Improvement Activities Inventory in Table H in the Appendix to this final rule with comment period and score those activities in the same manner that they are otherwise scored for MIPS eligible clinicians according to section II.E.6.a.(4) of this final rule with comment period. For further explanation of how MIPS APMs scores will be calculated, we refer readers to section II.E.5.h of this final rule with comment period.

    After consideration of the comments, we are not finalizing our proposal at § 414.1380(b)(3)(v) to compare the points associated with the reported activities against the highest potential score of 60 points but are using 40 points instead as the total points possible to achieve the highest score for the transition year performance period (81 FR 28265). For small practices, rural and geographic HPSA practices and non-patient facing MIPS eligible clinicians, the weight for any activity selected would be doubled so that these practices only need to select one high- or two medium-weighted activities to achieve the highest score of 40 points. We are finalizing our proposal that MIPS eligible clinicians participating in APMs will automatically receive one-half of the highest score for improvement activities and in addition, MIPS APMs may receive a higher score based on the improvement activities performance category score that CMS assigns for each MIPS APM based on the extent to which the requirements of the specific model meet the list of activities in the Improvement Activities Inventory. We note that one-half of the highest score for improvement activities is the minimum amount that eligible clinicians participating in APMs could achieve, in accordance with the statute. We refer readers to section II.E.5.h of this final rule with comment period for additional information about how a MIPS APM can achieve the highest score.
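    As a minimal sketch of the finalized floor described above (illustrative only, with assumed names; it does not reproduce how CMS compares a model's requirements to the Improvement Activities Inventory), the improvement activities score assigned to a MIPS APM is never allowed to fall below one-half of the 40-point highest potential score.

```python
HIGHEST_SCORE = 40  # finalized highest potential improvement activities score

def mips_apm_improvement_score(assigned_points: int) -> int:
    """Improvement activities score for a MIPS APM: the points assigned by
    comparing the model's requirements to the Improvement Activities
    Inventory, subject to a floor of one-half of the highest potential score
    and a ceiling of the highest potential score."""
    return min(max(assigned_points, HIGHEST_SCORE // 2), HIGHEST_SCORE)

print(mips_apm_improvement_score(10))  # 20 (the statutory floor applies)
print(mips_apm_improvement_score(40))  # 40
```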

    The following is a summary of the comments we received regarding our proposal to conduct the CMS Study on Improvement Activities and Measurement.

    Comment: One commenter agreed that improvement activities performance category study participants should receive full credit for improvement activities performance category and that those participants that do not meet study guidelines should be removed and be subject to typical improvement activities performance category requirements. This commenter recommended that CMS provide a final date by which it plans to make these exclusion determinations and that after this date, CMS can work with the ex-participant to help them complete the year. They also recommended that all participants who get excluded from the study not be allowed to participate in the study the following year.

    Response: We will continue to work with stakeholders to further define future participation requirements as this study evolves.

    After consideration of the comments, we are finalizing our proposal that MIPS eligible clinicians and groups that successfully participate and submit data to fulfill study requirements will receive the highest score for the improvement activities performance category.

    (c) Points for Certified Patient-Centered Medical Home or Comparable Specialty Practice

    Section 1848(q)(5)(C)(i) of the Act specifies that a MIPS eligible clinician who is in a practice that is certified as a patient-centered medical home or comparable specialty practice, as determined by the Secretary, for a performance period must be given the highest potential score for the improvement activities performance category for the performance period. We proposed that certified patient-centered medical home practices are those that have received accreditation from any of the following four nationally recognized accreditation organizations: the Accreditation Association for Ambulatory Health Care, the National Committee for Quality Assurance (NCQA), The Joint Commission, and the Utilization Review Accreditation Commission (URAC); 35 or are a Medicaid Medical Home Model or Medical Home Model. We proposed that comparable specialty practices are those that have received the NCQA Patient-Centered Specialty Recognition. We refer readers to section II.E.5.g.(5) of this final rule with comment period for a description of the Medical Home Model and the Medicaid Medical Home Model. The four accreditation organizations listed above all have evidence of being used by a large number of medical organizations as the model for their patient-centered medical home and are national in scope. No other criteria are required for receiving recognition as a certified patient-centered medical home or comparable specialty practice except for being recognized by one of the above organizations.

    35 The name was officially shortened to URAC in 1996.

    We outlined at § 414.1355(b) of the proposed rule the policy for certified patient-centered medical homes (81 FR 28209). The organizations identified above maintain a list of certified patient-centered medical homes, including the Medical Home Models and the Medicaid Medical Home Models, that would be used to determine whether a MIPS eligible clinician qualifies for the highest potential score for the improvement activities performance category because the MIPS eligible clinician is in a certified patient-centered medical home. The NCQA maintains a list of practices that have received the Patient-Centered Specialty Recognition which would be used to determine whether a MIPS eligible clinician qualifies for the highest potential score for the improvement activities performance category because the MIPS eligible clinician is in a comparable specialty practice.

    We proposed at § 414.1380(b)(3) that a MIPS eligible clinician who is in a practice that is certified as a patient-centered medical home, including a Medical Home Model, Medicaid Medical Home Model or comparable specialty practice in accordance with those proposals would receive the highest potential score (in accordance with section 1848(q)(5)(C)(i) of the Act) of 60 points for the improvement activities performance category (81 FR 28210).

    The following is a summary of the comments we received regarding our proposal to provide practices defined as certified patient-centered medical homes with the highest score for the improvement activities performance category. We address comments regarding the specifics of this definition in section II.E.5.f.(3)(b) of this final rule with comment period.

    Comment: One commenter strongly recommended a flexible approach to quality assessment that emphasizes outcomes of care and that favors continuous quality improvement methodologies rather than rigid, process-oriented patient-centered medical home certification models, believing that relying on patient-centered medical home certification as a means of quality assessment runs the risk of practices not actually realigning efforts to produce higher quality and more cost effective care.

    Response: Our policy on this topic is required by the statute, which specifically identifies MIPS eligible clinicians who practice in a certified patient-centered medical home or comparable specialty practices as receiving the highest score for the improvement activities performance category; this policy does not apply to the quality category.

    Comment: Several commenters supported certified patient-centered medical homes and supported MIPS eligible clinicians who practice in these entities receiving full credit for the improvement activities category. One commenter suggested that patient-centered medical homes stratify data by disparity variables and implement targeted interventions to address health disparities. These commenters believed that the presentation of the information in this way will allow MIPS eligible clinicians to better understand the patient-centered medical home model and decide how to best deliver care under MIPS. Additional commenters suggested including activities under the improvement activities category that are associated with actions conducted by a certified patient-centered medical home. The commenters recommended the following subcategories of activities be associated with elements of a patient-centered medical home: expanded practice access, population management, care coordination, beneficiary engagement, and patient safety and practice assessment.

    Response: We do not believe the commenter is suggesting these elements should be a requirement for being approved to receive full credit as a certified patient-centered medical home. Stratification of data to address health disparities is something we will consider encouraging in the future. Reorganizing and expanding the existing Improvement Activities Inventory is something we look forward to working with stakeholders on in future years.

    After consideration of these comments, we are finalizing our proposal at § 414.1380(b)(3) that a MIPS eligible clinician who is in a practice that is certified as a patient-centered medical home, including a Medicaid Medical Home Model, Medical Home Model, or comparable specialty practice, will receive the highest potential score (in accordance with section 1848(q)(5)(C)(i) of the Act) for the improvement activities performance category (81 FR 28210). However, as noted in section II.E.5.f.(3)(b) of this final rule with comment period, we are not finalizing our proposal at § 414.1380(b)(3)(v) to compare the points associated with the reported activities against the highest potential score of 60 points (81 FR 28210); instead, we are using 40 points as the total points required to achieve the highest score for the transition year performance period. We also are not finalizing our proposal at § 414.1355(b) to define certified patient-centered medical home practices only as those that have received accreditation from one of four nationally recognized accreditation organizations (the Accreditation Association for Ambulatory Health Care, the National Committee for Quality Assurance (NCQA), The Joint Commission, and the Utilization Review Accreditation Commission (URAC)) or that are a Medicaid Medical Home Model or Medical Home Model, or comparable specialty practices only as those that have received the NCQA Patient-Centered Specialty Recognition (81 FR 28210). Rather, we are finalizing an expanded definition of these practices in section II.E.5.f.(3)(b) of this final rule with comment period, and we refer readers to the specifics of this definition in that section.

    (d) Calculating the Improvement Activities Performance Category Score

    To determine the improvement activities performance category score, we proposed to sum the points for all of the MIPS eligible clinician's reported activities and divide by the proposed improvement activities performance category highest potential score of 60. A perfect score would be 60 points divided by 60 possible points, which equals 100 percent. If MIPS eligible clinicians have more than 60 improvement activities points, then we proposed to cap the resulting improvement activities performance category score at 100 percent.

    Table 24 of the proposed rule illustrated a sample scoring methodology for the improvement activities performance category for a MIPS eligible clinician that is not an APM participant (81 FR 28267). In this example, the MIPS eligible clinician was not an APM participant and therefore did not automatically earn the minimum score of one-half of the highest potential score, or 30 points, that is available for APM participation. The MIPS eligible clinician completed two high-weighted activities worth 20 points each and two medium-weighted activities worth 10 points each to receive the maximum 60 points available, resulting in an improvement activities performance category score of 100 percent.

    Alternatively, the MIPS eligible clinician could have selected three high-weighted activities for 20 points each, six medium-weighted activities for ten points each, or some combination to reach 60 points. The score however is capped at 100 percent (60/60). This means that a MIPS eligible clinician who selects four high-weight activities (80 possible points) would still be given a score of 100 percent (60/60). Please refer to Table 24 of the proposed rule for the illustration of the proposed methodology (81 FR 28267).

    Section 1848(q)(2)(B)(iii) of the Act requires the Secretary to give consideration to the circumstances of small practices and practices located in rural areas and in geographic HPSAs (as designated under section 332(a)(1)(A) of the Public Health Service Act) in defining activities. Section 1848(q)(2)(C)(iv) of the Act also requires the Secretary to give consideration to non-patient facing MIPS eligible clinicians. Further, section 1848(q)(5)(F) of the Act allows the Secretary to assign different scoring weights for measures, activities, and performance categories, if there are not sufficient measures and activities applicable and available to each type of eligible clinician.

    For MIPS eligible clinicians and groups that are small practices, practices located in rural areas, practices located in geographic HPSAs, or non-patient facing MIPS eligible clinicians or non-patient facing MIPS eligible clinician groups, we proposed alternative scoring requirements for the improvement activities performance category. The rationale for this alternative scoring is grounded in the resource constraints these MIPS eligible clinicians face, which were further highlighted during listening sessions with small and rural practices, practices in geographic HPSAs, and medical societies representing non-patient facing MIPS eligible clinicians and groups. We believe that while non-patient facing MIPS eligible clinicians and non-patient facing groups could select activities from some subcategories (such as care coordination and patient safety), for other subcategories (such as beneficiary engagement and population management) non-patient facing MIPS eligible clinicians and groups will need to consider novel practice activities that are within their scope and can improve beneficiary care. We will continue to work with non-patient facing MIPS eligible clinician professional organizations to further develop activities relevant for these clinicians in future years. Our rationale for small practices and practices located in rural areas and in HPSAs is likewise grounded in the resource constraints that these MIPS eligible clinicians face. This rationale is especially compelling given that each activity requires at least 90 days, may not necessarily be conducted in parallel, and requires time allocated to pre-planning and post-planning, which would impact the practice's limited resources.

    All MIPS eligible clinicians would be allowed to self-identify as a certified patient-centered medical home or comparable specialty practice, a non-patient facing MIPS eligible clinician, a small practice, a practice located in a rural area, or a practice in a geographic HPSA or any combination thereof as applicable during attestation following the performance period. We refer readers to https://innovation.cms.gov/Medicare-Demonstrations/Medicare-Medical-Home-Demonstration.html for more information on the Medical Home Model.

    We would validate these self-identifications as appropriate. We proposed that the following scoring would apply to MIPS eligible clinicians who are a non-patient facing MIPS eligible clinician, a small practice (consisting of 15 or fewer professionals), a practice located in a rural area, or practice in a geographic HPSA or any combination thereof:

    • Reporting of one medium-weighted or high-weighted activity would result in 50 percent of the highest potential score.

    • Reporting of two medium-weighted or high-weighted activities would result in 100 percent of the highest potential score.

    In future years, we may adjust the weighting of activities at the MIPS eligible clinician level based on initial patterns of improvement activities reporting. For example, if a MIPS eligible clinician reports on the same medium-weighted activity over several performance periods, in a subsequent year that MIPS eligible clinician may not be allowed to continue to select that same activity. This is because section 1848(q)(2)(C)(v)(III) of the Act provides that the intent of the improvement activities performance category is to demonstrate improvement over time and not simply to demonstrate the same benefit from year to year. Specifically, the statute provides that an activity is expected, when effectively executed, to result in improved outcomes, which would be demonstrated over time. If a MIPS eligible clinician reports on the same activity from year to year without showing improved outcomes, that would not be in line with the spirit of the statute.

    For example, continuing to provide expanded practice access year after year would not demonstrate improved outcomes over time. Further, should the weighting of activities change in future years, we may also adjust the improvement activities performance category point target accordingly. We requested comment on our proposed approach to score the improvement activities performance category, and solicited comment on alternative methodologies for the improvement activities performance category. We sought to assure equity in scoring MIPS eligible clinicians while still considering activity variation, impact and burden.

    The following is a summary of the comments we received regarding our proposal to calculate the improvement activities performance category score.

    Comment: Commenters requested that CMS reduce the complexity in scoring, especially since improvement activities is a new performance category. One commenter disagreed with the complexity of the MIPS final score methodology, including for the improvement activities performance category, because it is difficult for physicians to understand, and to plan for the future.

    Response: To address confusion regarding our proposal for calculating the improvement activities performance category score, we first explain in section II.E.5.f.(3) of this final rule with comment period the number of activities that a MIPS eligible clinician or group must select to achieve the highest score. Under that same section, we also explain the number of activities that a small practice, a practice located in a rural area or geographic health professional shortage area, and non-patient facing MIPS eligible clinicians must select in order to achieve the highest score. Under section II.E.6.a.(4)(a) of this final rule with comment period, we explain the number of points that a medium-weighted activity and a high-weighted activity are worth for a MIPS eligible clinician or group, and the number of points each is worth for a small practice, a practice located in a rural area or geographic health professional shortage area, and non-patient facing MIPS eligible clinicians.

    In section II.E.6.a.(4)(d) of this final rule with comment period, we explain that the total number of points achievable for the improvement activities performance category is now 40 points, since the maximum number of improvement activities a MIPS eligible clinician or group would have to report to achieve the highest score for improvement activities is four. This means that 40 points is the denominator for the improvement activities performance category. If a medium-weighted activity is worth 10 points and a MIPS eligible clinician reported four such activities, that would result in a total of 40 points (4 activities × 10 points each). A medium-weighted activity and a high-weighted activity are double-weighted for a small practice, a practice located in a rural area or geographic health professional shortage area, and non-patient facing MIPS eligible clinicians. We also arrive at 40 points for these practices and clinicians because the most they need to select is two medium-weighted activities that are double-weighted (20 points × 2), which is equal to 40 points, or one high-weighted activity that is double-weighted (40 points × 1), which is equal to 40 points.

    Comment: Some commenters requested that CMS specify how many MIPS eligible clinicians in each group must participate in each project in order to provide the points for the entire group. Other commenters were confused as to whether everyone in the group or TIN had to be a certified patient-centered medical home to receive the highest score.

    Response: For the transition year of the MIPS program, there are no minimum participation thresholds established at the group level. There are also no thresholds for the number of practice sites within the same TIN that must be certified as a patient-centered medical home to receive the highest score. We anticipate that as we gain experience with the improvement activities category this may be modified in future years.

    Comment: One commenter requested that bonus points be applied to the calculated score for prior year awards.

    Response: We will not award bonus points for the improvement activities performance category in the transition year but will continue to monitor trends in the program to determine the need for a bonus in the future. We also clarify that we cannot give bonus points for an activity or award given outside of the program performance.

    Comment: Several commenters supported the proposal that non-patient facing MIPS eligible clinicians select two activities, recognizing that the MIPS statute requires consideration of special circumstances for these types of clinicians. One commenter did not support the proposed policy allowing “non-patient facing” providers to perform a single activity in the improvement activities category to achieve one-half of the total points toward the improvement activities score and recommended that we hold all clinicians to the same standard.

    Response: We believe there are several subcategories, such as beneficiary engagement and expanded practice access, that may limit a non-patient facing MIPS eligible clinician's access to the broader list of activities more than for other types of practices, and we believe it is reasonable to limit the number of activities required for non-patient facing MIPS eligible clinicians.

    Comment: Commenters generally expressed their support for the approach of reducing improvement activities category requirements for non-patient facing MIPS eligible clinicians and groups, as well as clinicians practicing in rural areas or health professional shortage areas. One commenter disagreed with our proposed approach, however, noting that non-patient facing MIPS clinicians should be able to obtain the highest potential score for the improvement activities performance category without special modifications to improvement activities scoring. Another commenter suggested increasing the number of clinicians for small practices to 25 for purposes of the improvement activities category.

    Response: We agree with commenters that supported reducing the improvement activities category requirements for non-patient facing MIPS eligible clinicians to two medium-weighted activities, or one high-weighted activity, and this policy is consistent with the statute, which states that the Secretary shall give consideration to the circumstances of professional types who typically furnish services that do not involve face-to-face interaction with the patient. We are finalizing our proposal to allow for either two medium or one high-weighted activity for these types of practices.

    Comment: Commenters requested clarification regarding the need to self-identify during attestation following the performance period as a MIPS eligible clinician or group participating in an APM, certified patient-centered medical home or comparable specialty practice.

    Response: We clarify that for MIPS eligible clinicians or groups participating in an APM, self-identification by attestation following the performance period is not necessary. For eligible clinicians or groups participating in a certified patient-centered medical home or comparable specialty practice, however, self-identification will be required.

    After consideration of the comments, we are not finalizing our proposal to require achievement of 60 points to receive the highest score for the improvement activities performance category. Rather, we are only requiring a total of 40 points to receive the highest score for the improvement activities performance category. In alignment with the reduction in total points required, we are finalizing the following scoring, which will apply to MIPS eligible clinicians who are a non-patient facing clinician, a small practice, a practice located in a rural area, or a practice in a geographic HPSA, or any combination thereof:

    • Reporting of one medium-weighted activity would result in 20 points or one-half of the highest score.

    • Reporting of two medium-weighted activities would result in 40 points or the highest score.

    • Reporting of one high-weighted activity would result in 40 points or the highest score.

    In alignment with the reduction in total points required, we are finalizing the following scoring that will apply to MIPS eligible clinicians who are not a non-patient facing clinician, a small practice, a practice located in a rural area, or a practice in a geographic HPSA:

    • Reporting of one medium-weighted activity would result in 10 points which is one-fourth of the highest score.

    • Reporting of two medium-weighted activities would result in 20 points which is one-half of the highest score.

    • Reporting of three medium-weighted activities would result in 30 points which is three-fourths of the highest score.

    • Reporting of four medium-weighted activities would result in 40 points which is the highest score.

    • Reporting of one high-weighted activity would result in 20 points which is one-half of the highest score.

    • Reporting of two high-weighted activities would result in 40 points which is the highest score.

    • Reporting of a combination of medium-weighted and high-weighted activities would result in a total number of points calculated based on the number of activities selected and the weighting assigned to each activity (number of medium-weighted activities selected × 10 points + number of high-weighted activities selected × 20 points).

    The most any MIPS eligible clinician or group can achieve for the improvement activities performance category is 40 points, so if more activities are selected than are needed to reach 40 points (for example, more than four medium-weighted activities), the total points that can be achieved is still 40 points. We refer readers to section II.E.5.g. of this final rule with comment period regarding activities in the improvement activities performance category that would also qualify for a bonus under the advancing care information performance category. This bonus would be calculated under the advancing care information performance category and not under the improvement activities performance category.

    We also are not finalizing Table 24 of the proposed rule, which provided an example of the scoring methodology based on a highest potential score of 60 points for the improvement activities performance category (81 FR 28267). We are instead finalizing Tables 27 and 28, which illustrate the sample scoring methodology for the improvement activities performance category based on the policy of a highest potential score of 40 points that we are finalizing in this final rule with comment period. The first example, in Table 27, illustrates a sample scoring methodology for the improvement activities category for a MIPS eligible clinician that is not an APM participant, a certified patient-centered medical home or comparable specialty practice, or a Medical Home Model; does not qualify as a small practice or a practice located in a rural area or geographic HPSA; and is not a non-patient facing MIPS eligible clinician.

    Table 27—Improvement Activities Performance Category Scoring Example 1

    For Midsize Practice (not rural, HPSA, or non-patient facing):

    Activity | Subcategory | Total possible points | Relative weight* | Total score
    Activity 1 (Medium Weighted) | Population Management | 10 | 1 |
    Activity 2 (High Weighted) | Expanded Practice Access | 20 | 1 |
    Total | | 30 | | 30/40 points

    * Relative weight is based on whether the MIPS eligible clinician is a small, rural, or geographic HPSA practice or a non-patient facing MIPS eligible clinician.

    Table 28 illustrates two examples of the scoring methodology for MIPS eligible clinicians that are small, rural, or geographic HPSA practices or are non-patient facing MIPS eligible clinicians.

    Table 28—Improvement Activities Performance Category Scoring Example 2

    For Small, Rural, HPSA Practice or Non-Patient Facing Clinician:

    Activity | Subcategory | Total possible points | Relative weight* | Total score
    Clinician #1: Activity 1 (Medium Weighted) | Population Management | 10 | 2 | 20 points
    Clinician #1: Activity 2 (Medium Weighted) | Integrated Behavioral and Mental Health | 10 | 2 | 20 points
    Clinician #1: Total | | | | 40/40 points
    Clinician #2: Activity 1 (High Weighted) | Patient Safety and Practice Assessment | 20 | 2 | 40 points
    Clinician #2: Total | | | | 40/40 points

    * Relative weight is based on whether the MIPS eligible clinician is a small, rural, or geographic HPSA practice or a non-patient facing MIPS eligible clinician.

    We also finalize our proposal to calculate a score of zero points for any MIPS eligible clinician, except for an APM participant, that does not report at least one activity. We further finalize that MIPS eligible clinicians or groups participating in APMs are not required to self-identify as part of an APM, but all MIPS eligible clinicians that are a certified patient-centered medical home or comparable specialty practice, a non-patient facing MIPS eligible clinician, a small practice, a practice located in a rural area, or a practice in a geographic HPSA, or any combination thereof, will be required to self-identify as applicable during attestation following the performance period. We will validate these self-identifications as appropriate.

    (5) Scoring the Advancing Care Information Performance Category

    We refer readers to section II.E.5.g.(6) of this final rule with comment period, for our final methodology for scoring the advancing care information performance category.

    b. Calculating the Final Score

    Section II.E.6.a. of the proposed rule describes our proposed methodology for assessing and scoring MIPS eligible clinician performance for each of the four performance categories (81 FR 28248-28268). In this section, we proposed the methodology to determine the composite performance score (now called the final score) based on the scores for each of the four performance categories. We proposed to define at § 414.1305 the final score as a composite assessment (using a scoring scale of 0 to 100) for each MIPS eligible clinician for a specific performance period determined using the methodology for assessing the total performance of each MIPS eligible clinician according to the performance standards for the applicable measures and activities for each applicable performance category. The final score is the sum of the products of each performance category score and each performance category's assigned weight, multiplied by 100.

    (1) Formula To Calculate the Final Score

    Section 1848(q)(5)(A) of the Act requires the Secretary to develop a methodology for assessing the total performance of each MIPS eligible clinician according to the performance standards for the applicable measures and activities for each performance category applicable to such clinician for a performance period, and using the methodology, provide for a final score (using a scoring scale of 0 to 100) for each MIPS eligible clinician for the performance period. Additionally, sections 1848(q)(5)(E) and (F) of the Act address the weights for each of the performance categories in the final score.

    To create a final score from 0-100 based on the individual performance category scores, we proposed to multiply the score for each performance category by the assigned weight for the performance category. We provided in Table 25 of the proposed rule (81 FR 28269) the weights for each performance category for the 2019, 2020, and 2021 MIPS payment years. The resulting weighted performance category scores would be summed to create a single final score. As described in section II.E.2. of the proposed rule (81 FR 28176-28177), we proposed that the identifier for MIPS performance would be the same for all four performance categories, and therefore, the methodology to calculate a final score would be the same for both individual and group performance.

    The following equation summarizes the proposed final score calculation at § 414.1380(c): Final score = [(quality performance category score × quality performance category weight) + (cost performance category score × cost performance category weight) + (improvement activities performance category score × improvement activities performance category weight) + (advancing care information performance category score × advancing care information performance category weight)] × 100.
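    For illustration only, the proposed calculation can be sketched as follows; the function name and the example category scores are hypothetical, and the weights shown are the values proposed for the 2019 MIPS payment year.

# Illustrative sketch of the final score calculation at § 414.1380(c); not codified text.
# Each performance category score is expressed as a fraction between 0 and 1, and the
# category weights sum to 1, so the result falls on the statutory 0-to-100 scale.

def final_score(category_scores, category_weights):
    """Return the MIPS final score (0 to 100): the weighted sum of category scores x 100."""
    assert category_scores.keys() == category_weights.keys()
    return 100 * sum(category_scores[c] * category_weights[c] for c in category_scores)


# Hypothetical example using the weights proposed for the 2019 MIPS payment year
# (quality 50 percent, cost 10 percent, improvement activities 15 percent, and
# advancing care information 25 percent); the category scores shown are invented.
scores = {"quality": 0.80, "cost": 0.70, "improvement_activities": 1.00, "aci": 0.90}
weights = {"quality": 0.50, "cost": 0.10, "improvement_activities": 0.15, "aci": 0.25}
print(round(final_score(scores, weights), 1))  # (0.40 + 0.07 + 0.15 + 0.225) x 100 = 84.5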

    We did not receive comments on our proposal to define at § 414.1305 the final score as a composite assessment (using a scoring scale of 0 to 100) for each MIPS eligible clinician for a specific performance period.

    We did receive several comments on our proposal to define at § 414.1380(c) the MIPS final score calculation.

    Comment: A few commenters stated the proposed scoring standards are confusing and complex and suggested that CMS revise the standards to produce a scoring formula that is streamlined and easier to understand. Several commenters simply believed the final score approach is “too complex.” Several commenters noted that the scoring formula for the MIPS final score should be streamlined and that scoring across the performance categories should be more integrated. Commenters raised concern that, due to the complexity of the formulas, there would be an increased risk that scoring would lack accuracy and not reflect the philosophy behind this rule.

    Response: We address performance category scoring standards in section II.E.6.a.(2), II.E.6.a.(3), II.E.6.a.(4), and II.E.6.a.(5) of this final rule with comment period. We address our approach to a unified scoring system in MIPS at II.E.6.a.(1)(b) of this final rule with comment period. The weights of the MIPS performance categories to determine the final score are specified in section 1848(q)(5)(E) of the Act. Therefore, we must establish a formula for calculating the final score based upon the differing category weights as prescribed by the statute. To properly calculate a weighted score for each performance category, we must first calculate the performance category scores and then apply the statutory weights before adding the weighted scores together to determine the final score. The approach we have proposed meets the statutory requirements and will accurately reflect an eligible clinician's performance.

    We have aligned the approach to scoring across the performance categories. Measures in the quality, cost, and advancing care information performance categories are scored on a point scale between 0 and 10. The measures and activities within each performance category are designed to measure performance on different aspects of high-value health care; therefore, the performance requirements and scoring calculations are differentiated across the performance categories as appropriate.

    Comment: Other commenters believed that there is no standard for quality care to form the basis for a MIPS final score. The commenters also stated that the quality care standards should be specific within a specialty.

    Response: We believe the performance standards we are adopting represent appropriate standards of quality care for MIPS eligible clinicians to strive to meet. We will take the commenters' views on the MIPS scoring methodology under advisement as we continue its development.

    After consideration of the comments, we are codifying our final score definition and final score formula with minor changes for accuracy and to change the labeling of composite performance score to final score. At § 414.1305, final score means a composite assessment (using a scoring scale of 0 to 100) for each MIPS eligible clinician for a performance period determined using the methodology for assessing the total performance of a MIPS eligible clinician according to performance standards for applicable measures and activities for each performance category. The final score is the sum of each of the products of each performance category score and each performance category's assigned weight, multiplied by 100. At § 414.1380(c), we finalize that each MIPS eligible clinician receives a final score of 0 to 100 points equal to the sum of each of the products of each performance category score and each performance category's assigned weight, multiplied by 100.

    (a) Accounting for Risk Factors

    Section 1848(q)(1)(G) of the Act requires us to consider risk factors in our scoring methodology. Specifically, that section provides that the Secretary, on an ongoing basis, shall, as the Secretary determines appropriate and based on individuals' health status and other risk factors, assess appropriate adjustments to quality measures, cost measures and other measures used under MIPS and assess and implement appropriate adjustments to payment adjustments, final scores, scores for performance categories or scores for measures or activities under the MIPS. In doing this, the Secretary is required to take into account the relevant studies conducted under section 2(d) of the IMPACT Act of 2014 and, as appropriate, other information, including information collected before completion of such studies and recommendations. ASPE is conducting studies on the issue of risk adjustment for socioeconomic status on quality measures and cost measures as required by section 2(d) of the IMPACT Act and expects to issue a report to Congress in October 2016. We will closely examine the ASPE studies when they are available and incorporate findings as feasible and appropriate through future rulemaking. We also note that several MIPS measures, as appropriate, include risk adjustment in their measure specifications. For example, outcome measures in the quality performance category generally have risk adjustment embedded in the measure calculation specification, while process measures generally do not. Similarly, in the cost performance category, the proposed total per capita costs for all attributed beneficiaries measure is adjusted for demographic and clinical factors. That measure also has a specialty adjustment that is applied after the measure calculation to account for differences in specialty mix within a practice. The MSPB measure and other cost measures have different risk adjustments that are specific to the individual measure. For the transition year of MIPS (MIPS payment year 2019), for the quality and cost performance categories, we proposed to use the measure-specific risk adjustment for all measures (where applicable), as well as the additional specialty adjustment for the total per capita costs for all attributed beneficiaries.

    We invited public comments on this proposal. For discussion of comments specific to risk adjustment for sociodemographic and/or socioeconomic factors we refer readers to section II.E.5.b.(6) of this final rule with comment period.

    The following is a summary of the comments we received regarding our proposal to use the measure-specific risk adjustment for all measures (where applicable), as well as the additional specialty adjustment for the total per capita costs for all attributed beneficiaries measure.

    Comment: Several commenters suggested that CMS undertake additional specialty adjustments to compare specialists and similarly situated eligible clinicians. These commenters believe CMS should group and compare MIPS eligible clinicians by patient profile rather than comparing all eligible clinicians to one another.

    Response: We have previously reviewed the option to segment eligible clinicians' measurement and scoring across geography, specialty, patient mix and other criteria. Such an approach may provide an advantage to certain eligible clinician types who historically have scored lower on performance measures. However, we are promoting and incentivizing high performance and identified the scoring approach as best suited for this purpose. Additionally, because we have aimed to make MIPS scoring simple to understand, we decided not to implement a complex system with multiple benchmarks for sub-groups.

    Comment: Many commenters expressed concern that under MIPS, eligible clinicians caring for poor and/or clinically complex patients will be unfairly penalized when compared with physicians caring for healthier patients. As with socioeconomic status, commenters believe MIPS eligible clinicians with higher risk patients should not be penalized for poor outcomes due to factors outside of their control. These commenters recommended that CMS risk adjust for clinical severity and complex patients.

    Response: We have incorporated specialty adjustment into the total per capita cost measure under the cost performance category, which will account for specialties focused on high-cost procedures. While we agree certain patients with additional comorbidities often require additional care, we are concerned additional adjustment for clinical severity may have a tendency to mask poor performance. We will closely examine the ASPE studies when they are available, along with additional sources of valid information, and incorporate the findings as feasible and appropriate through future rulemaking.

    Comment: Several commenters recommended that CMS adjust specifically for rural-relevant socio-demographic factors. One commenter referenced the 2014 Update of the Rural-Urban Chartbook, which provides data on rural areas and riskier behaviors, and pointed out that Congress provides cost-based reimbursement in rural settings in recognition of the additional costs of providing low-volume services.

    Response: We appreciate commenter feedback on the role of rural relevant socio-demographic factors and will consider this information for future rulemaking. MIPS is intended to support the larger objective of ensuring excellent care for patients regardless of their geographic area. We will engage in further study to gauge the appropriateness of risk adjusting for sociodemographic factors, including those specific to rural populations, by reviewing the findings of the ASPE studies when they are available, along with other sources of information. In addition, we will actively monitor MIPS scoring outcomes to provide fair treatment for MIPS eligible clinicians serving rural areas.

    Comment: One commenter believes CMS should release the actual variables, coefficients and equations used for risk adjustment.

    Response: We have publicly released, and will continue to publicly release, information regarding our approach to risk adjustment for measures. However, as the variables and coefficients are frequently revised to improve system accuracy and efficiency, it would not be practical to provide information of this type in a regulation.

    Comment: One commenter recommended that CMS require reporting mechanisms that allow stratification by demographic characteristics and also add age to the list of demographic factors.

    Response: Calculation of performance by subgroup may be one way to identify and measure disparities, and could potentially help meet the objectives under the improvement activities subcategory “Achieving Health Equity”. We may consider such an approach in future rulemaking as we review approaches and recommendations, such as those from ASPE, for including sociodemographic evaluation in CMS programs.

    After consideration of the comments, we are finalizing our proposal for the quality and cost performance categories to use the measure-specific risk adjustment for all measures (where applicable), as well as the additional specialty adjustment for the total per capita costs measure. Cost measures in the cost performance category are risk adjusted as previously discussed in detail at 77 FR 69317 through 69318 and referenced in section II.E.5.e.(3). Measures finalized for MIPS (see Tables A through D in the Appendix) may be risk adjusted as described in the measure specification using statistical processes to identify and adjust for extraneous variables not associated with care. However, many quality measures are process measures for which the measure outcome is not subject to influence by factors outside the eligible clinicians' control.

    (2) Final Score Performance Category Weights

    (a) General Weights

    Section 1848(q)(5)(E)(i) of the Act specifies weights for the performance categories included in the MIPS final score: in general, 30 percent for the quality performance category, 30 percent for the cost performance category, 25 percent for the advancing care information performance category, and 15 percent for the improvement activities performance category. However, that section also specifies different weightings for the quality and cost performance categories for the first and second years for which the MIPS applies to payments. Section 1848(q)(5)(E)(i)(II)(bb) of the Act specifies that for year 1, not more than 10 percent of the final score will be based on the cost performance category and for year 2, not more than 15 percent will be based on cost performance category. Under section 1848(q)(5)(E)(i)(I)(bb) of the Act, the weight of the quality performance category for each of the first 2 years will increase by the difference of 30 percent minus the weight specified for the cost performance category for the year.

    We have proposed the performance category weights for the first MIPS payment year of 2019. In section II.E.5.e.(2) of the proposed rule (81 FR 28198), we proposed to set the cost performance category weight at 10 percent for the 2019 payment year and 15 percent for the 2020 payment year. Correspondingly, in section II.E.5.b.(2), we proposed to set the quality performance category weight to 50 percent for the 2019 payment year and 45 percent for the 2020 payment year (81 FR 28185). The quality performance category weight proposal is based on the 30 percent required by statute for the quality performance category plus the difference of 30 percent minus the weight of the cost performance category for the year, as required by section 1848(q)(5)(E)(i)(I)(bb) of the Act. As specified in section 1848(q)(5)(E)(i) of the Act, the weights for the other performance categories are 25 percent for the advancing care information performance category and 15 percent for the improvement activities performance category. Section 1848(q)(5)(E)(ii) of the Act provides that in any year in which the Secretary estimates that the proportion of EPs (as defined in section 1848(o)(5) of the Act) who are meaningful EHR users (as determined under section 1848(o)(2) of the Act) is 75 percent or greater, the Secretary may reduce the applicable percentage weight of the advancing care information performance category in the final score, but not below 15 percent, and adjust the weighting of the other performance categories. We refer readers to our policies concerning section 1848(q)(5)(E)(ii) of the Act in section II.E.5.g.(6)(e) of this final rule with comment period.

    We received comments on the proposed weights of the MIPS performance categories which are addressed in section II.E.5.b.(2) for quality, section II.E.5.e.(2) for cost, section II.E.5.f.(2) for improvement activities and section II.E.5.g.(2) for advancing care information. As noted in those sections, many commenters expressed concern regarding the proposed weight for the cost performance category. After consideration of the comments and for the reasons stated in those sections, we are adjusting our proposed category weights for the first 2 years of MIPS. We are finalizing that for the first MIPS payment year (2019), the quality performance category will account for 60 percent of the final score and the cost performance category will account for 0 percent of the final score. We are also finalizing that for the second MIPS payment year (2020), the quality performance category will account for 50 percent of the final score and the cost performance category will account for 10 percent of the final score. The final score weights for the improvement activities and advancing care information performance categories are specified in section 1848(q)(5)(E)(i) of the Act, and we did not propose to deviate from those values.

    Table 29 summarizes the weights specified for each performance category under section 1848(q)(5)(E)(i) of the Act and in accordance with our final policies which are summarized at § 414.1380(c)(1) and detailed at §§ 414.1330(b), 414.1350(b), 414.1355(b), and 414.1375(a).

    Table 29—Final Weights by Performance Category

    Performance category | 2019 MIPS payment year (%) | 2020 MIPS payment year (%) | 2021 MIPS payment year and beyond (%)
    Quality | 60 | 50 | 30
    Cost | 0 | 10 | 30
    Improvement Activities | 15 | 15 | 15
    Advancing Care Information* | 25 | 25 | 25

    * The weight for advancing care information could decrease (not below 15 percent) if the Secretary estimates that the proportion of physicians who are meaningful EHR users is 75 percent or greater. The remaining weight would then be reallocated to one or more of the other performance categories.

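    For illustration only, the finalized weights in Table 29 can be represented as simple data and checked against the statutory relationship described above; the layout and names below are ours and are not codified text.

# Illustrative representation of the finalized performance category weights in Table 29;
# not codified text, and the dictionary layout and names are ours. Each entry gives the
# weight, as a fraction of the final score, for the 2019, 2020, and 2021-and-beyond MIPS
# payment years.
FINAL_WEIGHTS = {
    #                              2019   2020   2021+
    "quality":                     (0.60, 0.50, 0.30),
    "cost":                        (0.00, 0.10, 0.30),
    "improvement_activities":      (0.15, 0.15, 0.15),
    "advancing_care_information":  (0.25, 0.25, 0.25),  # may be reduced, but not below 0.15
}

for year_index in range(3):
    # The weights for each payment year account for 100 percent of the final score.
    assert abs(sum(w[year_index] for w in FINAL_WEIGHTS.values()) - 1.0) < 1e-9
    # For the first two years, the quality weight equals 30 percent plus the difference of
    # 30 percent minus the cost weight, per section 1848(q)(5)(E)(i)(I)(bb) of the Act.
    if year_index < 2:
        expected_quality = 0.30 + (0.30 - FINAL_WEIGHTS["cost"][year_index])
        assert abs(FINAL_WEIGHTS["quality"][year_index] - expected_quality) < 1e-9
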
    (b) Flexibility for Weighting Performance Categories

    Under section 1848(q)(5)(F) of the Act, if there are not sufficient measures and activities applicable and available to each type of MIPS eligible clinician involved, the Secretary shall assign different scoring weights (including a weight of zero) for each performance category based on the extent to which the category is applicable and for each measure and activity based on the extent to which the measure or activity is applicable and available to the type of MIPS eligible clinician involved.

    In section II.E.6.a (81 FR 28248-28268) and section II.E.5.g.(8) (81 FR 28230-28234) of the proposed rule, we describe scenarios where certain MIPS eligible clinicians might not receive a performance category score in the quality, cost, or advancing care information performance categories. We proposed that in such scenarios we would use the authority under section 1848(q)(5)(F) of the Act to assign a weight of zero to the performance category and redistribute the weight for that performance category or categories as described in the next section.

    Below we summarize these scenarios from the proposed rule. However, our transition year policies and modifications in this final rule to simplify scoring affect many of these scenarios, so we describe both the proposed scenario and how our final policies have impacted that scenario.

    For the quality and cost performance categories, in the proposed rule (81 FR 28269-28270), we stated our belief that having sufficient measures applicable and available meant that we are able to reliably calculate a score for the measures that adequately captures and reflects the performance of the MIPS eligible clinician. For the quality and cost performance categories, we proposed in sections II.E.6.a.(2)(d) (81 FR 28254-28255), II.E.6.a.(3)(a) (81 FR 28259-28260), and II.E.6.a.(3)(d) (81 FR 28260-28261) of the proposed rule that we would not calculate a performance category score if a MIPS eligible clinician does not have any measures with the required case minimum or any measures with a sufficient number of MIPS eligible clinicians to create a benchmark. We had proposed that measures that do not meet the required case minimum or a sufficient number of MIPS eligible clinicians to create a benchmark would be excluded from scoring, and the MIPS eligible clinician would not receive a quality or cost performance category score. (Note that this situation is different from a MIPS eligible clinician who elects not to submit any quality measures. A MIPS eligible clinician who elects not to submit any quality measures would receive a quality performance category score of zero.) In section II.E.6.a.(2) of this final rule with comment period, we note that this policy has changed for the quality performance category. We established a policy to assign 3 points for scenarios where a MIPS eligible clinician has quality measures that do not meet case minimum thresholds, do not meet data completeness criteria, or do not have a benchmark. As we noted in those sections, we believe that in the initial years of MIPS, providing a set number of points for these types of measures rather than not scoring these measures will further incentivize clinicians' participation in the MIPS. We continue to believe MIPS eligible clinicians who would have no scored measures for a performance category under our proposals would not have sufficient measures applicable and available for that performance category; however, with the new measure scoring policy in the quality performance category, we anticipate that fewer MIPS eligible clinicians will have no scored measures. Therefore, in almost all cases, we anticipate a MIPS eligible clinician would receive a quality performance category score. The only exception would be the rare circumstance that a MIPS eligible clinician does not have any measures that are relevant to the clinician's practice.

    In the proposed rule, we anticipated that most MIPS eligible clinicians would select the measures for the quality performance category that are most relevant to their practice and that in most cases, the measures they select would meet the required case minimum. We planned to monitor measure selection trends under the performance category and would revise this policy if it appears MIPS eligible clinicians are reporting measures that are not relevant to their practice or measures that do not meet the required case minimum. With the new 3-point policy, we do not believe MIPS eligible clinicians would purposefully select measures with low case volume in order to avoid a score. Rather, we believe that the overwhelming majority of MIPS eligible clinicians aim to meet our performance criteria in the most straightforward manner possible. As described in II.E.5.b.(3)(a)(i) and II.E.6.a.(2)(d) of this final rule with comment period, we will continue to monitor the selection of measures and may adjust policies if we determine MIPS eligible clinicians are not reporting measures for which they can be scored.

    In the cost performance category, we believe MIPS eligible clinicians who are not attributed enough cases to be reliably measured should not be scored for the performance category. We have finalized in section II.E.5.e. of this final rule with comment period, the measures for the cost performance category; however, if a MIPS eligible clinician is not attributed a sufficient number of cases for a measure (in other words, has not met the required case minimum for the measure) or if a measure does not have a benchmark, then the measure will not be scored for that clinician in accordance with the final policy in section II.E.6.(a)(3) of this final rule with comment period. However, while we are scoring cost measures in the transition year of MIPS (MIPS payment year 2019), they are not contributing to transition year final scores as we have set the cost performance category weight to 0 percent in the transition year.

    We refer readers to section II.E.5.g.(8) of this final rule with comment period for a detailed discussion of the scenarios in which a MIPS eligible clinician may not have sufficient measures applicable and available under the advancing care information performance category. For the improvement activities performance category, however, we envision that all MIPS eligible clinicians would have sufficient activities applicable and available and did not propose any scenario where a MIPS eligible clinician would not receive an improvement activities performance category score.

    In addition to scenarios where a MIPS eligible clinician would have no scored measures for a performance category, we stated in the proposed rule that we believe there may be scenarios in which a MIPS eligible clinician would have too few scored measures under the quality performance category for us to reliably calculate a performance category score that is worth half the weight of the final score for the 2019 MIPS payment year. We proposed that if a MIPS eligible clinician has fewer than three scored quality measures (either submitted measures or measures calculated from administrative claims data) for a performance period, we would consider the MIPS eligible clinician not to have a sufficient number of measures applicable and available for the 2019 MIPS payment year quality performance category weight and would therefore lower the weight of the quality performance category. In this situation, we stated in the proposed rule that the MIPS eligible clinician has a quality performance category score, but has data for only one or two scored measures, which is not a sufficient number of measures for the quality performance category because the quality performance category would constitute half of the final score for the 2019 MIPS payment year. In addition, as described in the next section, for MIPS eligible clinicians that are not scored on the cost or advancing care information performance category, we proposed to increase the weight of the quality performance category. For these reasons, we proposed that for the transition year of MIPS (MIPS payment year 2019), the quality performance category requires a sufficient number of measures to justify its weight in the final score. We noted we would reconsider this policy in future years as the weights for the performance categories change. We proposed that we would consider implementing a similar policy for the cost performance category for future years, but not for the transition year of MIPS based upon the lower weighting of the cost performance category.

    In the proposed rule (81 FR 28186), we proposed for the quality performance category, generally, that MIPS eligible clinicians submit a minimum of six measures for scoring in MIPS. In addition, we proposed to include up to three population-based measures derived from claims data. As described in section II.E.6.a.(2) of the proposed rule (81 FR 28250-28259), a MIPS eligible clinician may submit a measure that is not scored, either because the measure did not meet the required case minimum to be reliably measured or because fewer than 20 MIPS eligible clinicians with sufficient volume submitted a measure through a similar reporting mechanism and a benchmark could not be created for the performance or baseline period. We reiterated that a measure that is not scored due to not meeting the required case minimum or lacking a measure benchmark is different from a required measure that is not reported. Any required measure that is not reported, or is reported in a way that does not meet the data completeness requirements, would receive a score of zero points and would be considered a scored measure. In section II.E.5.b.(6), we have modified our final policies to reflect that only one of the three population-based measures is being finalized. Additionally, in section II.E.6.a.(2)(d), we have modified our approach for quality measures that fall below case minimum requirements, do not meet data completeness thresholds, or lack a benchmark to include a 3-point measure floor.

    We stated in the proposed rule that we are concerned that if a large percentage of the expected measures are not able to be scored due to not meeting the required case minimums or a missing benchmark, then just one or two measures would contribute disproportionately to the final score because the quality performance category score is worth 30 to 50 percent (depending on the year) of the final score under section 1848(q)(5)(E)(i) of the Act. We did not believe a score for one or two quality measures could capture all the elements of quality performance during a performance period. We believed the lack of a sufficient number of measures for scoring limits the value of quality performance measurement toward the final score. Therefore, we proposed, if a MIPS eligible clinician has only two scored measures (including both submitted measures and measures derived from administrative claims data), to reduce the weight of the quality performance category by one-fifth (for example, from 50 percent to 40 percent in year 1) and redistribute the weight (for example, 10 percent in year 1) proportionately to the other performance categories for which the MIPS eligible clinician did receive a performance category score. If a MIPS eligible clinician has only one scored quality measure, then we proposed to reduce the weight of the quality performance category by two-fifths (for example, from 50 percent to 30 percent in year 1) and redistribute the weight (for example, 20 percent in year 1) proportionately to the other performance categories for which the MIPS eligible clinician did receive a performance category score. Lowering the weight of the quality performance category would be consistent with the relatively low percentage of expected quality measures that are able to be scored.

    We requested comment on these proposals to identify MIPS eligible clinicians without sufficient measures and activities applicable and available and our proposals to reweight those performance categories. We also sought comment on alternative methods for reweighting performance categories for MIPS eligible clinicians without sufficient measures and activities in certain performance categories. We seek to ensure that reweighting would not cause an eligible clinician to be either advantaged or disadvantaged due to a lack of sufficient measures and activities applicable and available, and a corresponding inability to generate a score for a certain performance category.

    The following is a summary of the comments we received regarding our proposal to consider MIPS eligible clinicians with fewer than three scored quality measures as having insufficient measures applicable and available for the 2019 MIPS payment year quality performance category and, therefore, to lower the weight of the quality performance category.

    Comment: Several commenters were opposed to our proposal to reduce the weight of the quality performance category because they were concerned how this might impact specialty clinicians with only one or two measures available to report. For example, these commenters were concerned that continuous shifts in the weights for calculating their final score will make it more difficult to determine goals as they transition to subsequent reporting periods.

    Response: We understand the commenters' concerns with reducing the weight of the quality category for those MIPS eligible clinicians who may lack a sufficient number of applicable and available quality measures. Our proposal considered the potential downside of basing at least half of the final score on fewer than three measures when other performance categories with additional measures were applicable to these MIPS eligible clinicians. After consideration of these comments and other final policies in this final rule with comment period, we are seeking to simplify our approach in the initial years of MIPS to ensure clarity and to encourage eligible clinicians to participate in MIPS and report their quality data. As a result, we intend to maintain a consistent weight for the quality performance category and will score all measures that are submitted or calculated for the MIPS eligible clinician. Required measures that are not submitted will receive a score of zero points.

    We will not finalize our proposed policy to reduce and redistribute the weight of the quality performance category if only one or two measures are scored. We will finalize with modification our proposed policy to reduce and redistribute the quality performance category weight if a MIPS eligible clinician has no scored measures for the quality performance category for the transition year (MIPS payment year 2019), although we believe this scenario will be unlikely. We have modified our approach because, under our policies for the quality performance category for the transition year, we believe it is less likely that a MIPS eligible clinician will have only one or two scored measures. As discussed in section II.E.6.a.(2), all quality performance category measures that are submitted receive at least 3 points. In addition, any required measure that is not submitted receives a score of 0 points. Therefore, a MIPS eligible clinician submitting data as an individual who has at least six measures applicable and available but submits only one measure is still scored on six measures: the submitted measure receives a score of at least 3 points, and the other five measures receive zero points. With this adjustment in the quality scoring, we believe the number of instances where a MIPS eligible clinician has fewer than three scored measures will be reduced. Eliminating the proposed reduction and redistribution of the weight of the quality performance category if only one or two measures are scored further simplifies scoring for the transition year. We refer readers to section II.E.6.b.(2)(c) of this final rule with comment period for discussion of how the quality performance category weight will be redistributed in instances where a MIPS eligible clinician is not scored on any quality measures and receives a null score in the quality performance category.
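    For illustration only, the arithmetic in the example above can be sketched as follows. The 3-point floor for submitted measures, the zero score for required measures that are not submitted, and the six-measure expectation reflect the policies described in this final rule with comment period; the function name and the assumption of a 10-point maximum per measure (consistent with the 0-to-10 measure scale discussed earlier in this section) are ours.

# Illustrative sketch of transition-year quality measure scoring; not codified text.
# Each submitted measure receives at least 3 points (the 3-point floor), even if it
# cannot be reliably scored against a benchmark; each required measure that is not
# submitted receives 0 points. The 10-point maximum per measure is an assumption for
# this example, consistent with the 0-to-10 measure scale discussed in this section.

REQUIRED_MEASURES = 6        # generally, six measures are expected in the quality category
MAX_POINTS_PER_MEASURE = 10  # assumed measure scale for this illustration
MEASURE_FLOOR = 3            # minimum points for any submitted measure


def quality_measure_points(submitted_measure_points):
    """Return (points earned, total possible points) across the six expected measures.

    submitted_measure_points lists the benchmark-based points (0 to 10) for each
    submitted measure; a measure without a benchmark is passed in as 0 and is lifted
    to the 3-point floor.
    """
    scored = [max(points, MEASURE_FLOOR) for points in submitted_measure_points]
    scored += [0] * (REQUIRED_MEASURES - len(scored))  # unsubmitted required measures
    return sum(scored), REQUIRED_MEASURES * MAX_POINTS_PER_MEASURE


# A clinician who submits only one measure, and that measure lacks a benchmark, is still
# scored on six measures: 3 points for the submitted measure and 0 for the other five.
print(quality_measure_points([0]))  # (3, 60)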

    In Table 17 in section II.E.6.a.(2)(c), we summarize two classes of quality measures for the quality performance category: “class 1” measures are those for which performance can be reliably scored against a benchmark, and “class 2” measures are those for which performance cannot be reliably scored against a benchmark. For the transition year (MIPS payment year 2019), we have modified our proposed approach to scoring submitted measures that cannot be reliably scored because they are below the case minimum requirements, lack a benchmark, or do not meet data completeness criteria. These measures will not be scored based on performance against a benchmark, but will receive an automatic three points. We believe this policy will simplify quality performance category scoring. We want to ensure that every clinician that submits quality data will receive a quality performance category score, even if the quality data submitted consist only of class 2 measures. This is particularly important in the transition year because, with a minimum 90-day performance period, we anticipate more MIPS eligible clinicians will submit measures below the case minimum requirements. We selected three points because we did not want to provide more credit for reporting a measure that cannot be reliably scored against a benchmark than for measures for which we can measure performance against a benchmark. Again, any measure that was not submitted would also receive a zero score.

    As noted in this final rule with comment period, we have decided not to finalize our proposed approach to reduce the weight of the quality performance category in the final score if only one or two measures are scored for the following reasons. First, we want to create an opportunity for all MIPS eligible clinicians to participate and succeed in MIPS through minimal quality performance category measure submission during the transition year. Second, we want to create a thoughtful “ramp” into the program for participants that is sensitive to stakeholder concerns. Many commenters in section II.E.6.a.(2)(c) requested that we provide “credit” for measures that were submitted that did not meet the quality submission criteria. In addition to scoring measures on performance, we will give at least 3 points for each measure that is submitted to MIPS, even if these measures are class 2 measures. Measures that are not submitted receive a score of zero. As a result of this policy, we think the number of MIPS eligible clinicians with only one or two scored measures will decrease and that removing the proposed reduction and redistribution of the weight of the quality performance category if only one or two measures are scored further simplifies the MIPS scoring for the transition year. As we gain experience with the MIPS, we will revisit these approaches in future rulemaking. For clarity we refer readers once again to section II.E.6.b.(2)(c) of this final rule with comment period for discussion of how the quality performance category weight will be redistributed in instances where a MIPS eligible clinician is not scored on any quality measures and receives a null score in the quality performance category.

    Comment: Commenters requested clarification on how we plan to identify MIPS eligible clinicians without sufficient measures and activities and whether reweighting will still allow for a maximum final score.

    Response: We note that the reweighting policies ensure that all MIPS eligible clinicians will receive a final score between 0 and 100 points. The only difference is how much the individual performance category scores contribute to the final score.

    In response to the request for clarification on how we would identify clinicians without sufficient measures and activities, we refer readers to sections II.E.5.b.(3)(a)(i) and II.E.6.a.(2)(d) of this rule for a discussion of our validation process to assess whether measures for the quality performance category are applicable and available, section II.E.5.g.(8) of this rule for a discussion of when measures for the advancing care information performance category may not be applicable and available, and section II.E.6.a.(3) of this rule for a discussion of when measures for the cost performance category may not be applicable and available. As we stated in the proposed rule, we believe the activities specified for the improvement activities performance category will always be applicable and available to all MIPS eligible clinicians.

    Comment: A commenter stated they did not believe the quality category should be reweighted when a MIPS eligible clinician has fewer than three quality measures because the commenter believes that all MIPS eligible clinicians should be required to report six measures and that specialists could find at least six quality measures by using cross-cutting measures.

    Response: We share the commenter's goal of having MIPS eligible clinicians report at least six measures. However, we also recognize that not every MIPS eligible clinician may have six measures relevant to their practice. We note that if a MIPS eligible clinician is able to report six measures, yet submits fewer measures, then the MIPS eligible clinician would receive a score of zero for the measures that were not submitted.

    Comment: Multiple commenters supported our proposal for lowering the weight of the quality performance category for MIPS eligible clinicians with fewer than three applicable and available scored measures.

    Response: We appreciate the support from commenters; however, as discussed in this final rule with comment period, in a desire to simplify the scoring process, we are not finalizing the proposal to reduce and redistribute the quality performance category weight for MIPS eligible clinicians with only one or two scored measures. As discussed in section II.E.6.b.(2)(c) of this final rule with comment period, we will only reduce and redistribute the weight of the quality performance category when a MIPS eligible clinician has no scored quality measures, which we believe will be rare.

    Comment: Other commenters disagreed with our proposal to redistribute weight from the quality performance category to other performance categories when fewer than three scored measures are available because these commenters believed that the intent of the MACRA was to limit the weight given to cost and that any redistribution should not include an increase in the weight of cost in the final score.

    Response: As a result of other final policies, the cost category is weighted at zero percent in the final score for the transition year, as detailed in section II.E.6.a.(3)(d); therefore, its weight is not eligible for redistribution to the other performance categories. We also believe section 1848(q)(5)(F) of the Act gives the Secretary discretion to redistribute weight to the cost performance category and thus disagree with commenters that weight should never be redistributed to that category.

    After consideration of the comments, and for the reasons explained in our responses above, we are not finalizing our proposal to reduce and redistribute the weight of the quality performance category when a MIPS eligible clinician has only one or two scored quality measures. Maintaining a consistent quality performance category weight whenever at least one measure can be scored will increase simplicity and predictability of scoring for MIPS eligible clinicians. We may revisit this policy at a future date through rulemaking. See section II.E.6.b.(2)(c) of this final rule with comment period for discussion of how we will reduce and redistribute the weight of the quality performance category to other performance categories when a MIPS eligible clinician has no scored measures in the quality performance category.

    (c) Redistributing Performance Category Weights

    We proposed at § 414.1380(c)(2) to redistribute the weights of the performance categories for MIPS eligible clinicians when there are not sufficient measures and activities applicable and available to them. We proposed to redistribute the weights of the performance categories in the following situations.

    If the MIPS eligible clinician does not receive a cost or advancing care information performance category score, and has at least three scored measures (either submitted measures or those calculated from administrative claims) in the quality performance category, then we proposed to reassign the weights of the performance categories without a score to the quality performance category. We believed this policy was appropriate for several reasons. First, section 1848(q)(5)(E)(i)(I)(bb) of the Act redistributes weight from the cost performance category to the quality performance category in the first 2 years of MIPS. This proposal is consistent with that redistribution logic. In addition, MIPS eligible clinicians have experience reporting quality measures through the PQRS program, and measurement in this performance category is more mature. Finally, for the 2019 MIPS payment year, quality performance would be worth at least half of the final score. By requiring the MIPS eligible clinician to have at least three scored quality measures, we believe the quality performance category would be robust enough to support more weight reassigned to it than other performance categories. We stated that we may revisit this policy in future years as the weight for the cost performance category increases and the weight for the quality performance category decreases.

    We also proposed an alternative that does not reassign all the weight to the quality performance category, but rather reassigns the weight proportionately to each of the other performance categories for which the MIPS eligible clinician has received a performance category score.

    We requested public comments on the proposal to reassign the weights to the quality performance category, as well as the alternate proposal to redistribute proportionately to other performance categories.

    If the MIPS eligible clinician has fewer than three scored measures in the quality performance category, then we proposed to reassign the weights for the performance categories without scores proportionately to the other performance categories for which the MIPS eligible clinician has received a performance category score. We requested comment on this proposal.
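    For illustration only, the two reweighting approaches described above (reassigning the weight of unscored performance categories to the quality performance category, and the alternative of redistributing that weight proportionately to the scored performance categories) can be sketched as follows; the function and variable names are hypothetical, and the example uses the weights proposed for the 2019 MIPS payment year.

# Illustrative sketch of the two reweighting approaches described above; not codified
# text, and the function and variable names are hypothetical. Category weights are
# expressed as fractions of the final score and sum to 1 before and after reweighting.

def reassign_to_quality(weights, unscored_categories):
    """Primary proposal: move the weight of each unscored category to quality."""
    new_weights = dict(weights)
    for category in unscored_categories:
        new_weights["quality"] += new_weights[category]
        new_weights[category] = 0.0
    return new_weights


def redistribute_proportionally(weights, unscored_categories):
    """Alternative proposal (and the approach proposed when fewer than three quality
    measures are scored): spread the weight of each unscored category proportionately
    across the categories that received a score."""
    scored = [c for c in weights if c not in unscored_categories]
    scored_total = sum(weights[c] for c in scored)
    return {c: (weights[c] / scored_total if c in scored else 0.0) for c in weights}


# Hypothetical example using the weights proposed for the 2019 MIPS payment year, for a
# clinician with no advancing care information performance category score:
weights = {"quality": 0.50, "cost": 0.10, "improvement_activities": 0.15, "aci": 0.25}
print(reassign_to_quality(weights, {"aci"}))
# quality -> 0.75, cost -> 0.10, improvement_activities -> 0.15, aci -> 0.0
print(redistribute_proportionally(weights, {"aci"}))
# quality -> 0.667, cost -> 0.133, improvement_activities -> 0.2, aci -> 0.0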

    Finally, because the final score is a composite score, we stated in the proposed rule that we believe the intention of section 1848(q)(5) of the Act is for MIPS eligible clinicians to be scored based on multiple performance categories. Basing a final score on a single performance category, even a robust and familiar performance category like quality, would frustrate that intent. In our proposals, improvement activities is the only performance category that would always have a performance category score. We were particularly concerned about the possibility that a MIPS eligible clinician might, for the reasons discussed above, not have sufficient measures applicable and available for the quality, cost, and advancing care information performance categories, and would only receive a score for the improvement activities performance category. The improvement activities performance category is based on activities that are reported by attestation, not on measured performance. In addition, because the improvement activities performance category is not as mature as the other performance categories, each of which includes certain aspects of existing CMS programs, we were unsure how much variation we would have in the improvement activities performance category. We did not believe it would be equitable to allow MIPS eligible clinicians that attest to receive the maximum points for that performance category and then base the final score solely on the improvement activities performance category. Such a scenario may result in a higher final score and MIPS adjustment factors for some MIPS eligible clinicians based solely on the improvement activities performance category, while other MIPS eligible clinicians are measured based on their performance under the other performance categories. Therefore, we proposed that if a MIPS eligible clinician receives a score for only one performance category, we would assign the MIPS eligible clinician a final score that is equal to the performance threshold described in section II.E.7.c of the proposed rule (81 FR 28273), which means the eligible clinician would receive a MIPS payment adjustment factor of 0 percent for the year. We anticipated this proposal would affect very few MIPS eligible clinicians in year 1 and even fewer in future years as more eligible clinicians are able to report on and receive scores for more of the performance categories.

    We requested public comment on this proposal.

    The following is a summary of the comments we received regarding our proposal to reassign performance category weights to the quality performance category when an eligible clinician cannot be scored in a category and has at least three scored measures in the quality performance category.

    Comment: Multiple commenters supported CMS' proposal to distribute the weights for the advancing care information and cost performance categories to the quality performance category in cases where the category is not scored for an eligible clinician.

    Response: We agree with the commenters. By redistributing weight to the quality performance category, we are providing a clear scoring approach for eligible clinicians who do not receive a score in another performance category. This approach is also in line with section 1848(q)(5)(E)(i) of the Act, which requires that we redistribute weight from the cost category to the quality category during the first 2 years of MIPS. We would like to note that the cost performance category is weighted at 0 percent for the transition year (MIPS payment year 2019), so the cost performance category weight will not need to be redistributed.

    Comment: Several commenters recommended that CMS implement the alternate reweighting proposal (the proportional reassignment of weights to the remaining performance categories) for MIPS eligible clinicians that receive a zero weight in the advancing care information and cost performance categories and have at least three scored quality measures. Commenters who supported this approach did so because they were concerned about disproportionate weighting in the quality performance category.

    Response: MIPS eligible clinicians have prior experience with quality reporting through PQRS and VM. Also, as discussed in section II.E.6.a.(3)(d) of this final rule with comment period, we are weighting the cost performance category at zero percent for the transition year of MIPS (MIPS payment year 2019) and assigning its weight to the quality performance category. As a result, for the transition year, there will be no need to redistribute the weight of the cost performance category if there are not sufficient measures applicable and available under that category. In the event that an eligible clinician does not receive a score for advancing care information, it would not be appropriate to allocate substantial additional weight to improvement activities in the transition year of MIPS before we have gained additional experience with the improvement activities performance category. While we understand commenter concerns about placing additional weight in the quality category, section 1848(q)(5)(E)(i) of the Act seems to favor an approach where quality is given substantial weight in the final score during the first 2 years of MIPS.

    Comment: Several commenters expressed concern that both options for reweighting the remaining performance categories would increase the importance of the quality performance category in determining the final score. These commenters were concerned that allocating additional weight within the final score to the quality performance category could become detrimental to eligible clinicians who do not have a sufficient number of quality measures applicable to their practice.

    Response: While we understand commenter concerns about allocating additional weight to the quality category, we believe our approach of redistributing the weight to quality is consistent with section 1848(q)(5)(E)(i) of the Act, which gives quality substantial weight in the final score during the first 2 years of MIPS. In addition, many eligible clinicians will have prior experience reporting quality measures to PQRS; while, on the other hand, improvement activities is a new performance category without any reporting history. With our transition year policies, we anticipate that the advancing care information performance category will be the one performance category that may need to be reweighted if there are not sufficient measures applicable and available to some MIPS eligible clinicians, as discussed in section II.E.5.g.(8) of this final rule with comment period. Therefore, reallocating additional weight to the quality performance category presents a clear option for rebalancing the final score when the advancing care information performance category is weighted at zero percent.

    Comment: Several commenters suggested that CMS work with affected physicians who have insufficient measures and activities and with physician organizations to determine the best method of reweighting to accommodate the unique needs of various practices.

    Response: We appreciate that developing a single reweighting approach may not satisfy all stakeholders. However, we are not prepared to develop specialty-specific reweighting schema at this time, and doing so prematurely would impair our ability to maintain simplicity and clearly articulate scoring expectations to MIPS eligible clinicians. We may reassess our approach in future years and do intend to continue our engagement with physician organizations and other stakeholders to incorporate their feedback as appropriate in future rulemaking.

    Comment: Multiple commenters recommended that MIPS eligible clinicians who are unable to report performance categories other than improvement activities should have the option to increase the weight of the improvement activities performance category. These commenters believed that this approach would provide greater flexibility for MIPS eligible clinicians to be measured on activities relevant to their practice.

    Response: The weights for the performance categories are prescribed by statute in section 1848(q)(5)(E) of the Act or determined by the Secretary in accordance with section 1848(q)(5)(F) of the Act. The statute, as written, would not allow for an approach such as the commenters suggest.

    Comment: Many commenters stated that when there are not sufficient measures and activities applicable and available for a MIPS eligible clinician in a performance category, the most appropriate action would be to score the physician as “meets performance standard”, and that the MIPS eligible clinician should be assigned the median score for the performance category. These commenters believed that reweighting may ultimately disadvantage MIPS eligible clinician types who may tend not to have an advancing care information performance category score.

    Response: While we recognize the simplicity of the approach proposed by commenters, it would not be permissible under the statute. Section 1848(q)(5)(F) of the Act stipulates that the Secretary shall assign different scoring weights (including a weight of 0) if there are not sufficient measures and activities applicable and available to the MIPS eligible clinicians. We do not believe that assigning a MIPS eligible clinician a score of “meets performance standard” would be consistent with that statutory requirement. Redistributing final score weight to performance categories in which a MIPS eligible clinician has engaged allows us to produce a composite assessment between 0 and 100 and maintains an eligible clinician's ability to reach 100 percent of the final score even when they cannot be scored in all categories.

    Comment: We received comments from hospital-based eligible clinicians who did not agree with our proposal to reweight their advancing care information performance category to zero in their final score and to reallocate the performance category weight to the quality performance category based upon the Secretary's authority under section 1848(q)(5)(F) of the Act. These commenters did not believe the resulting final score would be representative of their performance in MIPS. The commenters further stated that, in combination with reweighting the cost performance category to zero, doing so for the advancing care information performance category would shift a large and disproportionate amount of weight to the quality performance category. This would result in a significant difference in total quality performance category scores for minor variances in quality measure performance, making it very difficult to earn a high score in the category and in the final score. For example, a score of 99.9 percent versus 100 percent for a quality measure would make a larger difference in the overall quality performance category score if the weight of that performance category is larger than for those MIPS eligible clinicians who also have the opportunity to earn points in the advancing care information performance category. These commenters suggested that an alternate method of reweighting and redistributing the advancing care information performance category score be considered; for example, that the redistributed weight be spread across multiple performance categories and not just quality.

    Response: As discussed in section II.E.5.g.(8)(a)(i) of this final rule with comment period, we believe there may not be sufficient measures applicable and available to hospital-based MIPS eligible clinicians under the advancing care information performance category of MIPS.

    The cost performance category is weighted at zero percent in the final score under our transition year policies. As discussed earlier in this section, we believe it would not be appropriate to allocate substantial additional weight to improvement activities in the transition year of MIPS before we have gained additional experience with the improvement activities performance category. Therefore, while we understand the commenters' concerns about placing additional weight in the quality category, section 1848(q)(5)(E)(i) of the Act seems to favor an approach where quality is given substantial weight in the final score during the first 2 years of MIPS. We may revisit this policy in future years.

    After consideration of the comments, we are codifying at § 414.1380(c)(2) that we will assign different weights than the ones listed in § 414.1380(c)(1) when we determine that there are not sufficient measures and activities applicable and available to MIPS eligible clinicians.

    For the transition year (MIPS payment year 2019), we are codifying with modification our proposal to redistribute the weight of the cost and advancing care information performance categories to the quality performance category when there are not sufficient measures applicable and available to a MIPS eligible clinician under the cost and advancing care information performance categories and thus the clinician does not receive a score for those performance categories. We are not finalizing the requirement that the quality performance category have a minimum of three scored measures in order to redistribute the weight of the cost and advancing care information performance categories to the quality performance category. Maintaining a consistent quality performance category weight whenever at least one measure can be scored will increase simplicity and predictability of scoring for MIPS eligible clinicians while they are learning the program.

    The following is a summary of the comments regarding our proposal to reduce the weight of the quality performance category and redistribute the amount by which it is reduced to the other performance categories, in the event a MIPS eligible clinician has fewer than three scored measures in the quality performance category.

    Comment: Several commenters stated that if a MIPS eligible clinician lacks sufficient measures to report into the quality performance category, then CMS should assign a neutral final score that meets the performance threshold and thus a 0 percent update.

    Response: If there are not sufficient measures applicable and available under the quality performance category, section 1848(q)(5)(F) of the Act directs the Secretary to assign different scoring weights for the performance categories. As stated above, we are not finalizing our proposal to reduce and redistribute the quality performance category weight to other categories if a MIPS eligible clinician has only one or two scored quality measures. We believe our final policies will reduce the instances where a MIPS eligible clinician does not receive any quality performance category score by applying a 3-point minimum score for all quality measures reported in the quality performance category (see section II.E.6.a.(2)(b) of this final rule with comment period). In the event that a MIPS eligible clinician is not scored in the quality performance category because there are not sufficient measures applicable and available, for the transition year (MIPS payment year 2019), we will redistribute the 60 percent weight of the quality performance category so that the performance category weights are 50 percent for advancing care information and 50 percent for improvement activities.

    Comment: Multiple commenters recommended that CMS simplify the final score scoring methodology and our proposals for reweighting to make it easier for MIPS eligible clinicians to understand how to maximize their score. Commenters recommended that CMS balance the value of requiring MIPS eligible clinicians to understand various reweighting scenarios versus clearly laying out the results for MIPS eligible clinicians reporting the data they have available. Commenters also recommended that CMS maintain the weight of the quality category at 50 percent, as MIPS eligible clinicians may be unfamiliar with the improvement activities and advancing care information performance categories. Finally, commenters believed that a streamlined weighting methodology will improve fairness in the absence of greater historical data for certain performance categories.

    Response: We understand the commenters' concerns with complexity in our approach to reweighting the performance categories when a MIPS eligible clinician cannot be scored in one or more categories. In response to these comments and other finalized policies, we are simplifying our approach in the first 2 years of MIPS to ensure clarity and to encourage MIPS eligible clinicians to report their quality data. As discussed in section II.E.5.e.(2) of this final rule with comment period, we are reducing the cost performance category weight to zero percent for the transition year (MIPS payment year 2019) only. We are also making adjustments to quality scoring by providing a 3-point floor for all reported quality measures (see II.E.6.a.(2) of this final rule with comment period). We are also not finalizing our proposal to reduce and redistribute the weight of the quality performance category if MIPS eligible clinicians have only one or two scored quality measures. For the transition year (MIPS payment year 2019), we will redistribute the 60 percent weight of the quality performance category so that the performance category weights are 50 percent for advancing care information and 50 percent for improvement activities in the event that a MIPS eligible clinician is not scored in the quality performance category because there are not sufficient measures applicable and available. All of these modifications will help provide stability and predictability in the MIPS final score methodology.

    After consideration of the comments, and for the reasons discussed in our responses above, we are finalizing a modification of our proposal to reduce the weight of the quality performance category and redistribute the amount by which it is reduced to the other performance categories if the eligible clinician has fewer than three scored quality measures. MIPS eligible clinicians will receive a quality performance category score as long as they are scored on at least one quality measure. We believe it is unlikely that a MIPS eligible clinician will not receive a score for at least one quality measure as a result of our final policy for the transition year to provide a 3-point floor for all reported quality measures in the quality performance category (see II.E.6.a.(2) of this final rule with comment period). In the event a MIPS eligible clinician is not scored on at least one measure in the quality performance category because there are not sufficient measures applicable and available, for the transition year (MIPS payment year 2019), we will redistribute the 60 percent weight of the quality performance category so that the performance category weights are 50 percent for advancing care information and 50 percent for improvement activities. We are finalizing this policy for the MIPS payment year 2019 and will revisit this approach for later years through future rulemaking. With the 3-point floor policy, we anticipate almost all MIPS eligible clinicians will have a quality performance category score. We believe that only in rare circumstances would a MIPS eligible clinician have no applicable and available quality measures. This approach is similar to our proposal but takes into account our final policy to weight the cost performance category at 0 percent in the transition year of MIPS and responds to commenter requests for additional simplicity in our policies for reweighting the performance categories. Table 30 summarizes these final policies.

    The following is a summary of the comments we received regarding our proposal to assign MIPS eligible clinicians with only one scored performance category a final score that is equal to the performance threshold.

    Comment: Several commenters expressed agreement with CMS' proposal to assign a final score that is equal to the performance threshold, resulting in a zero percent adjustment, to MIPS eligible clinicians who receive a score for only one performance category.

    Response: We agree with commenters and will finalize the proposal to require more than one scored performance category for a final score and to assign a final score equal to the performance threshold to a MIPS eligible clinician who has only one performance category score.

    Comment: A few commenters suggested that the minimum number of performance categories for a final score should be based on the assumption that most participants will complete the improvement activities performance category and the advancing care information performance category, and will be able to report on at least one of the remaining cost and quality performance categories.

    Response: We agree that a large number of MIPS eligible clinicians will be able to participate in all performance categories. However, there are instances we have identified in which MIPS eligible clinicians would not receive an advancing care information performance category score or a cost performance category score; therefore, we believe it would be inappropriate not to have policies in place for those MIPS eligible clinicians that do not have measures applicable and available in all performance categories.

    After consideration of the comments, we are finalizing our proposal to assign MIPS eligible clinicians with only one scored performance category a final score that is equal to the performance threshold. We note that with the scoring changes to the quality performance category, we do anticipate that almost all MIPS eligible clinicians will have performance category scores for both quality and improvement activities.

    Based upon the policies we are finalizing, we summarize in Table 30 the potential reweighting scenarios for the transition year of MIPS (MIPS payment year 2019):

    Table 30—Performance Category Redistribution Policies for the Transition Year [MIPS payment year 2019]

Performance category             Weighting for 2019 MIPS    Reweight scenario if no        Reweight scenario if no
                                 payment year (%)           advancing care information     quality performance
                                                            performance category           category score (%)
                                                            score (%)
Quality                          60                         85                             0
Cost                             0                          0                              0
Improvement Activities           15                         15                             50
Advancing Care Information       25                         0                              50

    We do not include a scenario where a MIPS eligible clinician does not receive an improvement activities performance category score. MIPS eligible clinicians that do not submit any improvement activities data receive a zero percent score for that performance category.
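    To illustrate how the redistribution policies summarized in Table 30 operate together with the one-category policy discussed above, the brief sketch below selects transition-year performance category weights based on which categories can be scored. This is an illustrative sketch only; the function name, structure, and Python language are ours and are not part of the regulation.

```python
# Illustrative sketch only (not CMS code): transition-year (2019 MIPS payment year)
# performance category weights under the redistribution policies summarized in Table 30.
# Function and variable names are hypothetical.

def transition_year_weights(has_quality_score: bool, has_aci_score: bool) -> dict:
    """Return final-score weights (as fractions) for the 2019 MIPS payment year."""
    if has_quality_score and has_aci_score:
        # Standard transition-year weights; cost is weighted at zero percent.
        return {"quality": 0.60, "cost": 0.0, "improvement_activities": 0.15, "aci": 0.25}
    if has_quality_score and not has_aci_score:
        # No advancing care information score: its 25 percent moves to quality.
        return {"quality": 0.85, "cost": 0.0, "improvement_activities": 0.15, "aci": 0.0}
    if not has_quality_score and has_aci_score:
        # No quality score: its 60 percent is split between advancing care information
        # and improvement activities.
        return {"quality": 0.0, "cost": 0.0, "improvement_activities": 0.50, "aci": 0.50}
    # Only one scored performance category remains (improvement activities): the final
    # score is instead set at the performance threshold, per the policy discussed above.
    raise ValueError("Fewer than two scored performance categories; assign the performance threshold.")
```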

    7. MIPS Payment Adjustments

    a. MIPS Payment Adjustment Identifier and Final Score Used in the MIPS Payment Adjustment Calculation

    i. MIPS Payment Adjustment Identifier

    As we described in section II.E.2. of the proposed rule (81 FR 28271), we proposed to allow MIPS eligible clinicians to measure performance as an individual, as a group defined by TIN, or as an APM Entity group using the APM scoring standard. However, for purposes of the application of the MIPS payment adjustment factors to payments in accordance with section 1848(q)(6)(E) of the Act (referred to as the MIPS payment adjustment), we proposed to use a single identifier, TIN/NPI, for all MIPS eligible clinicians, regardless of whether the TIN/NPI was measured as an individual, group or APM Entity group. In other words, a TIN/NPI may receive a final score based on individual, group, or APM Entity group performance, but the MIPS payment adjustment would be applied at the TIN/NPI level.

    We proposed to use the single identifier, TIN/NPI, for the MIPS payment adjustment for several reasons. First, the final eligibility status of some clinicians would not be known until after the performance period ends. For example, the calculations determining which clinicians would be excluded from MIPS, such as identifying clinicians that are QPs or are below the low-volume threshold, occur after the performance period ends. Using TIN/NPI would allow us to correctly identify which TIN/NPIs are still MIPS eligible clinicians after the exclusion criteria have been applied.

    Second, the identifiers for quality measurement are not mutually exclusive, and using TIN/NPI to apply the MIPS payment adjustment would allow us to resolve any inconsistencies that arise from the measurement identifiers. For example, a TIN may have 40 percent of its eligible clinicians participating in a MIPS APM while the remaining 60 percent are not participating in any APM. The TIN elects to submit performance information for all the eligible clinicians in the TIN, including those that are participating in the MIPS APM, so that it can ensure all of its eligible clinicians are being measured in MIPS. We cannot simply use the APM Entity and TIN identifiers because, in this case, we either have eligible clinicians with duplicative data and overlapping scores, or we have portions of the measurement identifier carved out if we eliminate the overlap. In our example, the eligible clinicians participating in the MIPS APM would have data for two final scores (one based on the APM Entity group performance and one based on the group TIN performance). The eligible clinicians not participating in the MIPS APM would have only one final score (one based on the group TIN performance). Applying the MIPS payment adjustment at the TIN/NPI level provides us the flexibility to correctly identify and resolve the conflicts emerging when measurement identifiers overlap. The TIN/NPI identifier is mutually exclusive across all of our measurement identifier options; therefore, we believe this identifier can be consistently used for individual, group, or APM scoring standard identifiers. We refer readers to 81 FR 28271 for a discussion of identifiers and our proposals related to them.

    The following is a summary of the comments we received regarding our proposal to use the TIN/NPI combination as the MIPS payment adjustment identifier.

    Comment: Some commenters opposed using the TIN/NPI as the MIPS payment adjustment identifier. They were concerned that TIN/NPI promotes individual achievement and undercuts a practice's ability to incentivize quality improvement behaviors through group or teamwork. Other commenters noted the administrative burden for group practices because they would have to track multiple MIPS payment adjustments within their TIN. They recommended applying the MIPS payment adjustment uniformly for each TIN.

    Response: We want the MIPS to encourage teamwork and coordination. We have finalized measurement at the group level (TIN) and the APM entity level to help encourage that goal. Generally, all TIN/NPIs that are measured as a group or an APM entity will have the same final score, and therefore have the same MIPS payment adjustment. We believe it would be challenging to apply the MIPS payment adjustment uniformly at the TIN level, because, as noted earlier, we need to account for individuals who are excluded from MIPS and to resolve scenarios where there are overlapping or duplicative final scores. For MIPS eligible clinicians that report as a group, the low-volume threshold will be determined based on the group as a whole; that is, the low-volume threshold would be determined by considering the volume across all NPIs billing within that TIN, regardless of MIPS eligibility. Other exclusions, however, such as newly enrolled and QP, are applied at the NPI level. Therefore, some NPIs within a TIN may be excluded from MIPS individual reporting requirements and payment adjustments; however, if the TIN chooses to participate in MIPS as a group, data for those NPIs would be included when determining the group's performance. We refer readers to section II.E.3 of this final rule with comment period for the list of MIPS statutory exclusions. In response to concerns about administrative burden, we intend to work with stakeholders to develop tools to minimize the potential burden of tracking numerous MIPS eligible clinicians' payment statuses.

    Comment: One commenter believed that applying the MIPS payment adjustment at the TIN level also closes potential loopholes that would otherwise allow avoidance of payment reductions through switching NPIs. Another commenter expressed significant concerns related to our proposal to use multiple identifiers when assessing participation and performance in MIPS while relying solely on an eligible clinician's TIN/NPI for the purpose of the MIPS payment adjustment under certain circumstances, and requested clarification on how MIPS eligible clinicians would be scored across performance categories when they are a part of a group, whether this score is based on individual or group data, and whether the process is consistent across performance categories.

    Response: The NPI is meant to be a lasting identifier, and is expected to remain unchanged even if a health care provider changes his or her name, address, provider taxonomy, or other information that was furnished as part of the original NPI application process. Assignment of a unique NPI to each clinician is managed by the National Plan and Provider Enumeration System (NPPES) which only assigns a single NPI to each individual clinician. We will use the individual NPI, which cannot be changed when the clinician reassigns payment to a different TIN.

    Eligible clinicians will be scored across the four performance categories either as an individual or through their group. It is our intent that an eligible clinician reporting through a group will be scored as part of that group for all performance categories, or conversely, that an eligible clinician reporting as an individual will be scored on their individual data for all performance categories.

    We would also like to note that all TIN/NPIs participating in a group practice or APM Entity will have the same final score and the same MIPS payment adjustment. The only time the TIN/NPIs' scores will vary across a group practice will be when a TIN/NPI: (1) Is excluded from MIPS; (2) has multiple possible final score submissions (for example, an APM Entity final score and a TIN final score); or (3) is new to a TIN, or the TIN itself is new, and therefore does not have historical data associated with the TIN/NPI.

    Comment: Several commenters supported the TIN/NPI proposal. Reasons for support included that the TIN/NPI holds MIPS eligible clinicians accountable for their own performance, could simplify how the MIPS payment adjustment is applied, and creates a consistent set of rules. Commenters cautioned, however, that failing to ensure TIN accuracy and completeness and having a complicated and inaccessible process for rectifying errors would undermine trust in the program.

    Response: We agree with commenters that TIN/NPI simplifies how the MIPS payment adjustment is applied. We also note that MIPS eligible clinicians will have an opportunity to request a targeted review of their MIPS payment adjustment factor(s) for a year, which is described in more detail in section II.E.8, and we believe that process to be responsive to the commenters' requests.

    After consideration of these comments, we are finalizing our proposal to adopt the TIN/NPI combination as the MIPS payment adjustment identifier.

    ii. Final Score Used in MIPS Payment Adjustment Calculation

    Because we proposed to use only TIN/NPI to apply the MIPS payment adjustments and because there is a gap between the performance period and the MIPS payment year, we believe we should assign the historical final score to each TIN/NPI that is subject to MIPS for the payment year.

    In general, we proposed to use the final score associated with the TIN/NPI combination in the performance period. For groups submitting data using the TIN identifier, we proposed to apply the group final score to all the TIN/NPI combinations that bill under that TIN during the performance period. For individual MIPS eligible clinicians submitting data using TIN/NPI, we proposed to use the final score associated with the TIN/NPI that is used during the performance period. For eligible clinicians in MIPS APMs, we proposed to assign the APM Entity group's final score to all the APM Entity Participant Identifiers that are associated with the APM Entity. We refer readers to section II.E.5.h. of this final rule with comment period for more information about the process to identify participating APM Entities. For eligible clinicians that participate in APMs for which the APM scoring standard does not apply, we proposed to assign a final score using either the individual or group data submission assignments described above.

    In the case where a MIPS eligible clinician starts working in a new practice or otherwise establishes a new TIN that did not exist during the performance period, there would be no corresponding historical performance information or final score for the new TIN/NPI. Because we want to connect actual performance to the individual MIPS eligible clinician as often as possible, in cases where there is no final score associated with a TIN/NPI from the performance period, we proposed to use the NPI's performance for the TIN(s) the NPI was billing under during the performance period. If the MIPS eligible clinician has only one final score associated with the NPI from the performance period, then we proposed to use that final score. For example, if a MIPS eligible clinician worked in one practice (TIN A) in the performance period, but is working at a new practice (TIN B) during the payment year, then we would use the final score for the old practice (TIN A/NPI) to apply the MIPS payment adjustment for the NPI in the new practice (TIN B/NPI). This proposal most closely linked the MIPS eligible clinician's performance during the performance period to the MIPS payment adjustment. It also ensured that MIPS eligible clinicians that qualify for a positive MIPS payment adjustment are able to keep it, even if they change practices. For those who have a negative MIPS payment adjustment, this proposal also ensured MIPS eligible clinicians are still accountable for their performance.

    In scenarios where the MIPS eligible clinician billed under more than one TIN during the performance period, and the MIPS eligible clinician starts working in a new practice or otherwise establishes a new TIN that did not exist during the performance period, we proposed to use a weighted average final score based on total allowed charges associated with the NPI from the performance period. This proposal would provide a final score that is based on all the services the NPI billed to Medicare during the performance period. Table 26 of the proposed rule (81 FR 28272) presents an example of how the proposed weighted average approach would work. If an NPI did not have any allowed charges in the performance period, then the clinician would not be included in MIPS due to the low-volume exclusion.

    We also proposed an alternative policy where in lieu of taking the weighted average, we take the highest final score from the performance period. We believe the alternative approach rewards MIPS eligible clinicians for their prior performance and may be easier to implement in the transition year of MIPS. Our concern with this approach is that the highest final score may represent a relatively small portion of the MIPS eligible clinician's practice during the performance period.
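    The brief sketch below illustrates the two approaches discussed above for an NPI that billed under more than one TIN during the performance period and bills under a new TIN in the payment year: the proposed weighted average based on total allowed charges, and the alternative of taking the highest final score. This is an illustrative sketch only; the data structure, names, and example figures are hypothetical.

```python
# Illustrative sketch only: two ways to assign a final score to an NPI that billed under
# multiple TINs in the performance period and now bills under a new TIN.
from typing import Dict, Tuple

def weighted_average_final_score(scores_by_tin: Dict[str, Tuple[float, float]]) -> float:
    """Proposed approach: weight each performance-period final score by the NPI's total
    allowed charges under that TIN. Values are (final_score, allowed_charges)."""
    total_charges = sum(charges for _, charges in scores_by_tin.values())
    return sum(score * charges for score, charges in scores_by_tin.values()) / total_charges

def highest_final_score(scores_by_tin: Dict[str, Tuple[float, float]]) -> float:
    """Alternative approach: use the highest final score from the performance period."""
    return max(score for score, _ in scores_by_tin.values())

# Hypothetical example: an NPI with final scores of 70 (TIN A, $40,000 in allowed charges)
# and 40 (TIN B, $160,000 in allowed charges).
scores = {"TIN A": (70.0, 40_000.0), "TIN B": (40.0, 160_000.0)}
# weighted_average_final_score(scores) -> 46.0
# highest_final_score(scores) -> 70.0
```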

    We requested comment on the proposal to use the final score associated with the TIN(s) the NPI was billing under during the performance period when the TIN/NPI does not have a final score from the performance period. We also requested comment on our proposal to use a weighted average, and the alternative proposal to select the highest final score from the performance period.

    We also considered, but did not propose, a policy to have the performance follow the group (TIN) rather than the individual (NPI). In other words, the MIPS eligible clinician's performance would be based on the historical performance of the new TIN that the MIPS eligible clinician moved to after the performance period, even though the MIPS eligible clinician was not part of this group during the performance period. This policy is consistent with the VM and would create incentives for MIPS eligible clinicians to move to higher performing practices (77 FR 69308). We also believe this policy would provide a lower burden for practice administrators as all MIPS eligible clinicians in the TIN would have the same MIPS payment adjustment. On the other hand, having performance follow the TIN creates some challenges. We are concerned that MIPS eligible clinicians who earned a positive MIPS payment adjustment based on their performance during the performance period would not retain the positive MIPS payment adjustment if the new TIN had a lower final score. Finally, we believe that having performance follow the TIN could create some unanticipated issues with budget neutrality if high-performing TINs expand. For all of these reasons, we did not propose to have performance follow the TIN, but rather have performance follow the NPI; however, we solicited comment on this option.

    In some cases, a TIN/NPI could have more than one final score associated with it from the performance period, if the MIPS eligible clinician submitted duplicative data sets. In this situation, the MIPS eligible clinician has not changed practices; rather, for example, the MIPS eligible clinician has a final score for an APM Entity and a final score for a group TIN. If a MIPS eligible clinician has multiple final scores, we proposed a multi-pronged approach to select the final score that would be used to determine the MIPS payment adjustment. First, we proposed that if a MIPS eligible clinician is a participant in a MIPS APM, then the APM Entity final score would be used instead of any other final score (such as a group TIN final score or individual final score). We proposed that if a MIPS eligible clinician has more than one APM Entity final score for the same TIN (by participating in multiple MIPS APMs), we would apply the highest APM Entity final score to the MIPS eligible clinician. Second, if a MIPS eligible clinician reports as a group and as an individual, we would calculate a final score for the group and individual identifier and use the highest final score for the TIN/NPI. We requested comment on this proposed approach.
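    The short sketch below illustrates the multi-pronged precedence described above for selecting a single final score when a TIN/NPI has more than one final score from the performance period. It is an illustrative sketch only; the function and parameter names are hypothetical and not part of the regulation.

```python
# Illustrative sketch only: the proposed precedence when a TIN/NPI has more than one
# final score from the performance period. Names are hypothetical.
from typing import List, Optional

def select_final_score(apm_entity_scores: List[float],
                       group_score: Optional[float],
                       individual_score: Optional[float]) -> float:
    """Return the single final score used for the MIPS payment adjustment."""
    # First prong: a MIPS APM Entity final score takes precedence over any other final
    # score; if there is more than one (participation in multiple MIPS APMs under the
    # same TIN), use the highest APM Entity final score.
    if apm_entity_scores:
        return max(apm_entity_scores)
    # Second prong: if the clinician reported both as a group and as an individual,
    # use the higher of the two final scores for the TIN/NPI.
    candidates = [s for s in (group_score, individual_score) if s is not None]
    if not candidates:
        raise ValueError("No final score available for this TIN/NPI.")
    return max(candidates)
```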

    The following is a summary of the comments we received regarding our proposals for the final score used in the MIPS payment adjustment calculation.

    Comment: Some commenters did not support applying MIPS payment adjustments based on a previous employer's performance or use of prior TIN/NPI combinations if there is no historical information for the current TIN/NPI. Commenters noted it is unfair to base payments on the previous TIN/NPI combinations and supported awarding a neutral score when a MIPS eligible clinician comes to a new practice. Commenters also expressed concerns about placing undue burden on the hiring entity and the potential to influence hiring decisions based on data that are 2 years old. Finally, some commenters expressed concerns that a new TIN would be adversely affected by having to accept a negative MIPS payment adjustment for a MIPS eligible clinician that was hired into that TIN after the performance period. These commenters also implied that calculating the MIPS payment adjustment for the individual based on their performance for that corresponding payment year does not recognize that the clinician may learn better compliance at the new practice. Many of these commenters recommended having the NPI inherit the final score of the TIN they moved to after the performance period, if that TIN was an existing TIN during the performance period, even though that NPI was not part of that TIN during that performance period.

    Response: In the case where a MIPS eligible clinician starts working in a new practice or otherwise bills Medicare under a new TIN, we have no historical performance data for the TIN/NPI. We examined using either the TIN's historical performance or the NPI's historical performance. However, we do not always have a TIN's historical performance. For example, in cases where a TIN elected to have its MIPS eligible clinicians submit individual data, then we would not have a TIN score, only individual scores. In contrast, we would always have NPI historical performance if the TIN/NPI is subject to MIPS. Therefore, we proposed, and will finalize, using the NPI's performance for the TIN(s) the NPI was billing under during the performance period.

    We do not believe it would be appropriate to assign a neutral score when performance data for the NPI is available.

    In response to concerns that an undue burden would be placed on the hiring entity, we are not asking TINs to perform any of the calculations. We will apply the specific MIPS payment adjustment that needs to be applied for that specific TIN/NPI for the payment year. We seek feedback on ways to provide the necessary information to practices to minimize burden for them.

    In response to concerns about the adverse effect on a new TIN that hires an individual that had a lower final score in the performance period, we want to reiterate that the MIPS payment adjustment is only being applied to that individual TIN/NPI and not all NPIs in that same hiring TIN and that in some cases the MIPS payment adjustment is positive. We believe that our policy tracks accountability to the clinician and will actually encourage all clinicians to seek high performance.

    Comment: Some commenters generally supported our proposal that the score follows the clinician to a new practice if the clinician changes practices after the performance period and before the payment year, acknowledging that this holds clinicians accountable, but questioned the reasonableness of tracking this for the new receiving practice. One commenter encouraged CMS to consider how to mitigate these problems.

    Response: We will work with stakeholders to develop strategies to minimize the burden of tracking adjustments for MIPS eligible clinicians that change practices over time.

    Comment: Some commenters supported CMS' proposal to use a weighted final score average of TIN/NPI combinations and apply it to a new TIN/NPI that did not exist during the performance period. One of these commenters stated this was a straightforward approach for handling MIPS eligible clinicians who have changed practice mid-year. Commenters that supported the TIN/NPI combination also supported using the final score associated with each TIN/NPI combination (not weighting across each TIN/NPI) if the clinician was in those TIN/NPIs in the performance period and still in those TINs/NPIs in the payment year. Some commenters supported the approach to use the highest TIN score in instances where a clinician has multiple MIPS scores rather than a weighted average. One commenter supported CMS's alternative approach for eligible clinicians who bill under more than one TIN.

    Response: We agree that performance should follow the clinician's NPI. While we believe that a weighted average final score would provide a more accurate picture of the NPI's performance, we believe it is easier to communicate and operationalize a methodology that selects the highest final score available for a MIPS eligible clinician, particularly for the transition year. Therefore, we are finalizing our alternative policy to use the highest final score associated with an NPI from the performance period. We may revisit this policy in future rulemaking and consider whether we should require a certain percentage of billings by an NPI under a TIN before attributing the TIN's final score to the NPI.

    Comment: One commenter proposed that CMS give eligible clinicians practicing in multiple TINs the option of being scored based on their performance across all TINs and did not recommend that CMS simply calculate a weighted average across all TINs.

    Response: We are finalizing the policy that compares scores across all practices and takes the highest final score.

    Comment: Some commenters did not support CMS' proposal to calculate a final score for both a group and individual identifier, taking the higher final score, in cases where a MIPS eligible clinician reported as both a group and as an individual. One commenter recommended CMS use a weighted final score average based on total allowed charges associated with the NPI because this approach takes into account the eligible clinician's entire performance during the period. One commenter specifically stated they did not support CMS' proposal to apply the highest APM entity final score to the eligible clinician in cases where a MIPS clinician has more than one APM entity final score for the same TIN.

    Response: We are unclear as to how we would calculate a weighted score for the same TIN/NPI during the same performance period. For simplicity, we have elected to take the highest final score.

    Comment: Another commenter stated the proposed process to determine which final score takes precedence (APM entity, group, or individual) for eligible clinicians is confusing and unnecessarily complicated, as it is currently possible for MIPS eligible clinicians to earn multiple final scores based on their unique reporting experience. The commenter suggested CMS assign only one score to each TIN/NPI.

    Response: Each TIN/NPI will receive only one final score for purposes of the MIPS payment adjustment determination. However, since we allow each MIPS eligible clinician to decide how they want to report, either individually, through a group, or through an APM as a MIPS APM participant, we cannot completely control the number of submissions that one TIN/NPI may have. To address these types of circumstances, we have established policies in this section to clearly articulate the hierarchy for which of the final scores will take precedence for the MIPS payment adjustment.

    After consideration of these comments, we are finalizing our policy to use the NPI's historical performance from the performance period to determine the MIPS payment adjustment, regardless of whether that NPI is billing under a new TIN after the performance period. In the event that an NPI bills under multiple TINs in the performance period and bills under a new TIN in the MIPS payment year, we are finalizing our alternative policy of taking the highest final score associated with that NPI in the performance period.

    b. MIPS Payment Adjustment Factors

    Section 1848(q)(6)(A) of the Act requires the Secretary to specify a MIPS adjustment factor (referred to as a MIPS payment adjustment factor) for each MIPS eligible clinician for a year determined by comparing the final score of the MIPS eligible clinician for such year to the performance threshold established under paragraph (D)(i) for such year, in a manner such that the adjustment factors specified for a year result in differential payments. Section 1848(q)(6)(A)(iii) of the Act provides that MIPS eligible clinicians with a final score at or above the performance threshold receive a zero or positive MIPS adjustment factor on a linear sliding scale such that a MIPS adjustment factor of 0 percent is assigned for a final score at the performance threshold and a MIPS adjustment factor of the applicable percent is assigned for a final score of 100. Positive MIPS adjustment factors may be increased or decreased by a scaling factor (not to exceed 3.0) to ensure the budget neutrality requirement is met.

    Section 1848(q)(6)(A)(iv) of the Act provides that MIPS eligible clinicians with a final score below the performance threshold receive a negative MIPS adjustment factor on a linear sliding scale such that a MIPS adjustment factor of 0 percent is assigned for a final score at the performance threshold and a MIPS adjustment factor of the negative of the applicable percent is assigned for a final score of 0; further, MIPS eligible clinicians with final scores that are equal to or greater than zero, but not greater than one-fourth of the performance threshold, receive a negative MIPS adjustment factor that is equal to the negative of the applicable percent. Section 1848(q)(6)(B) of the Act defines the applicable percent for each year as follows: (i) For 2019, 4 percent; (ii) for 2020, 5 percent; (iii) for 2021, 7 percent; and (iv) for 2022 and subsequent years, 9 percent.
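    The sketch below illustrates one reading of the linear sliding scale described in sections 1848(q)(6)(A)(iii) and (iv) of the Act. It is an illustrative sketch only: it omits the budget-neutrality scaling factor and the additional adjustment factor for exceptional performance, and the precise treatment of final scores between one-fourth of the performance threshold and the performance threshold may differ in practice. The function and parameter names are ours and are not part of the regulation.

```python
# Illustrative sketch only: one reading of the statutory linear sliding scale, before
# any budget-neutrality scaling and excluding the exceptional performance adjustment.

def mips_adjustment_factor(final_score: float,
                           performance_threshold: float,
                           applicable_percent: float) -> float:
    """Return the MIPS payment adjustment factor, as a percent, before any scaling."""
    if final_score >= performance_threshold:
        # 0 percent at the threshold, rising linearly to the applicable percent at a score of 100.
        return applicable_percent * (final_score - performance_threshold) / (100.0 - performance_threshold)
    if final_score <= performance_threshold / 4.0:
        # Scores at or below one-fourth of the threshold receive the full negative applicable percent.
        return -applicable_percent
    # Otherwise, a negative adjustment on a linear sliding scale anchored at 0 percent for a
    # score at the threshold and the negative of the applicable percent for a score of 0.
    return -applicable_percent * (performance_threshold - final_score) / performance_threshold

# Example using the applicable percent of 4 for 2019 and, for concreteness, the 3-point
# performance threshold finalized later in this section:
# mips_adjustment_factor(100, 3, 4) -> 4.0 (before scaling)
# mips_adjustment_factor(3, 3, 4)   -> 0.0
# mips_adjustment_factor(0, 3, 4)   -> -4.0
```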

    Section 1848(q)(6)(C) of the Act provides for an additional positive MIPS payment adjustment factor for exceptional performance (referred to as an additional MIPS payment adjustment factor), for each of the years 2019 through 2024, for each MIPS eligible clinician with a final score for a year at or above the additional performance threshold under paragraph (D)(ii) for such year. The additional MIPS payment adjustment factor shall be in the form of a percent and determined in a manner such that MIPS eligible clinicians having higher final scores above the additional performance threshold receive higher additional MIPS payment adjustment factors.

    We are codifying these requirements as follows:

    At § 414.1405(a), we are codifying that each MIPS eligible clinician receives a MIPS payment adjustment factor, and if applicable an additional MIPS payment adjustment factor for exceptional performance, for a MIPS payment year determined by comparing their final score to the performance threshold and additional performance threshold for the year.

    At § 414.1405(b)(1), we are codifying that MIPS eligible clinicians with a final score at or above the performance threshold receive a zero or positive MIPS payment adjustment factor on a linear sliding scale such that an adjustment factor of 0 percent is assigned for a final score at the performance threshold and an adjustment factor of the applicable percent is assigned for a final score of 100.

    At § 414.1405(b)(2), we are codifying that MIPS eligible clinicians with a final score below the performance threshold receive a negative MIPS payment adjustment factor on a linear sliding scale such that an adjustment factor of 0 percent is assigned for a final score at the performance threshold and an adjustment factor of the negative of the applicable percent is assigned for a final score of 0; further, MIPS eligible clinicians with final scores that are equal to or greater than zero, but not greater than one-fourth of the performance threshold, receive a negative MIPS payment adjustment factor that is equal to the negative of the applicable percent.

    At § 414.1405(c), we are codifying the applicable percent.

    c. Determining the Performance Thresholds

    (1) Establishing the Performance Threshold

    Under section 1848(q)(6)(D)(i) of the Act, for each year of the MIPS, the Secretary shall compute a performance threshold for which the final scores of MIPS eligible clinicians are compared for purposes of determining the MIPS payment adjustment factors under section 1848(q)(6)(A) of the Act for a year. The performance threshold for a year must be either the mean or median (as selected by the Secretary, and which may be reassessed every 3 years) of the final scores for all MIPS eligible clinicians for a prior period specified by the Secretary. Section 1848(q)(6)(D)(iii) of the Act outlines a special rule for the initial 2 years of MIPS, which requires the Secretary, prior to the performance period for such years, to establish a performance threshold for purposes of determining the MIPS payment adjustment factors under section 1848(q)(6)(A) of the Act and an additional performance threshold for purposes of determining the additional MIPS payment adjustment factors under section 1848(q)(6)(C) of the Act, each of which shall be based on a period prior to the performance periods and take into account data available for performance on measures and activities that may be used under the performance categories and other factors determined appropriate by the Secretary.

    We are codifying the definition of the term “performance threshold” at § 414.1305 as the numerical threshold for a MIPS payment year against which the final scores of MIPS eligible clinicians are compared to determine the MIPS payment adjustment factors. Final scores above the performance threshold receive a positive MIPS payment adjustment factor and final scores below the performance threshold receive a negative MIPS payment adjustment factor. Final scores that are equal to or greater than 0, but not greater than one-fourth of the performance threshold receive the maximum negative MIPS payment adjustment factor for the MIPS payment year. Final scores at the performance threshold receive a neutral MIPS payment adjustment factor.

    To establish the performance threshold for the 2019 MIPS payment year, we proposed to model 2014 and 2015 Medicare Part B allowed charges, 2014 and 2015 PQRS data submissions, 2014 and 2015 QRUR and sQRUR feedback data, and 2014 and 2015 Medicare and Medicaid EHR Incentive Program data to inform where the performance threshold should be. We would use this data to estimate the impact of the quality and cost scoring proposals. We would also use the EHR Incentive Program information to estimate which MIPS eligible clinicians are likely to receive points for the advancing care information performance category. Because of the lack of historical data for the improvement activities performance category, we would apply some sensitivity analyses to help inform where the performance threshold should be.

    For the 2019 MIPS payment year, we proposed to set the performance threshold at a level where approximately half of the eligible clinicians would be below the performance threshold and half would be above the performance threshold, which we believe is consistent with the intent of section 1848(q)(6)(D)(i) of the Act, which requires the performance threshold in year 3 and beyond to be equal to the mean or median of final scores from a prior period. We also considered other policy options when setting the performance threshold. For example, we considered setting the performance threshold so that the scaling factor (which is described in section II.E.7.b. of the proposed rule (81 FR 28273)) is 1.0. We could set the performance threshold based on policy goals to ensure a minimum number of points are earned before an eligible clinician is able to receive a positive MIPS adjustment factor and potentially an additional adjustment factor for exceptional performance. We solicited comment on the policy options for setting the performance threshold.

    The following is a summary of the comments we received regarding our proposals for setting the performance threshold for the 2019 MIPS payment year.

    Comment: One commenter stated that it is unreasonable to punish nearly half of clinicians in MIPS. Several commenters requested that CMS seek to establish a performance threshold that would ease the transition to MIPS by minimizing penalties under the program. One of those commenters noted that section 1848(q)(6)(D)(iii) of the Act provides the Secretary with a level of discretion in establishing the MIPS performance threshold during the first 2 years of the program and requested CMS adopt a threshold methodology for years 1 and 2 that would ease the transition to MIPS by minimizing penalties under the program. Several commenters recommended setting the performance threshold at a modest level for the initial performance year such that it would be readily attainable through data reporting alone (for example, no downward adjustment for those who report measures during the first 2 years). These same commenters suggested that if CMS insists upon setting the performance threshold such that the distribution of penalties and bonuses under MIPS would be expected to be roughly equal, CMS should adopt the lower-bound estimate of the final score that would be needed to establish such a performance threshold. In other words, CMS should take the lowest possible performance threshold value from the different estimates it generates. According to one commenter, such an approach would be justified because (1) CMS has admitted that it is unclear how MIPS will impact small and solo practices and should therefore go with the threshold that is least likely to have negative impacts, (2) the scaling factor will help ensure budget neutrality in a case where the threshold is set too low, and (3) the additional performance threshold will reward true exceptional performance even in cases where the threshold is too low. Other commenters recommended exercising caution in setting the initial performance threshold under MIPS.

    One commenter referred to the estimate that half of eligible clinicians would fall below the performance threshold and recommended that CMS create a structure whereby fewer clinicians are penalized during the transition year of the program. Some commenters suggested various approaches to setting the performance threshold lower. One commenter suggested pushing a greater number of physicians into the category where no MIPS payment adjustment is made as one possible solution. Another commenter proposed identifying a threshold range, at least for 2017 performance, and holding harmless clinicians falling in that range. Another commenter suggested “flattening the curve” of negative and positive MIPS payment adjustments in the transition year so that more eligible clinicians fall in the middle of the curve and there are fewer negative MIPS payment adjustments.

    Response: We heard significant concern, as summarized above, that MIPS eligible clinicians will not have time to prepare for MIPS, that there is confusion about MIPS, and that the performance threshold should be set low so that the majority of MIPS eligible clinicians are not subject to a negative MIPS payment adjustment. Given the numerous concerns, we agree that year 1 of MIPS should be a transition year, and we have determined that it would be inappropriate to set a performance threshold that would result in downward adjustments to payments for many clinicians who may not have had time to prepare adequately to succeed under the MIPS. By providing a pathway for many clinicians to succeed under MIPS, we believe that we will encourage early participation in the program, which would enable more robust and thorough engagement with the program over time. We agree with the commenters that setting the performance threshold at an appropriately low number will provide MIPS eligible clinicians an opportunity to achieve a minimum level of success under the program, while gaining experience with reporting on the measures and activities and becoming familiar with other program policies and requirements. By contrast, if we set the threshold too high, using a new formula that is unfamiliar and confusing to clinicians, many could be discouraged from participating in the first year of the program, which may lead to lower participation rates in future years. We believe that active participation of MIPS eligible clinicians in MIPS will improve the overall quality, cost, and care coordination of services provided to Medicare beneficiaries.

    Section 1848(q)(6)(D)(iii) of the Act includes a special rule to establish the performance threshold and the additional performance threshold for the first 2 years of MIPS. Specifically, the Secretary shall, prior to the performance period for such years, establish a performance threshold for purposes of determining the MIPS payment adjustment factors under section 1848(q)(6)(A) of the Act and an additional performance threshold for purposes of determining the additional MIPS payment adjustment factors under section 1848(q)(6)(C) of the Act, each of which shall be based on a period prior to the performance periods and take into account data available for performance on measures and activities that may be used under the performance categories and other factors determined appropriate by the Secretary.

    We are relying on the special rule under section 1848(q)(6)(D)(iii) of the Act to establish the performance threshold and the additional performance threshold for the 2019 MIPS payment year to create a transition year policy that encourages participation and provides an opportunity for MIPS eligible clinicians to become familiar with MIPS and other aspects of the Quality Payment Program.

    We considered available data regarding performance on measures and activities that may be used under the MIPS performance categories. With regard to the quality performance category, we took several steps to identify PQRS participation rates for MIPS eligible clinicians. First, we identified the TIN/NPIs who billed a Medicare Part B service in 2015. We used clinician type and specialty information from NPPES to establish a subset of those clinician types who are eligible for MIPS as described in section II.E.1 of this rule. We then used 2015 Part B data to exclude those who did not exceed the low-volume threshold as defined in section II.E.3 of this rule. We used 2015 PQRS data to assess whether to apply the low-volume threshold at the individual (TIN/NPI) or group (TIN) level. We assumed all Shared Savings Program participants would exceed the low-volume threshold because the Shared Savings Program has a requirement that ACOs be attributed a minimum number of patients.

    Due to data limitations, we had to proxy new enrollment by identifying NPIs who billed PFS services in 2015 and not in 2014. We also excluded 2015 Pioneer ACO participants and CPC participants as we estimated they might represent QPs in the future. We were not able to specifically identify the exact number of QPs or Partial QPs. We refer readers to the regulatory impact analysis in section V.C. of this final rule with comment period for more details on this analysis. We estimate between 592,119 and 642,119 MIPS eligible clinicians, but due to the model's limitations in identifying QPs and Partial QPs, we included 676,722 MIPS eligible clinicians in our model.

    We used the 2015 PQRS data to create benchmarks for our model based on our final policies and assign points under the quality performance category based on performance. For the readmission measure we used the 2014 VM analytic file, which was the most recent data available. We then estimated final scores using the quality performance category scores. We did not include cost measures because the cost performance category has 0 percent weight in the 2019 final score. We did not include data for improvement activities or advancing care information because we did not have detailed performance data available for all MIPS eligible clinicians. While we have some performance data for the Medicare EHR Incentive Program, we do not have detailed performance data for the Medicaid EHR Incentive Program. Having performance data for only a portion of MIPS eligible clinicians would have skewed the analysis; therefore, we restricted our analysis to the quality performance data only.

    Using 2015 PQRS data, we determined which of these MIPS eligible clinicians participated and calculated participation rates for the MIPS quality performance category, the performance category that accounts for a minimum of 60 percent of the transition year final score. For our participation counts, we did not include other data files because 2015 information was either not available or would not have impacted the participation score. We noted that 87.2 percent of the estimated MIPS eligible clinicians submitted data to PQRS, but the participation rate is lower for solo practitioners and practices with 2-9 clinicians at 58.2 percent. As mentioned in this final rule with comment period, we want to create a scenario where many MIPS eligible clinicians have the ability to participate while transitioning to the MIPS.
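    The sketch below is a simplified approximation of the filtering steps described above for estimating the pool of MIPS eligible clinicians used in our modeling. It is an illustrative sketch only and is not the actual analytic code; the DataFrame layouts, column names, clinician type list, and input sets are hypothetical.

```python
# Illustrative sketch only: an approximation of the eligibility-filtering steps described
# above, using hypothetical pandas DataFrames, column names, and input sets.
import pandas as pd

# Hypothetical set of MIPS eligible clinician types for the transition year.
MIPS_ELIGIBLE_TYPES = {
    "physician", "physician assistant", "nurse practitioner",
    "clinical nurse specialist", "certified registered nurse anesthetist",
}

def estimate_mips_eligible(part_b_2015: pd.DataFrame, nppes: pd.DataFrame,
                           below_low_volume: set, new_enrollees: set,
                           likely_qps: set) -> pd.DataFrame:
    """Approximate the pool of MIPS eligible clinicians used to model the performance threshold."""
    # Start with TIN/NPIs that billed a Medicare Part B service in 2015 and attach
    # clinician type information from NPPES.
    pool = part_b_2015.merge(nppes[["npi", "clinician_type"]], on="npi", how="inner")
    # Keep only clinician types that are MIPS eligible.
    pool = pool[pool["clinician_type"].isin(MIPS_ELIGIBLE_TYPES)]
    # Remove clinicians below the low-volume threshold, proxied new enrollees (billed PFS
    # services in 2015 but not 2014), and participants treated as likely future QPs
    # (for example, 2015 Pioneer ACO and CPC participants).
    keep = (~pool["npi"].isin(below_low_volume)
            & ~pool["npi"].isin(new_enrollees)
            & ~pool["npi"].isin(likely_qps))
    return pool[keep]
```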

    We are setting the performance threshold at 3 points for the 2019 MIPS payment year, taking into account the data available as described above, but also based on other factors we believe are appropriate, such as encouraging participation in the first year of MIPS. We want to ensure that MIPS eligible clinicians are allowed time to gain an understanding of the MIPS and pick their pace as they report on the MIPS performance categories. We believe that setting the performance threshold at 3 points will encourage more MIPS eligible clinicians to participate in the MIPS during the transition year and provide a structure for eligible clinicians to gain experience in order to successfully participate in future years of the MIPS. With a 3-point performance threshold, an eligible clinician could meet or exceed the performance threshold through a minimal level of performance during the transition year.

    For example, under the quality performance category, the 3-point floor for any submitted quality measure would result in a neutral or positive MIPS payment adjustment for most MIPS eligible clinicians submitting a single measure. A MIPS eligible clinician, including a solo practitioner or small practice, that submits one quality measure with low performance, and no improvement activities or measures specified for the advancing care information performance category (assuming advancing care information performance category measures are available and applicable to the MIPS eligible clinician), would have the following performance category scores: the quality performance category score is 3 points out of a possible 60 points (10 possible points for each of the six required measures), or 5 percent; improvement activities is 0 points out of a possible 40 points, or 0 percent; and advancing care information is 0 percent out of 100 percent. The final score would equal the performance category scores times the performance category weights (([5 percent × 60 percent] + [0 percent × 15 percent] + [0 percent × 25 percent]) × 100), which totals 3 points. This MIPS eligible clinician would receive a neutral MIPS payment adjustment because the performance threshold is set at 3 points. Similarly, any MIPS eligible clinician reporting as a group of 16 or more clinicians would receive at least 3.75 points for submitting at least one improvement activity (10 points out of a possible 40 points × 15 percent (improvement activities performance category weight)). Solo practitioners and clinicians in groups of 15 or fewer would receive at least 7.5 points (20 points out of a possible 40 points × 15 percent (improvement activities performance category weight)).

    We provide further details of these calculations in the examples listed at the end of this section. One exception should be noted under the advancing care information performance category: for a MIPS eligible clinician to receive a neutral or positive MIPS payment adjustment based solely on the advancing care information performance category, the MIPS eligible clinician must report on all of the measures in the base score, for the reasons discussed in section II.E.5.g.(6)(b) of this final rule with comment period.
Finally, we note that if a group of 16 or more clinicians does not report any quality performance category data but does submit information in other performance categories, the group would be scored on the all-cause readmission measure (assuming the group meets the readmission measure minimum case size requirements) even though it did not submit any other quality performance category measures. If a group of 16 or more did not report any information in any of the performance categories, then the readmission measure would not be scored. A group will never have a final score based on the readmission measure alone.
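    The weighted-sum arithmetic described above can be expressed compactly. The following Python sketch is illustrative only; the function and variable names are our own, and it assumes the transition-year weights of 60, 15, and 25 percent for quality, improvement activities, and advancing care information, with no reweighting.

```python
# Illustrative sketch of the transition-year final score arithmetic.
# Assumes the 60/15/25 percent category weights and no reweighting;
# function and variable names are hypothetical, not CMS terminology.

def final_score(quality_pct, ia_pct, aci_pct):
    """Each argument is a performance category score expressed as a fraction (0.0-1.0)."""
    weights = {"quality": 0.60, "ia": 0.15, "aci": 0.25}
    raw = (quality_pct * weights["quality"]
           + ia_pct * weights["ia"]
           + aci_pct * weights["aci"])
    return round(100 * raw, 1)  # rounded to one decimal for display

# One low-performing quality measure (3 of 60 possible points = 5 percent),
# nothing else submitted: a final score of 3 points, equal to the threshold.
print(final_score(0.05, 0.0, 0.0))   # 3.0
```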

    As commenters note above, the lower performance threshold will “flatten the curve” in that relatively fewer MIPS eligible clinicians would receive a negative MIPS payment adjustment, which would lower the scaling factor required by budget neutrality. In other words, the positive MIPS payment adjustment from the adjustment factor on a per-clinician basis will be smaller than under our initial proposal because more MIPS eligible clinicians would qualify for a positive MIPS payment adjustment; however, we believe this is necessary in order to achieve our transition year goals.

    While we have lowered the performance threshold as part of our transition year policies, we do not think it would be appropriate to lower the additional performance threshold, as the additional performance threshold is the point at which MIPS eligible clinicians can receive an additional adjustment factor for exceptional performance. As we discuss in the next section, we will decouple the performance threshold and the additional performance threshold and set the additional performance threshold at 70 points. The lower performance threshold of 3 points will meet our policy goal to increase participation in the first year of MIPS and transparency in the scoring methodology; however, we believe that MIPS eligible clinicians must demonstrate exceptional performance to receive an additional adjustment factor.

    We intend to increase the performance threshold in year 2 and beginning in year 3 we will use the mean or median final score from a prior period as required by section 1848(q)(6)(D)(i) of the Act. The performance threshold and other transition year policies provide an opportunity for MIPS eligible clinicians to pick their pace in participation. This encourages MIPS eligible clinicians to participate and become familiar with the MIPS requirements.

    Also, while we are finalizing a performance threshold of 3 and an additional performance threshold of 70 in this rule, in future years we may not publish the numerical performance threshold and additional performance threshold in a final rule. We would finalize our methodology for calculating these thresholds via notice and comment rulemaking and then utilize that methodology to calculate and announce the performance threshold and additional performance threshold for a MIPS payment year on a Web site prior to the performance period, rather than publishing the numerical thresholds within a final rule.

    Comment: Several commenters expressed support for our proposal to set the performance threshold as the median of all expected final scores. Another commenter expressed support for CMS' proposal to set the performance threshold for 2019 such that half of eligible clinicians would be below the performance threshold and half would be above it.

    Response: As described in this final rule with comment period, we are not finalizing our proposal to set the performance threshold at a level where approximately half of the MIPS eligible clinicians would be below the threshold and half would be above the performance threshold; rather, we are relying on the special rule under section 1848(q)(6)(D)(iii) of the Act to set the performance threshold at 3 points for the 2019 MIPS payment year to encourage participation by MIPS eligible clinicians. We will take these commenters' support into consideration as we monitor the MIPS scoring system over time.

    Comment: One commenter requested that CMS clarify in the final rule with comment period how the performance thresholds will be set each year. Another commenter questioned the use of the term “approximately” in defining the performance threshold, asking why it would be approximate rather than precise. One commenter recommended that CMS allow stakeholders to provide input on how the methodology is applied to calculate the 2019 performance threshold, since the description in the proposed rule of the proposed methodology, and alternative methodologies, is not sufficiently detailed. Another commenter was concerned that performance data from 2019 could yield a less equal distribution if CMS chooses to move towards a mean for the performance threshold; because half of MIPS eligible clinicians would be above and half below the performance threshold, this could lead to MIPS eligible clinicians receiving a penalty in 2021 after 2 years of increases, even if their performance did not objectively change. One commenter advised the creation of a policy to mitigate instability in MIPS payment adjustments as the MIPS transitions to its own data. Another commenter expressed concern that CMS' proposal to set the performance threshold at the 50th percentile of national MIPS eligible clinician performance could have disparate impacts on different types of clinicians, particularly those in small practices.

    Response: To inform our policies, we performed data modeling based on available data. Please see the description of our model to assess participation earlier in this section. We took this data into account to set the additional performance threshold, which we have decoupled from the performance threshold and will set at 70 points. As we noted in this final rule with comment period, for future MIPS payment years, we intend to publish the numerical performance threshold and additional performance threshold on a Web site prior to the performance period. These thresholds will be specific numbers, not approximations.

    Beginning with the third MIPS payment year (2021), we must set the performance threshold according to section 1848(q)(6)(D)(i) of the Act at the mean or median of the final scores for all MIPS eligible clinicians for a prior period.

    Comment: Several commenters opposed the use of pre-MACRA data for setting performance thresholds. One commenter did not favor using non-MIPS historical data to set performance thresholds during the first 2 years of MIPS. Another commenter did not support CMS' proposal to use existing quality and cost data to set MIPS performance thresholds as this data did not align exactly with MIPS. Another commenter proposed withdrawing the use of 2014 data and using 2016 data in the establishment of the thresholds and noted that using 2016 data will more accurately reflect clinical practice improvements as a result of PQRS. A commenter requested, to the extent that CMS is using 2014 and 2015 PQRS data submissions in setting the initial performance threshold, that CMS exclude data submitted via Measure Groups reporting, which requires only a non-random sample of 20 patients per measure. Additionally, one commenter suggested that thresholds should be determined by clinicians and clinician practices in MIPS. Another commenter requested that CMS devise an approach to use 2017 data to set thresholds for both the 2017 and 2018 performance periods. One commenter recommended that CMS not rely only on existing data, but also apply lessons learned from previous legacy reporting programs and the changes that have been incorporated into MIPS, and build those lessons into future performance thresholds.

    Response: We disagree with the commenters who opposed using prior data from PQRS, the VM, and the EHR Incentive Program. Section 1848(q)(6)(D)(iii)(II)(aa) of the Act requires us to consider data available with respect to performance on measures and activities that may be used under the performance categories, and we believe this data to be the most comparable. As described earlier in this section, we have used data from 2015 PQRS and the 2014 Physician Feedback Program and VM to inform our models, which is the most recent data available. We excluded the PQRS measures group submissions as that option is no longer available in MIPS. In addition, we have used 2015 CMS enrollment files and administrative claims to estimate who is a MIPS eligible clinician.

    Comment: One commenter recommended CMS conduct additional analyses assessing the differences in requirements between the preexisting reporting programs and the MIPS performance categories, and use this analysis as the basis for adjusting performance thresholds in the MIPS performance categories.

    Response: PQRS, the VM, and the EHR Incentive Program are different programs than MIPS; however, many of the measures used in the MIPS performance categories are drawn from these programs. In addition, we are unaware of other data sources that would be more appropriate. We have used the source data and tried to replicate the MIPS requirements to create the most informed models possible. For example, we created benchmarks using PQRS data based on the finalized MIPS policies. We created group scores for PQRS group practice reporting options and individual scores for individual submissions. The VM and QRUR data from the Physician Feedback Program are only available at the TIN level, so we applied the group score to individuals who reported individually. While this is not an exact replication of the MIPS methodology, we believe it is a close approximation, and we used these data to inform our policies. The performance threshold is set at 3 points to encourage MIPS eligible clinicians to pick their pace as they participate under the Quality Payment Program.

    Comment: One commenter requested that CMS clarify how it plans to calculate the MIPS performance threshold for the 2019 payment year by providing detail about the “sensitivity analyses” used to account for the improvement activities performance category. Another commenter recommended that CMS publish this methodology and include a public comment period prior to the start of MIPS.

    Response: We elected to not use sensitivity analyses for the creation of the performance threshold. Rather, we used PQRS data to estimate participation and our scoring policies to set the performance threshold at 3 points, and the additional performance threshold at 70 points. As noted above we intend to finalize our methodology for calculating these thresholds for future years via notice and comment rulemaking.

    Comment: A commenter disagreed with CMS's alternative proposal that would require a clinician to earn a minimum number of points above the performance threshold before receiving a positive MIPS adjustment factor, and believed clinicians performing above the established threshold have shown a high level of performance and should be able to immediately begin earning incentives.

    Response: We clarify that our alternative proposal would not have required a MIPS eligible clinician to earn a minimum number of points above the performance threshold to achieve a positive MIPS payment adjustment. Rather, under that alternative, we would have set the performance threshold at a level where clinicians would be required to meet a certain number of points. As described above, we believe it is important to set transition year policies that encourage participation while allowing the flexibility for MIPS eligible clinicians to pick their pace with the MIPS and other provisions of the Quality Payment Program. Therefore, we are establishing the performance threshold at 3 points.

    Comment: A commenter did not believe MIPS eligible clinicians should be rewarded or penalized for scores that do not reflect significant statistical differences from their peers. One commenter, noting that previous CMS performance analyses were unable to distinguish between large majorities of clinicians, recommended that performance adjustments be made only at the high and low ends, with clinicians in the middle receiving adjustments of a de minimis amount. The commenter understood there were statutory questions involved in this decision by CMS, but believed that CMS can operate within the text of the statute and employ an adjusted linear structure that recognizes the reality that most physicians' performance will be indistinguishable from one another. One commenter suggested incentivizing high performers and suggested that eligible clinicians under the national performance threshold who improve their score by a certain percentage would be eligible to have their penalty decreased by 0.5 percent. Another commenter suggested that CMS apply MIPS payment adjustments based on aggregated MIPS scores that are one standard deviation above (incentive) or one standard deviation below (penalty) the mean or median.

    Response: We would like to emphasize that the MIPS payment adjustment a MIPS eligible clinician receives is determined by the final score and how it relates to the performance threshold. Once the linear sliding scale that is described in section II.E.7.b. of this final rule with comment period is established, we will not modify the amount of the MIPS payment adjustment based on factors such as improvement or standard deviations. In our scoring policies described in section II.E.6. of this final rule with comment period, we have discussed in detail how we are differentiating performance for the different performance categories and how those performance category scores are combined into a final score. All MIPS eligible clinicians with the same final score will receive the same MIPS payment adjustment. We also note that, with our transition policies, we anticipate most MIPS eligible clinicians that submit data will receive a neutral to small positive MIPS payment adjustment in the transition year. We anticipate the slope of the positive MIPS payment adjustment due to budget neutrality to be relatively flat, which will minimize differences based on the adjustment factor, although MIPS payment adjustments will be larger for those who also qualify for the additional adjustment factor for exceptional performance.

    Comment: Numerous commenters expressed concern about the negative impact the Quality Payment Program would have on small and rural practices. One commenter recommended that small practices have lower reporting thresholds and adjusted scoring mechanisms throughout the MIPS program. Another commenter recommended that CMS set the performance threshold at 15 points in the transition year of implementation to reduce the negative impact on small practices. One commenter noted that CMS estimates that 87 percent of solo practices will face a negative MIPS payment adjustment in 2019, causing them disproportionate hardship as a result of this system of evaluation.

    Response: We recognize the particular challenges faced by small and rural practices. We agree with the commenter that a reduced performance threshold should ease the participation burden for small practices; therefore, as noted above, we are lowering the performance threshold for the transition year to 3 points. We did consider creating different performance criteria for small practices, but determined that different performance criteria levels would create additional confusion and additional burden for administrators to track. Rather, we believe our approach of modifying the low-volume threshold exclusion, in combination with the modified performance threshold, has created a path for solo practitioners and small practices to participate in MIPS. In addition, we will provide additional technical assistance to these practices.

    Comment: Several commenters disagreed with the proposed negative MIPS payment adjustment because they believed it will penalize small practices while subsidizing larger practices. Another commenter requested that small physician practices be exempted from negative MIPS payment adjustments so that they can continue to participate in Medicare. One commenter recommended that CMS reduce the proposed MIPS payment adjustments for 2019, which would result from the 2017 performance period, especially for small practices of 2-9 clinicians. Another commenter hoped there would be a reasonable penalty for zero percent compliance for small FFS practices. A few commenters expressed concern that the adjustment factors would exacerbate distortions between well-resourced and less resourced practices. Another commenter requested that CMS take into consideration practice size and location in determining overall MIPS incentives or payment reductions, so that rural clinicians and practices are not penalized at greater levels than urban clinicians and practices. One commenter stated the adverse effects to small and solo practices of the estimated negative MIPS payment adjustments would jeopardize patient care.

    Response: We appreciate the commenters' concerns and want MIPS to be an equitable program regardless of practice size. We do recognize that many solo and small practices did not participate in the sunsetting programs and therefore have less experience with the requirements under the MIPS. To ease the participation burden, we have reduced the performance threshold to 3 points for year 1, which provides a pathway for solo and small practices to engage in MIPS. We do not have the statutory authority to exempt solo practitioners and small practices from MIPS. We have, however, increased the low-volume threshold to exclude groups and individuals with less than or equal to $30,000 in Part B charges or less than or equal to 100 beneficiaries, which will exclude more small groups and solo practices from being MIPS eligible clinicians. Lastly, we note the applicable percent for the MIPS payment adjustments is established in section 1848(q)(6)(B) of the Act and we are not able to modify that amount.

    Comment: Several commenters suggested that the scoring system may need special rules for IHS, Tribal, and Urban Indian health programs. These commenters suggested that these clinicians should have their own performance threshold that accounts for the government's responsibility to provide quality health care to AI/ANs and the chronic underfunding of their health care systems.

    Response: We appreciate the unique challenges that face MIPS eligible clinicians that are part of IHS, Tribal, and Urban Indian health programs. We considered creating different performance criteria for certain types of clinicians; however, we believe that approach would create more confusion and burden than a cohesive set of criteria. Rather, to ease the participation burden, we have reduced the performance threshold to 3 points for the transition year only, which provides a pathway for MIPS eligible clinicians to engage in MIPS. We are also committed to continuing to work with IHS and its partners to streamline and coordinate programs where possible.

    Comment: A few commenters were concerned with the proposed notification of the performance threshold. One commenter was concerned that the threshold for the MIPS final score had not been identified, which would make it difficult for a practice to assess what changes may need to occur. Another commenter was concerned clinicians will not know where they stand relative to the performance threshold on an annual basis until after the close of the reporting period. A commenter proposed that CMS make the MIPS adjustment information available to each eligible clinician at least 2 months prior to when the MIPS payment adjustment is applied each year.

    Response: We will publish the performance threshold in advance of each performance period. We also intend to provide performance feedback as discussed in section II.E.8.a. of the final rule with comment period to provide eligible clinicians with meaningful information regarding their performance trends. We also intend to develop toolkits and educational materials which will allow MIPS eligible clinicians to estimate their total score and the associated adjustment percentage they could receive.

    Comment: Another commenter believed the development of more “real-time” feedback mechanisms would greatly increase the impact of the published performance threshold.

    Response: We appreciate the desire for “real-time” feedback and the impact it may have on eligible clinicians' performance. We refer readers to section II.E.8.a. of this final rule with comment period for detailed policies on the performance feedback. We have also established performance standards so that MIPS eligible clinicians will be able to estimate their performance throughout the performance period.

    After consideration of the public comments, we are finalizing at § 414.1405(b) that a performance threshold will be specified for each MIPS payment year. Specifically, we are finalizing a performance threshold of 3 points for the 2019 MIPS payment year in accordance with the special rule set forth in section 1848(q)(6)(D)(iii) of the Act. We believe this approach to establishing the performance threshold will enable more robust and thorough engagement with the program over time consistent with our goal for a transition year. As noted above, however, we intend to increase the performance threshold in year 2, and beginning in year 3 we will use the mean or median final score from a prior period as required by section 1848(q)(6)(D)(i) of the Act.

    (2) Additional Performance Threshold for Exceptional Performance

    In addition to the performance threshold, section 1848(q)(6)(D)(ii) of the Act requires the Secretary to compute, for each year of the MIPS, an additional performance threshold for purposes of determining the additional MIPS payment adjustment factors for exceptional performance under paragraph (C). For each such year, the Secretary shall apply either of the following methods for computing the additional performance threshold: (1) The threshold shall be the score that is equal to the 25th percentile of the range of possible final scores above the performance threshold determined under section 1848(q)(6)(D)(i) of the Act; or (2) the threshold shall be the score that is equal to the 25th percentile of the actual final scores for MIPS eligible clinicians with final scores at or above the performance threshold for the prior period described in section 1848(q)(6)(D)(i) of the Act.

    For each year of the MIPS, we will compute an additional performance threshold for purposes of determining the additional MIPS payment adjustment factors under section 1848(q)(6)(C) of the Act. We proposed at § 414.1405(e) the following methods for computing the additional performance threshold: The threshold shall be equal to the 25th percentile of the range of possible final scores above the performance threshold; or it shall be equal to the 25th percentile of the actual final scores for MIPS eligible clinicians with final scores at or above the performance threshold for the prior period used to determine the performance threshold.

    As discussed above, section 1848(q)(6)(D)(iii) of the Act outlines a special rule for establishing the additional performance threshold for the initial 2 years of MIPS. Because 2019 is the first MIPS payment year, we do not have any actual final scores for MIPS eligible clinicians to use for purposes of defining an additional performance threshold under the methodology proposed above. Therefore, we proposed to establish the additional performance threshold at the 25th percentile of the range of possible final scores above the performance threshold. For example, if the performance threshold is 60, then the range of possible final scores above the performance threshold would be 61-100. The 25th percentile of those possible values is 70. We intended to publish the additional performance threshold with the performance threshold prior to the performance period.
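    The "25th percentile of the range" calculation can be sketched in a few lines. The linear interpolation below (25 percent of the way from the performance threshold to the 100-point maximum) is our reading of the examples in the text, not a formula stated by CMS; it reproduces the 60-to-70 example here and the roughly 27.3-point figure discussed later in this section.

```python
# Sketch of the proposed "25th percentile of the range of possible final
# scores above the performance threshold" calculation. The interpolation
# below is an assumption consistent with the examples in the text, not
# CMS's stated formula; the function name is hypothetical.

def additional_threshold_from_range(performance_threshold, max_score=100):
    return performance_threshold + 0.25 * (max_score - performance_threshold)

print(additional_threshold_from_range(60))  # 70.0, matching the example above
print(additional_threshold_from_range(3))   # 27.25, roughly the 27.3 points discussed below
```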

    The following is a summary of the comments we received regarding our proposal to establish the additional performance threshold at the 25th percentile of the range of possible final scores above the performance threshold for exceptional performance.

    Comment: One commenter expressed support for CMS's proposal to set the additional performance threshold in 2019 at the 25th percentile of the range of possible scores above the performance threshold.

    Response: As we discussed in section II.E.7.c.(1) of this final rule with comment period, we are relying on the special rule under section 1848(q)(6)(D)(iii) of the Act to establish the performance threshold at 3 points for the transition year of MIPS (2019 MIPS payment year). As a result, we are not finalizing our proposal to establish the additional performance threshold at the 25th percentile of the range of possible final scores above the performance threshold. With a performance threshold set at 3 points, the range of total possible points above the performance threshold is 4 to 100 points. The 25th percentile of that range is 27.3 points, which is less than one third of the possible 100 points in the MIPS final score. We do not believe it would be appropriate to lower the additional performance threshold to 27.3 points, as we do not believe a final score of 27.3 points demonstrates exceptional performance by a MIPS eligible clinician. Under section 1848(q)(6)(C) of the Act, a MIPS eligible clinician with a final score at or above the additional performance threshold will receive an additional MIPS payment adjustment factor and may share in the $500,000,000 available for the year under section 1848(q)(6)(F)(iv) of the Act. We believe these additional incentives should only be available to those clinicians with very high performance on the MIPS measures and activities. Therefore, we are relying on the special rule under section 1848(q)(6)(D)(iii) of the Act to set the additional performance threshold at 70 points for the transition year (MIPS payment year 2019), which is higher than the 25th percentile of the range of the possible final scores above the performance threshold as proposed. We took into account the data available and the modeling described in section II.E.7.c.(1) to estimate final scores based on 2015 PQRS data and used the distribution of quality performance category scores to determine an appropriate additional performance threshold for the transition year (MIPS payment year 2019). In our model using historical 2015 PQRS participation, a final score of 70 points was higher than the mean, but less than the median final score. We believe 70 points is appropriate because it requires a MIPS eligible clinician to submit data for and perform well on more than one performance category (except in the event the advancing care information measures are not applicable and available to a MIPS eligible clinician). Generally, a MIPS eligible clinician could receive a maximum score of 60 points for the quality performance category, which is below the 70-point additional performance threshold. In addition, 70 points is at a high enough level that MIPS eligible clinicians have to submit quality data in order to achieve this target. For example, if a MIPS eligible clinician gets a perfect score for the improvement activities and advancing care information performance categories, but does not submit quality measures data, then the MIPS eligible clinician will only receive 40 points (0 points for quality + 15 points for improvement activities + 25 points for advancing care information), which is below the additional performance threshold. We believe the additional performance threshold at 70 points maintains the incentive for excellent performance while keeping the focus on quality performance.

    Comment: One commenter requested clarification on how IHS/Tribally-operated facilities can qualify for an additional positive MIPS payment adjustment for exceptional performance.

    Response: MIPS eligible clinicians that are part of IHS/Tribally-operated facilities are able to earn an additional MIPS payment adjustment factor if their final score is at or above the additional performance threshold of 70 points. These clinicians are subject to the same rules for MIPS participation that apply to other MIPS eligible clinicians.

    Comment: One commenter recommended that CMS provide exceptional performance bonuses to MIPS eligible clinicians who demonstrate improvement, not just high achievement, in subsequent performance periods after the first performance period.

    Response: We do not have authority to distribute the $500 million available under section 1848(q)(6)(F)(iv) of the Act for exceptional performance for any reason other than for final scores at or above the additional performance threshold.

    After consideration of the public comments, we are codifying at § 414.1305 the definition of additional performance threshold as the numerical threshold for a MIPS payment year against which the final scores of MIPS eligible clinicians are compared to determine the additional MIPS payment adjustment factors for exceptional performance. We are also finalizing at § 414.1405(d) that an additional performance threshold will be specified for each of the MIPS payment years 2019 through 2024. Specifically, the additional performance threshold for the 2019 MIPS payment year is 70 points.

    d. Scaling/Budget Neutrality

    Section 1848(q)(6)(F)(i) of the Act provides, for positive MIPS payment adjustment factors for MIPS eligible clinicians whose final score is above the performance threshold under paragraph (D)(i) for such year, the Secretary shall increase or decrease such adjustment factors by a scaling factor (not to exceed 3.0) to ensure that the budget neutrality requirement of clause (ii) is met. Stated generally, budget neutrality as required by section 1848(q)(6)(F)(ii) of the Act means the estimated increase in the aggregate allowed charges resulting from the application of positive MIPS payment adjustment factors under section 1848(q)(6)(A) of the Act (after application of the scaling factor) is equal to the estimated decrease in the aggregate allowed charges resulting from the application of negative MIPS payment adjustment factors under section 1848(q)(6)(A) of the Act. Under section 1848(q)(6)(F)(iii) of the Act, budget neutrality requirements shall not apply if all MIPS eligible clinicians receive final scores for a year that are below the performance threshold under paragraph (D)(i) for such year, or if the maximum scaling factor (3.0) is applied for a year. We are codifying at § 414.1405(b)(3) that a scaling factor not to exceed 3.0 may be applied to positive MIPS payment adjustment factors to ensure budget neutrality such that the estimated increase in aggregate allowed charges resulting from the application of the positive MIPS payment adjustment factors for the MIPS payment year equals the estimated decrease in aggregate allowed charges resulting from the application of negative MIPS payment adjustment factors for the MIPS payment year.

    e. Additional Adjustment Factors

    Section 1848(q)(6)(C) of the Act requires, for each of the years 2019 through 2024, the Secretary to specify an additional MIPS payment adjustment factor for each MIPS eligible clinician whose final score for a year is at or above the additional performance threshold established under paragraph (D)(ii) for that year. This additional adjustment factor is required to take the form of a percentage and to be determined by the Secretary such that MIPS eligible clinicians with higher final scores above the additional performance threshold receive higher additional MIPS payment adjustment factors. Section 1848(q)(6)(F)(iv)(I) of the Act provides, in specifying the additional adjustment factors under paragraph (C) for each applicable MIPS eligible clinician for a year, the Secretary shall ensure that the estimated aggregate increase in payments under Medicare Part B resulting from the application of such additional adjustment factors shall be equal to $500,000,000 for each year beginning with 2019 and ending with 2024. We refer to the $500,000,000 increase in payments as aggregate incentive payments. Section 1848(q)(6)(F)(iv)(II) of the Act provides that the additional adjustment factor for each applicable MIPS eligible clinician shall not exceed 10 percent, which may result in an aggregate increase in payments that is less than $500,000,000 as described in subclause (I).

    To be consistent with the MIPS payment adjustment factors under section 1848(q)(6)(A) of the Act, we proposed to apply a linear sliding scale where MIPS eligible clinicians with a final score at the additional performance threshold would receive 0.5 percent additional adjustment factor and MIPS eligible clinicians with a final score equal to 100 would receive a 10 percent maximum additional adjustment factor. Similar to the adjustment factor, we would apply a scaling factor that is greater than 0 and less than or equal to 1.0 if needed to ensure distribution of the $500,000,000 increase in payments. The scaling factor must be greater than 0 to ensure that MIPS eligible clinicians with higher final scores receive a higher additional adjustment factor. The scaling factor cannot exceed 1.0; the 10 percent maximum additional adjustment factor could only decrease and not increase because section 1848(q)(6)(F)(iv)(II) of the Act provides that the additional adjustment factor shall not exceed 10 percent. We proposed the starting point for the additional adjustment factor at 0.5 percent for a final score at the additional performance threshold because this would provide a large enough incentive for MIPS eligible clinicians to strive for the additional performance threshold, while still providing the opportunity for a positive slope on the linear sliding scale. If we are unable to achieve a linear sliding scale starting at 0.5 percent (because the estimated aggregate increase in payments for a year would exceed $500 million), then we proposed to lower the starting percentage for a final score at the additional performance threshold until we are able to create the linear sliding scale with a scaling factor greater than 0 and less than or equal to 1.0. A MIPS eligible clinician with a final score that is below the additional performance threshold would not be eligible for an additional adjustment factor. We requested comments on these proposals.
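    As an illustration of the proposed linear sliding scale before any scaling factor is applied, the Python sketch below interpolates from 0.5 percent at the additional performance threshold to 10 percent at a final score of 100. It is a sketch of the proposal as described above, with hypothetical names; it does not model how the scaling factor would be applied if the $500 million limit were otherwise exceeded.

```python
# Sketch of the proposed additional adjustment factor before scaling:
# 0.5 percent at the additional performance threshold, rising linearly
# to 10 percent at a final score of 100. Names are hypothetical; the
# scaling factor that may be applied to stay within the $500 million
# aggregate limit is not modeled here.

def unscaled_additional_adjustment_pct(final_score, addl_threshold=70.0):
    if final_score < addl_threshold:
        return 0.0  # below the additional performance threshold: no exceptional-performance adjustment
    fraction = (final_score - addl_threshold) / (100.0 - addl_threshold)
    return 0.5 + fraction * (10.0 - 0.5)

print(unscaled_additional_adjustment_pct(70))   # 0.5
print(unscaled_additional_adjustment_pct(100))  # 10.0
```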

    The following is a summary of the comments we received regarding the additional adjustment factor.

    Comment: One commenter expressed support for CMS's proposal to set the starting point for the additional adjustment factor at 0.5 percent; however, a couple of commenters did not believe this would be a large enough incentive for eligible clinicians to strive to reach the additional threshold, particularly for physicians without a significant amount of Medicare business. One of the commenters requested an explanation for why CMS would use a 0.5 percent adjustment factor for MIPS clinicians above the additional performance threshold.

    Response: We would like to note that the additional adjustment factor could range from 0.5 percent up to 10 percent, depending on the scaling factor. As the final score increases, the additional adjustment factor increases. We started at 0.5 percent as that is the annual update for the PFS for 2019 and we believed this was a reasonable starting point that would allow a positive slope for the additional adjustment factor.

    Comment: One commenter was concerned with funding bonuses for the Quality Payment Program given that the program needs to be budget neutral.

    Response: Under section 1848(q)(6)(F) of the Act, budget neutrality is only required with respect to the MIPS payment adjustment factors under section 1848(q)(6)(A), not the additional MIPS payment adjustment factors for exceptional performance under section 1848(q)(6)(C) of the Act.

    After consideration of the public comments, we are finalizing our proposal at § 414.1405(d)(1) that MIPS eligible clinicians with a final score at or above the additional performance threshold receive an additional MIPS payment adjustment factor for exceptional performance on a linear sliding scale, such that an additional adjustment factor of 0.5 percent is assigned for a final score at the additional performance threshold and an additional adjustment factor of 10 percent is assigned for a final score of 100, subject to the application of a scaling factor as determined by CMS, such that the estimated aggregate increase in payments resulting from the application of the additional MIPS payment adjustment factors for the MIPS payment year shall not exceed $500,000,000 for each of the MIPS payment years 2019 through 2024.

    f. Application of the MIPS Payment Adjustment Factors

    Section 1848(q)(6)(E) of the Act provides that for items and services furnished by a MIPS eligible clinician during a year (beginning with 2019), the amount otherwise paid under Part B for such items and services and MIPS eligible clinician for such year, shall be multiplied by 1 plus the sum of the MIPS payment adjustment factor determined under section 1848(q)(6)(A) of the Act divided by 100, and as applicable, the additional MIPS payment adjustment factor determined under section 1848(q)(6)(C) of the Act divided by 100. We would apply the adjustment factors in accordance with section 1848(q)(6)(E) of the Act.
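    The payment computation in section 1848(q)(6)(E) of the Act can be illustrated with a short sketch. This is a simplified, hypothetical illustration of applying the two adjustment factors to an otherwise-payable Part B amount; the function name and dollar figure are ours, and the percentages shown are the illustrative Figure A values discussed later in this section.

```python
# Simplified illustration of section 1848(q)(6)(E): the amount otherwise
# paid under Part B is multiplied by 1 plus the MIPS payment adjustment
# factor (a percentage divided by 100) plus, if applicable, the additional
# adjustment factor (also divided by 100). Hypothetical names and values.

def adjusted_payment(part_b_amount, mips_adj_pct, additional_adj_pct=0.0):
    return part_b_amount * (1 + mips_adj_pct / 100 + additional_adj_pct / 100)

# A $100 allowed amount with a 0.856 percent adjustment factor and a
# 1.523 percent additional adjustment factor (the Figure A example below):
print(round(adjusted_payment(100.00, 0.856, 1.523), 3))  # 102.379
```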

    We requested comment on our proposals.

    The following is a summary of the comments we received regarding our proposal to apply the MIPS payment adjustment factors for items and services furnished by a MIPS eligible clinician during a year in accordance with section 1848(q)(6)(E) of the Act.

    Comment: One commenter requested clarification as to how the MIPS payment adjustment will be made, either in a lump sum at the end of the year or reflected in each claim paid. Another commenter suggested the payment be one lump sum.

    Response: MIPS payments will not be made in a lump sum, but applied as an adjustment on a per claim basis.

    Comment: A few commenters requested further clarification on whether the base rate factored into the MIPS adjustment calculation includes the MIPS adjustment rate.

    Response: The adjustment will be based upon the amount otherwise paid for the item or service under Part B.

    Comment: One commenter requested that CMS clarify whether Part B drug payments will be affected by MIPS payment adjustments. The commenter observed that in previous programs (PQRS, the EHR Incentive Program (Meaningful Use), and the Value-based Payment Modifier) the payment adjustments were only made to the services paid under the Medicare PFS, which included administration of Part B drugs, but not the cost of the actual drugs. The commenter would like verification that this policy will continue under the Quality Payment Program.

    Response: We appreciate the comment and note that we did not address this issue in the proposed rule. We will consider this issue and intend to provide clarification in the future.

    Comment: A commenter requested guidance on whether Medicare Advantage plans would build MIPS adjustments into their payment rates to non-contracted providers, as MA plans are currently required to pay non-contracted providers the same rates as they receive under FFS. The commenter stated that if adjustments must be factored into non-contracted provider payment rates, it will be critical for CMS to provide plans with timely and complete data on adjustments to ensure payment accuracy.

    Response: Medicare Advantage rates are set through a separate process, and payment policies will be addressed in the Advance Notice and Rate Announcement for that program.

    After consideration of the public comments, we are finalizing our proposed application of the MIPS payment adjustment factors at § 414.1405(e). For each MIPS payment year, the MIPS payment adjustment factor, and if applicable the additional MIPS payment adjustment factor, are applied to Medicare Part B payments for items and services furnished by the MIPS eligible clinician during the year.

    g. Example of Adjustment Factors

    Figure A of the proposed rule provided an example of how various final scores would be converted to an adjustment factor and potentially an additional adjustment factor, using the statutory formula. We direct readers to 81 FR 28276 for an illustration of the proposed policies.

    Figure A in this final rule with comment period shows an illustrative picture based on the final policies. In Figure A, the performance threshold is 3 points. The applicable percentage is 4 percent for 2019. The adjustment factor is determined on a linear sliding scale for final scores from zero to 100, with zero receiving the lowest negative applicable percentage (negative 4 percent for 2019) and 100 receiving the highest positive applicable percentage. However, there are two modifications to this linear sliding scale. First, there is an exception for a final score between 0 and 1/4 of the performance threshold (0 and 0.75 for the 2019 payment year). All MIPS eligible clinicians with a final score in this range would receive the lowest negative applicable percentage (negative 4 percent for 2019). Second, the linear sliding scale line for the positive MIPS adjustment factor is adjusted by the scaling factor (which is determined by the formula described in section II.E.7.d. of this final rule with comment period). If the scaling factor is greater than 0 and less than or equal to 1.0, then the adjustment factor for a final score of 100 would be less than or equal to 4 percent. If the scaling factor is above 1.0, but less than or equal to 3.0, then the adjustment factor for a final score of 100 would be higher than 4 percent. Only those MIPS eligible clinicians with a final score equal to 3 points (which is the performance threshold in this example) would receive a neutral MIPS payment adjustment. Because our final policies have set the performance threshold at 3 points, we anticipate that the scaling factor would be less than 1.0 and the payment adjustment for MIPS eligible clinicians with a final score of 100 points would be less than 4 percent.

    Figure A of this final rule with comment period illustrates an example slope. In this example, the scaling factor for the adjustment factor is 0.214, which is much lower than 1.0. MIPS eligible clinicians with a final score equal to 100 would have an adjustment factor of 0.856 percent (4.0 percent × 0.214).

    The additional performance threshold is 70 points. An additional adjustment factor of 0.5 percent starts at the additional performance threshold and increases on a linear sliding scale up to 10 percent times a scaling factor that is greater than 0 and less than or equal to 1.0. In Figure A of this final rule with comment period, the example scaling factor for the additional adjustment factor is 0.1523. Therefore, MIPS eligible clinicians with a final score of 100 would have an additional adjustment factor of 1.523 percent (10 percent × 0.1523). The total adjustment for a MIPS eligible clinician with a final score equal to 100 would be 1 + 0.00856 + 0.01523 = 1.02379, for a total positive MIPS payment adjustment of 2.379 percent. Note that in calculating payment adjustments, we will not round any numbers until the final step of the process. After we have calculated the total adjustment for a MIPS eligible clinician, we will round the percentage upward or downward to one decimal point. Thus, a total adjustment of 1.02379 will be rounded to a positive payment adjustment of 2.4 percent.
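    The arithmetic in the Figure A example can be checked with a few lines of Python. The scaling factors (0.214 and 0.1523) are the illustrative values from the example above, and the rounding shown (only the final percentage, to one decimal place) follows the convention described in the preceding paragraph; variable names are ours.

```python
# Check of the illustrative Figure A arithmetic for a final score of 100,
# using the example scaling factors from the text (0.214 and 0.1523).

applicable_pct = 4.0      # 2019 applicable percentage
adj_scaling = 0.214       # example scaling factor for the adjustment factor
addl_scaling = 0.1523     # example scaling factor for the additional adjustment factor

adjustment_pct = applicable_pct * adj_scaling   # 0.856 percent
additional_pct = 10.0 * addl_scaling            # 1.523 percent

total_multiplier = 1 + adjustment_pct / 100 + additional_pct / 100
print(round(total_multiplier, 5))               # 1.02379
print(round((total_multiplier - 1) * 100, 1))   # 2.4 percent positive MIPS payment adjustment
```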

    [Figure A: graphic ER04NO16.007]

    Note:

    The adjustment factor for final score values above the performance threshold is illustrative. For MIPS eligible clinicians with a final score of 100, the adjustment factor would be 4 percent times a scaling factor greater than 0 and less than or equal to 3.0. The scaling factor is intended to ensure budget neutrality, but cannot be higher than 3.0. The additional adjustment factor is also illustrative. The additional adjustment factor starts at 0.5 percent and cannot exceed 10 percent. MIPS eligible clinicians at or above the additional performance threshold will receive the amount of the adjustment factor plus the additional adjustment factor.

    The final MIPS payment adjustments would be determined by the distribution of final scores across MIPS eligible clinicians and the performance threshold. More MIPS eligible clinicians above the performance threshold means the scaling factors would decrease because more MIPS eligible clinicians receive a positive MIPS payment adjustment. More MIPS eligible clinicians below the performance threshold means the scaling factors would increase because more MIPS eligible clinicians would have negative MIPS payment adjustments and relatively fewer MIPS eligible clinicians receive positive MIPS payment adjustments.

    We requested comment on these examples, but we did not receive any comments on them. We have, however, provided in Table 31 a summary of the MIPS payment adjustments based on different final scores.

    Table 31—Illustration of Point System and Associated Adjustments in Transition Year

  • Final score of 0-0.75 points: Negative 4 percent MIPS payment adjustment. (Note: We anticipate that this range will consist mostly of MIPS eligible clinicians with a final score of 0.)
  • Final score of 0.76-2.9 points: Negative MIPS payment adjustment greater than negative 4 percent and less than 0 percent on a linear sliding scale. (Note: We do not anticipate many MIPS eligible clinicians to fall into this range.)
  • Final score of 3.0 points: 0 percent adjustment.
  • Final score of 3.1-69.9 points: Positive MIPS payment adjustment ranging from greater than 0 percent to 4 percent × a scaling factor to preserve budget neutrality, on a linear sliding scale.
  • Final score of 70.0-100 points: Positive MIPS payment adjustment AND additional MIPS payment adjustment for exceptional performance. (Additional MIPS payment adjustment starting at 0.5 percent and increasing on a linear sliding scale to 10 percent, multiplied by a scaling factor.)

    We have provided the following examples to demonstrate to readers how the MIPS calculations and performance threshold of 3 points will operate for various performance scenarios.

    Example 1: A solo practitioner is a low performer who reports one measure/activity in each performance category. For quality scoring, the MIPS eligible clinician submits 1 quality measure instead of the required 6 measures. Under our finalized scoring approach, we allow all MIPS eligible clinicians to receive a three-point floor per measure in the quality performance category. Under this scenario, the MIPS eligible clinician receives the three-point floor for the one measure submitted and the quality performance category is weighted at 60 percent of the final score. The MIPS eligible clinician's total quality performance category score is 3: (1 measure × 3 points each/total possible points of 60 points) × 60 = 3. We note that we did not include the all-cause hospital readmissions measure in the above quality performance category calculation since it is not applicable to groups of 15 or fewer clinicians, nor to MIPS eligible clinicians reporting as individuals due to reliability concerns.

    As discussed in section II.E.6.a.(4) of this final rule with comment period, different improvement activities scoring rules apply to a solo practitioner (or small group) than apply to groups of 16 or more clinicians. Under these special scoring rules, a solo practitioner who performs one medium-weighted activity receives 20 out of 40 potential points in the improvement activities performance category score, and one who performs one high-weighted activity receives 40 out of 40 points for the improvement activities performance category score. The improvement activities performance category score is weighted as 15 percent of the final score. In this example, the MIPS eligible clinician is a solo practitioner who performs only one medium-weighted activity, which equals 20 out of the 40 possible points, or 50 percent, for the improvement activities performance category score, which has a weight of 15 percent of the final score. The MIPS eligible clinician's total improvement activities performance category score is 7.50 (50 percent × 15 = 7.50).

    For advancing care information performance category scoring, the eligible clinician submits only the required elements of the advancing care information base score, which is worth 50 percent of the advancing care information performance category score. The advancing care information performance category is worth 25 percent of the final score. In this scenario, the eligible clinician would receive an advancing care information score of (50 percent × 25) = 12.5.

    As a result, the total final score = 3 + 7.5 + 12.5 = 23.0 points, which is above the performance threshold of 3 points.
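    For readers who want to trace the arithmetic, the short Python sketch below reproduces Example 1. It assumes the transition-year weights (60/15/25 percent) and the denominators used in the example (60 possible quality points, 40 possible improvement activities points, advancing care information expressed as a percent); the function and variable names are our own, not CMS terminology.

```python
# Sketch reproducing Example 1 under the transition-year category weights.
# Denominators follow the example: 60 possible quality points, 40 possible
# improvement activities points, advancing care information as a percent.
# Names are hypothetical.

def final_score(quality_points, ia_points, aci_percent):
    quality = quality_points / 60.0 * 60    # quality weighted at 60 percent of the final score
    ia = ia_points / 40.0 * 15              # improvement activities weighted at 15 percent
    aci = aci_percent / 100.0 * 25          # advancing care information weighted at 25 percent
    return round(quality + ia + aci, 1)     # rounded to one decimal for display

# Example 1: one quality measure at the 3-point floor, one medium-weighted
# improvement activity (20 of 40 points), and only the advancing care
# information base score (50 percent).
print(final_score(quality_points=3, ia_points=20, aci_percent=50))  # 23.0
```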

    Example 2: A MIPS eligible clinician, who is a solo practitioner, receives a 0 for all performance categories except the quality performance category. The MIPS eligible clinician submits four quality measures, instead of the required six measures. Under the finalized scoring approach, we allow all MIPS eligible clinicians to receive a three-point floor per submitted measure in the quality performance category. Under this scenario, the MIPS eligible clinician receives the three-point floor for each of the four measures submitted and the quality performance category is weighted at 60 percent of the final score. Since the MIPS eligible clinician has received 0 in each of the other categories, the MIPS eligible clinician's total final score is: (four measures × 3 points each/total possible points of 60 points) × 60 (reflecting the 60 percent quality performance category weight) = 12 points. The final score = 12 points (12 points for quality + 0 points for improvement activities + 0 points for advancing care information), which is above the performance threshold. We note that we did not include the all-cause hospital readmissions measure in the above calculation since it is not applicable to groups of 15 or fewer clinicians, nor to MIPS eligible clinicians reporting as individuals due to reliability concerns.

    Example 3: A MIPS eligible clinician, a high performer who is a solo practitioner, performs two medium-weighted activities in improvement activities and submits five measures with high performance and one measure with slightly above average performance. This clinician does not report in the advancing care information performance category and receives a 0 score for the category. For quality scoring, under this scenario, we assume for purposes of illustration and ease of understanding that the MIPS eligible clinician receives 10 points for each of the measures submitted with high performance, and 6 points for the other measure submitted. The quality performance category is weighted at 60 percent of the final score. The MIPS eligible clinician's quality score is: ((five measures × 10 points each + 1 measure × 6 points)/total possible points of 60 points) × 60 = 56 points. We note that we did not include the all-cause hospital readmissions measure in the above calculation since it is not applicable to groups with 15 or fewer clinicians, nor to MIPS eligible clinicians reporting as individuals due to reliability concerns.

    As discussed in section II.E.6.a.(4) of this final rule with comment period, different improvement activities scoring rules apply to a solo practitioner (or small group) than apply to groups of 16 or more clinicians. Under these special scoring rules, a solo practitioner who performs one medium-weighted activity receives 20 out of 40 potential points in the improvement activities performance category score, and one who performs one high-weighted activity receives 40 out of 40 points for the improvement activities performance category score. The improvement activities performance category score is weighted as 15 percent of the final score. In this example, the MIPS eligible clinician performs two medium-weighted activities, which equals 40 out of 40 points, or 100 percent, for the improvement activities performance category score, which has a weight of 15 percent of the final score. The MIPS eligible clinician's total improvement activities performance category score is 15 (40/40 × 15 = 15).

    Under this scenario, the MIPS eligible clinician's final score is 56 points for the quality performance category score + 15 points for the improvement activities performance category score + 0 points for the advancing care information performance category score = 71 points, which is above the additional performance threshold of 70.
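    The same kind of sketch used for Example 1 also reproduces Examples 2 and 3. The snippet below is self-contained and, as before, uses hypothetical names and the example's assumed measure scores (individual reporting, so the quality denominator is 60 points).

```python
# Standalone check of Examples 2 and 3 (individual reporting, quality
# denominator of 60 points). Names and structure are ours, not CMS's.

def final_score(quality_points, ia_points, aci_percent):
    return round(quality_points / 60.0 * 60      # quality, 60 percent weight
                 + ia_points / 40.0 * 15         # improvement activities, 15 percent weight
                 + aci_percent / 100.0 * 25, 1)  # advancing care information, 25 percent weight

# Example 2: four measures at the 3-point floor, nothing else submitted.
print(final_score(quality_points=4 * 3, ia_points=0, aci_percent=0))        # 12.0

# Example 3: five measures at 10 points plus one at 6 points, two
# medium-weighted activities (40 of 40 points), no advancing care information.
print(final_score(quality_points=5 * 10 + 6, ia_points=40, aci_percent=0))  # 71.0
```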

    Example 4: A MIPS eligible clinician in a group with 20 MIPS eligible clinicians reports as a group and only submits data for the improvement activities performance category. This group also has sufficient case volume to be measured for the readmission measure and, in our hypothetical example, has poor performance and receives 3 points for the readmission measure. In this scenario, the improvement activities special scoring rules do not apply since the MIPS eligible clinician is in a group of 20 MIPS eligible clinicians and is reporting as a group. The group performs only one high-weighted activity for the improvement activities performance category. For improvement activities scoring for groups of more than 15 clinicians, groups that perform one medium-weighted activity receive 10 out of 40 points for the improvement activities score, and groups receive 20 points toward the improvement activities score for each high-weighted activity performed. The improvement activities score is weighted as 15 percent of the final score. In this example, the group performs only one high-weighted activity and achieves 20 out of 40 possible points for the improvement activities score, which has a weight of 15 percent of the final score. In addition, even though the group did not submit any quality performance category data, the group is measured on the readmission measure because it submitted improvement activities as a group. As explained above, the group achieves only 3 points on the readmission measure and therefore has a quality score equal to 3 out of 70 points. The group has 0 for the advancing care information category. The eligible clinician's total final score is (3/70 quality performance category score × 60 percent quality performance category weight) + (20/40 improvement activities performance category score × 15 percent improvement activities performance category weight) + (0 advancing care information performance category score × 25 percent advancing care information performance category weight) = [(4.3 percent × 60 percent) + (50 percent × 15 percent) + (0 percent × 25 percent)] × 100 = 10.1 points, which is above the performance threshold of 3.
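    Example 4 uses a 70-point quality denominator (the six required measures plus the readmission measure for a group scored on it) together with the group improvement activities scoring; a standalone sketch under those assumptions reproduces the 10.1 points. As before, the variable names are ours.

```python
# Standalone check of Example 4 (group reporting). The quality denominator
# is 70 points because the readmission measure is added to the six required
# measures for a group scored on it; names are ours, not CMS terminology.

quality = 3 / 70 * 60     # 3 readmission points of 70 possible, 60 percent weight
ia = 20 / 40 * 15         # one high-weighted activity for a group of 16 or more, 15 percent weight
aci = 0 / 100 * 25        # no advancing care information data submitted

print(round(quality + ia + aci, 1))  # 10.1 points, above the 3-point performance threshold
```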

    We cannot guarantee that establishing the performance threshold of 3 for the transition year will always provide a positive MIPS payment adjustment for MIPS eligible clinicians; however, it does provide more opportunities for MIPS eligible clinicians to participate and become familiar with MIPS. In addition, the additional adjustment factor provides incentives for MIPS eligible clinicians to strive for good performance.

    8. Review and Correction of MIPS Final Score

    a. Feedback and Information to Improve Performance

    Through the MIPS and APMs RFI, we solicited comment on various questions related to performance feedback under section 1848(q)(12) of the Act, such as what type of information should be contained in the performance feedback data, how often the feedback should be made available, and who should be able to access the data. Several commenters stated that it would be beneficial if the performance feedback under MIPS contained all the data that contributes to an EP's final score and any MIPS adjustment. Further, several commenters suggested that performance feedback allow for interactive use of the data. Commenters supported frequent availability of such data and many noted that a minimum of quarterly feedback data would be preferred. Commenters also noted that access to PQRS feedback reports currently was a challenge and some suggested that the EPs should be able to control who can access the feedback reports.

    (1) Performance Feedback

    (a) MIPS Eligible Clinicians

    Under section 1848(q)(12)(A)(i) of the Act, as added by section 101(c)(1) of the MACRA, we are at a minimum required to provide MIPS eligible clinicians with timely (such as quarterly) confidential feedback on their performance under the quality and cost performance categories beginning July 1, 2017, and we have discretion to provide such feedback regarding the improvement activities and advancing care information performance categories.

    Beginning July 1, 2017, we proposed to include information on the quality and cost performance categories in the performance feedback. Within these performance categories, we proposed to use data fields (that is, quality and cost) similar to those currently available in the Quality and Resource Use Reports (QRURs). Since the QRURs already provide information on quality and cost, we believe this is a good starting point for the data fields to be included in the performance feedback. Additional information on the current QRURs can be found at https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/PhysicianFeedbackProgram/2015-QRUR.html.

    The first performance feedback is due on July 1, 2017. Because this is prior to our having received any MIPS data, we proposed to initially provide feedback to MIPS eligible clinicians who are participating in MIPS using historical data set(s), as available and applicable. For example, these historical data set(s) could be a baseline report using data based on performance that occurred in CY 2015 or CY 2016 for applicable and available quality and cost data. Since 2017 is the first MIPS performance period (as finalized in section II.E.8.a.), we do not anticipate receiving the first set of data for MIPS until 2018 (see 81 FR 28181). At a minimum for the transition year, we proposed to provide performance feedback on an annual basis, since the first performance feedback, required on July 1, 2017, would be based on historic data set(s). As the program evolves and we can operationally assess and analyze the MIPS data, we may consider providing performance feedback in future years on a more frequent basis, such as quarterly. Section 1848(q)(12)(A)(i) of the Act requires the performance feedback to be provided “timely” (such as quarterly), which is our goal as MIPS evolves. In addition, we solicited comments on whether we should include first year measures in the performance feedback, meaning new measures that have been in use for less than 1 year, regardless of submission methods. The reason first-year measures might not be included is that we need to review the data from these measures before they are incorporated into performance feedback, as we want to ensure the data we are providing in the performance feedback is useful and actionable for our stakeholders. We requested comments on these proposals.

    In future years and as the program evolves, we intend to seek comment on the template, including but not limited to the data fields, for performance feedback. While section 1848(q)(12)(A)(i) of the Act only requires us to provide performance feedback for the quality and cost performance categories, we understand that the improvement activities and advancing care information performance categories are important MIPS data. Commenters to the MIPS and APMs RFI noted that CMS should consult with stakeholders to ensure this performance feedback is useful before these data are provided to MIPS eligible clinicians. Therefore, we may consider including feedback on the performance categories of improvement activities and advancing care information in future years. Further, before we consider adding improvement activities and advancing care information data to the performance feedback we would like to engage in stakeholder outreach to understand what data fields might be helpful and actionable for MIPS eligible clinicians. Regarding the MIPS final score, this is something we are targeting to provide annually as part of the performance feedback as the program evolves. As technically feasible, we are also planning to provide data fields such as the final score and each of the four performance categories in future performance feedback once MIPS data become available. In addition, we plan to explore the possibility of including the MIPS payment adjustment factor (and, as applicable, the additional MIPS payment adjustment factor) in future performance feedback. We solicited comment on the frequency with which this performance feedback should be provided, considerations for including improvement activities and advancing care information, and data fields that should be included in the performance feedback as this program evolves.

    The following is a summary of the comments we received regarding our proposal to provide annual performance feedback on the quality and cost performance categories starting July 1, 2017, which would be based on historic/baseline information and include fields similar to QRURs.

    Comment: Some commenters were not in support of providing performance feedback. However, the majority of commenters supported providing performance feedback. Some commenters agreed with the proposal to provide initial feedback starting on July 1, 2017 based on historical data for the quality and cost performance categories.

    With regard to the frequency of providing performance feedback, commenters' suggestions ranged from annual feedback to feedback within 6 weeks of the performance period. The majority of commenters stated that annual feedback would not provide timely information or frequent feedback, because the long look-back periods hinder the ability to improve care. Many commenters supported real-time feedback to eligible clinicians and groups, and suggested making feedback available during the performance periods so clinicians could correct errors in a timely fashion. The majority of comments supported quarterly feedback from CMS, with some commenters noting this should begin in 2017. One commenter requested that CMS adopt a requirement that eligible clinicians be furnished quarterly feedback on the advancing care information performance category during the performance period.

    One commenter stated that 6 months is the ideal target for providing feedback, to allow for unavoidable claim run-out and review processes, while some commenters supported monthly performance feedback so that workflow adjustments could be made to improve performance. Another commenter noted that performance feedback should be provided no later than 45 days following the end of a performance period. One commenter requested that performance feedback be available and accessible upon request. One commenter recommended that CMS allow eligible clinicians to choose whether they want to receive more current feedback, such as quarterly.

    Another commenter recommended that performance feedback be provided prior to the end of the performance period. Other commenters suggested that the final performance feedback be provided no later than October 1 of the reporting year. Another commenter expressed that performance feedback to eligible clinicians would only be effective if it came in time to make meaningful changes to the practice, and that, consequently, July 1 was too late in the year for feedback.

    Some commenters believed there is value in submitting data more frequently (for example, an iterative process where practices and vendors submit data routinely); but if CMS intends to provide feedback after eligible clinicians submit their data and not on a frequent basis, then eligible clinicians should not be required to submit data more frequently.

    Another commenter recommended an approach that allows for timely, actionable feedback, such as the Bundled Payments for Care Improvement (BPCI) model, which offers monthly data files and quarterly reconciliation reports with subsequent true-ups.

    Response: As we indicated in the proposed rule, our goal is to provide even more timely feedback under MIPS as the program evolves. We do note that there are a number of challenges with providing feedback more frequently than annually, namely that, for the MIPS performance period, data will be received on an annual basis for the majority of submission mechanisms under our final policies in this rule. However, as noted in section II.E.4., we will, if technically feasible, allow for submissions during the performance period. In that section we note that having more frequent data submissions is a preliminary step toward being able to issue more timely feedback. We will provide the first performance feedback on the quality and cost performance categories by July 1, 2017. We believe that additional steps need to be taken both internally by CMS and through external stakeholder outreach and engagement to move toward a more frequent data submission process, which will enable CMS to provide more timely or real-time feedback. Additionally, we do not currently have the ability to provide feedback more frequently than annually because data will be submitted to CMS by clinicians and their third party intermediaries on an annual basis. However, we will take this comment into consideration as we develop the processes to provide more frequent feedback, including what frequency requirements should be placed on data submission.

    Comment: Some commenters requested clarification on whether CMS would provide clinician and/or TIN specific feedback about quality during the reporting year.

    Response: We can only provide feedback on performance as often as data are reported to us; for MIPS, this will be on an annual basis for all quality submission mechanisms except claims and administrative claims. As noted in section II.E.4., we will, if technically feasible, allow the submission of data more frequently throughout the year, which would enable us to generate additional feedback that is accurate and meaningful to MIPS eligible clinicians.

    Comment: One commenter requested that CMS expand the type of data available to clinicians on the cost performance category. One commenter believed that cost data should be provided to eligible clinicians on a rolling quarterly basis. A few commenters requested more frequent performance feedback for cost, asked that cost information be available to clinicians as soon as possible during the performance period, and asked that the attribution process be kept transparent.

    One commenter noted that clinicians need real-time information, including cost attribution, to perform well and achieve the Quality Payment Program's goals. One commenter requested that CMS provide attributed cost beneficiary lists and other data that can provide timely and actionable insights to eligible clinicians and their organizations on a quarterly basis. Another commenter recommended making information about the cost of local specialists available in performance feedback to inform referral decisions, which can affect cost measure performance. Some commenters recommended that CMS provide patient-level claims data for each cost performance measure so clinicians can understand specific care pathways and referral patterns that drive unnecessary expenditures. Another commenter suggested allowing clinicians to drill down to the un-aggregated patient level in performance feedback for cost.

    Some commenters recommended that CMS provide the ability for eligible clinicians and organizations to run real-time cost measure reports on the CMS Web site, as waiting for CMS to publish mid-year or even quarterly reports does not provide sufficient time to design and implement improvement interventions.

    One commenter encouraged CMS to provide cost performance feedback that makes it possible for the data to be incorporated into other reporting and analytics tools the clinician might be using and allows the clinician to monitor their scores throughout the reporting period.

    Response: We do intend to provide performance feedback on cost measures, as further described in section II.E.5.e. of this final rule with comment period. As technically feasible, we will provide performance feedback on the measures specified for the cost performance category. We also plan to provide feedback on episode-based measures, as we believe this information will be useful to eligible clinicians, even though some of these episode-based measures have not been adopted for the cost performance category for the CY 2017 performance period; they could be used in future years if proposed through rulemaking (see section II.E.5.e. of this rule). Additionally, some of these measures will be released in the 2015 S-QRURs that will be available in October 2016. We are still determining the formatting and details of that data. We will publish the cost measure specifications and attribution methodology on our Web site. We also agree that the goal of performance feedback will be to provide feedback to MIPS eligible clinicians regarding the cost performance category as frequently as is meaningful, and this is what we are working toward in the future as we build the web-based application for performance feedback distribution.

    Additionally, section 1848(q)(12)(B)(i) of the Act states that beginning July 1, 2018, the Secretary shall make available to MIPS eligible clinicians information about the items and services for which payment is made under Title 18 that are furnished to individuals who are patients of MIPS eligible clinicians by other suppliers and providers of services. This information may be made available through mechanisms determined appropriate by the Secretary. We agree this information would be useful to eligible clinicians, and we are therefore targeting to include this information in the performance feedback beginning July 1, 2018.

    Comment: One commenter indicated that CMS needs to create performance feedback that shows quality and cost at the measure level and change in performance over time in order for information to be used in performance improvement. Another commenter suggested that CMS provide transparency on quality measurement data at both the individual and group level.

    Response: We agree providing performance feedback that shows quality and cost at the measure level would be useful to MIPS eligible clinicians, and we plan to include this data beginning July 1, 2018. As technically feasible we intend to incorporate improvement information into the performance feedback, when available.

    Comment: One commenter requested clarification on whether the QRUR would still be utilized under MIPS in the same way it is being utilized for PQRS now. Some commenters were concerned about using the QRURs as the template for MIPS performance feedback, expressing their belief that the QRURs were not clear in the feedback provided, not actionable on the eligible clinician's behalf, and not inclusive of data that would allow the eligible clinician to compare against and improve upon the performance thresholds. One commenter recommended improvements to the content and accessibility of supplemental QRURs to encourage familiarity with cost performance data and the clinical episodes that will be attributed to a clinician or group. One commenter suggested the QRUR be supplemented with additional information on topics such as beneficiary attribution characteristics. Another commenter requested that CMS encourage clinicians to access performance feedback to supplement the information they receive from CMS on their Medicare Fee-for-Service claims.

    One commenter requested that CMS continue to provide timely mid-year and end-of-year QRURs to eligible clinicians in order for them to receive timely feedback about their performance and payment adjustments under MIPS. Some commenters supported quicker and broader access to performance scores and “feedback reports” such as those provided to clinicians as part of the Physician Feedback Program (QRURs), and the Medicare Shared Savings Program for ACOs for quality improvement purposes. One commenter suggested the QRURs be provided on a quarterly basis moving forward with the Quality Payment Program so the information is timely for performance feedback.

    One commenter noted concerns with the implementation feasibility of getting performance feedback out for mid-year performance given past experience with the PQRS and QRURs, and urged CMS to make the investments needed in resources and systems to ensure timely feedback.

    Response: Under section 1848(n)(11) of the Act, as added by section 101(d)(3) of the MACRA, reports under the Physician Feedback Program (in other words, the QRURs) shall not be provided after December 31, 2017, and will be succeeded by the MIPS performance feedback under section 1848(q)(12) of the Act. The QRURs have provided information on quality and cost measure performance as well as the beneficiary and clinician-level data underlying and driving the measures; therefore, while we believe this is a good starting point for performance feedback under the MIPS, we do not anticipate using the same format as the QRURs for future years of the Quality Payment Program. We will continue to engage in user research with front-line clinicians and other stakeholders to ensure we are providing the performance feedback data in a user-friendly format, and that we are including the data most relevant to clinicians.

    Comment: Many commenters suggested feedback be included on all four performance categories, so eligible clinicians could know how they are doing in each performance category. Some commenters recommended that CMS use its discretion to expand the performance feedback to relay information on improvement activities and advancing care information.

    Response: We agree that all four performance categories may be beneficial to include in performance feedback. For the first performance feedback, as we proposed, only quality and cost will be provided. We will continue to work with stakeholders on the best way to include all four performance categories in performance feedback. A summary of comments received regarding future considerations for including improvement activities and advancing care information, and data fields that should be included in the performance feedback as this program evolves can be found below in section II.E.8.a.(7) of this final rule with comment period.

    Comment: One commenter expressed support for providing more frequent real-time feedback to eligible clinicians on administrative claims-based measures. Another commenter believed that CMS should make claim-level data for all potential beneficiaries available to practices with MIPS eligible clinicians.

    Response: We will be providing performance feedback on these types of measures, as applicable. We also agree that the goal of performance feedback will be to provide feedback to clinicians as frequently as is meaningful, and this is what we are working toward in the future as we build the web-based application and work with registries, EHRs, and QCDRs for performance feedback.

    Comment: One commenter believed that CMS does not need to create a new feedback reporting system, but should instead focus on improving the current system.

    Response: We agree, and will continue working with stakeholders to improve the future performance feedback for the Quality Payment Program.

    Comment: Some commenters requested that eligible clinicians who are not required, but who report voluntarily, receive the same access to performance feedback as MIPS eligible clinicians.

    One commenter requested that CMS expedite the performance feedback process so that partial‐year data on performance in the transition year of the MIPS is available to physicians prior to July 1, 2018—and preferably prior to January 1, 2018.

    Response: We have considered the comments received and will take them into consideration in the future development of performance feedback through separate notice-and-comment rulemaking.

    After consideration of the comments, we are finalizing that we will use the QRUR released on September 26, 2016 (referred to as the 2015 Annual QRUR) as the first MIPS performance feedback provided under section 1848(q)(12)(A)(i) of the Act, which will contain quality and cost data. The September 2016 QRURs are available and can be accessed at https://portal.cms.gov/wps/portal/unauthportal/home/. We encourage physicians and physician groups to access their report and review the quality and cost information to prepare for the Quality Payment Program. We note that this report will not contain data regarding the final score or payment adjustment for the Quality Payment Program; that information is not yet available and will therefore be provided in future performance feedback. Further, we may have MIPS eligible clinicians who will not have historical data available through the September 2016 QRUR to produce performance feedback. For those eligible clinicians, we will not be able to produce performance feedback until they submit data through the Quality Payment Program. Additionally, we note for MIPS eligible clinicians and groups that the Quality Payment Program will produce performance feedback as long as quality data is submitted or at least one patient is attributed to a MIPS eligible clinician or group for cost or quality measurement.

    Lastly, we note that these QRURs are produced at the TIN level, which is the level for applying adjustments under the VM program. We recognize that assessments under MIPS may be conducted at either the individual or group level, and that payment adjustments will be made at the TIN/NPI level; therefore, QRURs may not provide sufficient detail for those clinicians who are currently assessed at the TIN level under the VM, but who may choose to be assessed at the individual level under the Quality Payment Program. To address this issue, we intend, prior to the 2018 performance period, to provide as much feedback as technically possible to clinicians at the individual level. Since CMS will not yet have performance data for the 2017 performance period at that time (as that data is not yet available), we will not be able to provide feedback on that data. We intend to look into providing feedback to clinicians on the data we do have available, for instance, claims-based cost data or claims-based outcome measures.

    The September 26, 2016 QRURs show how physician groups and physician solo practitioners performed in 2015 on quality and cost measures relative to national benchmarks and indicate whether physicians will receive an upward, neutral, or downward adjustment under the VM in 2017. The QRURs also contain important information about care delivered to Medicare beneficiaries that can be used to better understand and improve quality and cost performance under the VM, including information about hospitalizations and other providers that can be used to improve quality and better coordinate care.

    By utilizing an already existing report that provides quality and resource use (for example, cost) feedback, we intend to focus resources on continued user testing with front-line clinicians and other stakeholders and on development of new and improved methods and mechanisms for performance feedback, including but not limited to those suggested in these comments. We are utilizing an existing report because it does not make sense for us to create a duplicative report containing the same information as the September 26, 2016 QRUR and provide it to clinicians beginning July 1, 2017; doing so would only confuse clinicians. Additionally, no clinicians would have submitted data for the Quality Payment Program to us before July 1, 2017. Therefore, we intend to invest our resources in creating easy-to-understand and meaningful performance feedback for the Quality Payment Program beginning July 1, 2018.

    We note, however, that we expect to provide the 2016 annual QRUR in early fall 2017 that will show how groups and solo practitioners performed in 2016 on the quality and cost measures used to calculate the 2018 VM, as well as their 2018 VM payment adjustment. The 2016 annual QRUR will be the last annual QRUR provided to groups and solo practitioners, as the VM program is sunsetting. We believe the 2016 annual QRUR is important to provide ongoing feedback to clinicians and groups to support their successful transition to the Quality Payment Program.

    Further, we note that in the next performance feedback, we intend to provide performance feedback for MIPS data collected in 2017. This data could potentially include all applicable data reflecting CY 2017 performance, including data on the quality and cost performance categories, as well as data regarding the final score and payment adjustment. This reflects our commitment to providing information as timely as possible to eligible clinicians in order to help them predict their performance in MIPS. CMS intends for this performance feedback to be available in the new format for the 2017 performance period by summer 2018, after 2017 reporting closes. For updates and more information, please see QualityPaymentProgram.cms.gov.

    In addition, we solicited comments on whether we should include first year measures in the performance feedback, meaning new measures that have been in use for less than 1 year, regardless of submission methods. We also solicited comments on including the final score in performance feedback as the program evolves. The following is a summary of the comments we received.

    Comment: Some commenters encouraged CMS to provide information in its performance feedback on first year MIPS measures, so that eligible clinicians can determine their performance on these measures before they are scored on them. One commenter stated that whether feedback on first year QCDR measures should be reported may have to take into consideration factors such as the number of clinicians reporting on a measure and other concerns, and should be resolved in conjunction with the QCDR sponsor. Another commenter noted that while CMS may be unsure how to analyze first year measures, it is important for CMS to provide as much data as possible in the performance feedback, as long as such data are not shared publicly or used to evaluate performance.

    Response: We understand the rationale that by providing first year measures in performance feedback, MIPS eligible clinicians may get a better sense of how they are performing on those measures. We need to review the data from the first year measures before these data are incorporated into performance feedback, as we want to ensure the data we are providing in the performance feedback is useful and actionable for our stakeholders. After reviewing data submitted for the first MIPS performance period and working with stakeholders on user experience testing, we will consider including first year measures in the performance feedback.

    For detailed information regarding first year measures and public reporting on Physician Compare, we refer commenters to section II.E.10. of this final rule with comment period.

    Comment: One commenter believes that CMS should provide feedback every 45 days instead of every 6 months in regard to negative, zero, or positive MIPS payment adjustment status.

    Response: As noted in the proposed rule (81 FR 28277), regarding the MIPS final score, this is something we are targeting to provide annually as part of the performance feedback as the program evolves. As technically feasible, we are also planning to provide data fields such as the final score and each of the four performance categories in future performance feedback once MIPS data becomes available. We note that we have not committed to providing feedback every 6 months, though we are working to increase the frequency of feedback we can provide.

    We have considered the comments received and will take them into consideration in the future development of performance feedback through separate notice-and-comment rulemaking.

    (b) MIPS APM Entities

    In the proposed rule, we proposed that MIPS eligible clinicians who participate in MIPS APM Entities would receive performance feedback, as technically feasible (81 FR 28247). A summary of comments on those proposals can be found in section II.E.5.h.(16) of this final rule with comment period.

    (2) Mechanisms

    Under section 1848(q)(12)(A)(ii) of the Act, the Secretary may use one or more mechanisms to make performance feedback available, which may include use of a web-based portal or other mechanisms determined appropriate by the Secretary. For the quality performance category, described in section 1848(q)(2)(A)(i) of the Act, the feedback shall, to the extent an eligible clinician chooses to participate in a data registry for purposes of MIPS (including registries under sections 1848(k) and (m) of the Act), be provided based on performance on quality measures reported through the use of such registries. For any other performance category (that is, cost, improvement activities, or advancing care information), the Secretary shall encourage provision of feedback through qualified clinical data registries (QCDRs) as described in section 1848(m)(3)(E) of the Act.

    We understand that the PQRS and VM programs have employed various communication strategies to notify health care clinicians of the availability of their PQRS feedback reports and QRURs, respectively, through the CMS portal. However, many health care clinicians are still unaware of these reports and/or have difficulty accessing their reports in the portal. Further, we are aware that some health care clinicians perceive the current reports as complex and often difficult to understand, while others find the QRURs, and the drill-down data included in them on the Medicare beneficiaries they serve, very useful. We are continuing to work with stakeholders to improve the usability of these reports. As we transition to MIPS, we are committed to ensuring that eligible clinicians are able to access their performance feedback, and that the data are easy to understand while providing information that will help drive quality improvement. We proposed to initially make performance feedback available using a CMS designated system, such as a web-based portal and, if technically feasible, perhaps an interactive dashboard. As further discussed in the proposed rule (81 FR 28280), we also proposed to leverage additional mechanisms such as health IT vendors, registries, and QCDRs to help disseminate data/information contained in the performance feedback to eligible clinicians, where applicable. At this time, we believe that these additional mechanisms will only be able to provide performance feedback information on the quality performance category for MIPS.

    We plan to coordinate with third party intermediaries such as health IT vendors and QCDRs as MIPS evolves to enable additional feedback to be sent on the cost, advancing care information and improvement activities performance categories. We solicited comment on this for future rulemaking.

    Comments received through the MIPS and APMs RFI noted issues associated with access to the current feedback reports for PQRS. Specifically, comments noted issues with Enterprise Identity Management (EIDM) and access to the portal to view PQRS feedback reports. Commenters also noted the need for a mechanism to notify EPs when their PQRS feedback report is available. We proposed to use the information contained in the provider or supplier's Medicare enrollment records, and stored in the Provider Enrollment, Chain, and Ownership System (PECOS), as the system of records for eligible clinicians' contact information to be used when the MIPS performance feedback is available. It is therefore critical that eligible clinicians keep their Medicare enrollment records in PECOS current on a consistent basis, especially with regard to phone and email contact information. If more than one email address is listed, the email address to be used for communication should be designated. We also intend to provide education and outreach on how to access performance feedback. We solicited comment on additional means that could be used to notify or contact MIPS eligible clinicians and groups when their performance feedback is available.

    The following is a summary of the comments we received regarding our proposal to provide performance feedback through a CMS designated system (such as a web-based portal or interactive dashboard), and to leverage additional mechanisms such as health IT vendors, registries, and QCDRs to help disseminate data/information contained in the performance feedback to eligible clinicians, where applicable.

    Comment: Commenters stated the feedback should be easy/clear to understand and easy to access, with helpful education and outreach. Some commenters suggested the process to access feedback should be streamlined and less complicated.

    Many commenters recommended an interactive web-based dashboard for feedback delivery that provides data in real time to eligible clinicians, at least on a quarterly basis. Some commenters recommended that such a dashboard display performance feedback through graphics. A few commenters recommended not implementing performance feedback for the quality and cost performance categories until CMS has had a chance to bring online a web portal where MIPS eligible clinicians can log in and see their final score. The commenters explained that without understanding how they are scoring relative to their peers under MIPS, many may inadvertently fall to the bottom of the quality or cost performance categories.

    One commenter recommended that CMS work with health IT vendors to develop a real-time feedback dashboard that can be incorporated into health IT products, such as EHRs, because, with annual feedback, eligible clinicians will not know where they stand relative to the performance threshold until after the close of the performance period. Another commenter recommended that, to the extent it is feasible, CMS consider partnering with registry vendors to integrate reports into registry interfaces, enabling those eligible clinicians reporting via an EHR or QCDR to view performance feedback in a dashboard setting that is familiar to them.

    Some commenters suggested that CMS create an electronic interactive tool for eligible clinicians to quickly gauge their progress by calculating scores, which can also help eligible clinicians identify measures that are applicable to their practice. Another commenter noted that intuitive tools to easily calculate their MIPS score are an important aspect for clinicians and groups in small, rural, and underserved areas, whether such a tool is embedded within the health IT vendor's product or a registry, or available on the CMS Web site. The commenter also stated that this must be a robust tool that allows clinicians and groups the ability to securely visualize external data, such as aggregate claims data used to calculate episode measures.

    One commenter recommended performance feedback be available online and in a timely fashion, ideally in the way that the same information would be available to the public, but well in advance of publication. One commenter suggested CMS leverage the My Quality Net Web site to provide performance feedback to clinicians, since hospitals and other clinicians are already accustomed to using it for federal quality reporting programs.

    Another commenter recommended that CMS create a clinician portal that will allow eligible clinicians and other clinicians to estimate their payment adjustment.

    Some commenters requested that performance feedback provide the ability to drill down for use by individual physicians.

    Response: We agree performance feedback should be clear, easy to understand, and provided to eligible clinicians in a user-friendly format (for example, a web-based interactive dashboard). In the future, we intend to provide functionality for an interactive experience for performance feedback. As we build the web-based application for performance feedback, we will continue working with stakeholders (for example, as part of usability testing) to ensure the user experience is accounted for when building this system. If technically feasible, we will work toward incorporating a means for individual clinicians to drill down into performance feedback. We will take all of these commenters' recommendations into consideration as we develop performance feedback mechanisms. While we cannot speak to the plans of health IT vendors, registries, or other third party intermediaries, we expect to continue working with them, as well as clinicians, specialty organizations, and other stakeholders, to promote continued growth in the availability of timely, easy-to-use performance feedback for clinicians through these mechanisms in complement to the feedback that will be available from CMS. Further, since we have not required advance registration for reporting, we note that participation in MIPS will be at the level at which data is submitted to CMS. Thus, if individual data is submitted, feedback will be at the individual level; if group data is submitted, feedback will be at the group level.

    Comment: Some commenters suggested a process be included for physicians to request and implement revisions when performance feedback data are incorrect. Another commenter suggested being allowed to resubmit claims that were incorrectly submitted because, historically in the PQRS and VM programs, by the time feedback was provided it was too late and the practice was subject to downward payment adjustments.

    Response: We intend to build in a process for updates/revisions needed for performance feedback, which would be separate from the targeted review process as described in further detail in section II.E.8.c. of this final rule with comment period. We note that as described in section II.E.5. of this rule we do not have the ability to allow for claims to be resubmitted only for the reason of appending a quality data code.

    Comment: One commenter recognized that while the goal is to provide quarterly performance feedback, the feedback might not be issued until the first half of a year because historically in the PQRS program most registries do not open or accept data submission until the second quarter of the performance period.

    Another commenter agreed with utilizing vendors, such as registries, to communicate performance feedback in real time so that performance can be monitored at any time. Another commenter recommended that CMS continue to evaluate and work with vendors to determine how health IT vendors and QCDRs can be leveraged to provide more ongoing performance feedback to clinicians, with the goal being an agile method of analyzing performance without manual entry or mistakes. One commenter requested that CMS leverage advanced electronic reporting mechanisms to reduce the long feedback turnaround time in claims-based systems and to provide performance data on improvement activities and advancing care information in addition to quality and cost.

    One commenter recommended that CMS provide third party intermediaries access to clinician performance feedback for the clinicians for whom they are submitting information, in order to allow third party intermediaries to validate and troubleshoot any issues with the data.

    One commenter suggested that CMS allow clinicians to elect to receive their performance feedback through a Regional Healthcare Innovative Collaborative (RHIC), which is able to provide a multi-payer perspective.

    Response: In future years of the program, we plan to leverage additional pathways such as collaborative efforts with health IT vendors, registries, and QCDRs to help disseminate data/information contained in the performance feedback to eligible clinicians, where applicable. We will look to increase feedback to third party intermediaries in the Quality Payment Program; and will continue working with stakeholders as we move toward implementing this functionality. We also direct these commenters to the third party data submission section (II.E.9.) of this final rule with comment period.

    Comment: Some commenters suggested individual eligible clinicians should be able to access their performance feedback independently, instead of having to access it through a group. One commenter suggested performance feedback also be available to practice administrators (to view all NPIs at the TIN level, as opposed to each individual eligible clinician) and related staff. Some commenters suggested that the performance feedback also be available to practice staff designated by the eligible clinician. Some commenters believed that the EIDM process to access performance feedback should be re-evaluated, noting that practices of all sizes (solo practices and those with two or more eligible clinicians) should need only one EIDM account to view performance feedback, as well as to submit data for the practice. Another commenter requested that CMS make the log-in process for accessing performance feedback more user-friendly; currently it is overly complicated, with cumbersome password requirements that reset at short intervals, which limits access to the current PQRS feedback reports and QRURs.

    One commenter agreed with offering clinicians the option to receive performance feedback through one channel, and requested that CMS make this a priority for future performance feedback years. The commenter also recommended that as part of this initiative, CMS work with stakeholders toward creating a channel for eligible clinicians to view their performance on both quality and cost measures across all (or multiple) payers. The commenter also noted that it is critical for eligible clinicians to have access to a resource that provides them with a complete picture of their practice across all payers.

    Other commenters stated that many clinicians are unaware of the current QRURs or have had trouble accessing them, noting that the login process is unnecessarily complicated, that it is not always clear who has access, and that those who have access are not usually front-line clinicians. These commenters strongly encouraged CMS to push performance feedback out to clinicians rather than waiting for clinicians to access the feedback.

    Response: We agree the process to access performance feedback should be easy and streamlined. While we have taken steps to streamline the current PQRS feedback reports and QRURs, more could be done. We intend for MIPS eligible clinicians to be able to access their performance feedback independently through a web-based application. Since performance feedback will contain secure data, we recognize the need to balance access with maintaining security. We intend to continue the efforts made under the VM program, to engage physicians and encourage and assist them to access their performance feedback. We will take the comments into account and continue working with stakeholders as we build the CMS designated system for performance feedback.

    Comment: One commenter requested that performance feedback data be provided without charge.

    Response: As is done currently with the PQRS feedback reports and QRURs, performance feedback will also be provided through a CMS designated system, with no charge to the eligible clinician.

    After consideration of the comments, we are finalizing these policies as proposed. In future years of the program, performance feedback will continue to be available through a CMS designated system, which we intend to be a web-based application. The intent is that the next performance feedback, anticipated to be released around July 1, 2018, will be the first in the anticipated new dashboard format. It will be provided via the new Quality Payment Program portal, and we intend to leverage additional mechanisms such as health IT vendors, registries, and QCDRs to help disseminate data/information contained in the performance feedback to eligible clinicians, where applicable. As we have stated previously, we will continue to engage in user research with front-line clinicians to ensure we are providing the performance feedback data in a user-friendly format, and that we are including the data most relevant to clinicians. For updates and more information, please see QualityPaymentProgram.cms.gov.

    Additionally, we did not receive comments on our proposal to use the information contained in the provider or supplier's Medicare enrollment records, and stored in the Provider Enrollment, Chain, and Ownership System (PECOS), as the system of records for eligible clinicians' contact information that should be used when the MIPS performance feedback is available. Therefore, we are finalizing this policy as proposed.

    We also sought comment for future rulemaking on coordinating with third party intermediaries such as health IT vendors and QCDRs as MIPS evolves to enable additional feedback to be sent on the cost, advancing care information and improvement activities performance categories. We did not receive comments on additional feedback that could be sent through third party intermediaries. We plan to work with third party intermediaries as we continue to develop the mechanisms for performance feedback, to see where we may be able to develop and implement efficiencies for the Quality Payment Program. Any regulatory changes would be made through future notice-and-comment rulemaking.

    (3) Use of Data

    Under section 1848(q)(12)(A)(iii) of the Act, for purposes of providing performance feedback, the Secretary may use data, for a MIPS eligible clinician, from periods prior to the current performance period and may use rolling periods in order to make illustrative calculations about the performance of such professional. We believe “illustrative calculations” means an interim snapshot in time of performance, or perhaps a “dry-run” of the data including measure rates. This would provide an indication of how a MIPS eligible clinician might be performing, but would not be conclusive. Since MIPS will not likely have comparable data until year 3 of the program, these “illustrative calculations” could be based on historical data sets available to CMS until actual data for MIPS is available.
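
    As a purely hypothetical illustration of what a rolling-period, interim measure rate might look like, the sketch below recomputes a quality measure rate over the most recent four quarters of historical data as each new quarter becomes available. The data, window length, and function are invented for illustration only; the statute and this rule do not prescribe any particular computation.

```python
# Hypothetical sketch of a rolling-period "illustrative calculation" of a
# measure rate from historical quarterly counts. All values are invented.

quarterly = [  # (numerator events, denominator-eligible cases) per quarter
    (18, 25), (20, 24), (15, 22), (19, 26), (21, 27),
]

def rolling_rate(quarters, window=4):
    """Yield an interim measure rate (percent) over each trailing window."""
    for end in range(window, len(quarters) + 1):
        num = sum(n for n, _ in quarters[end - window:end])
        den = sum(d for _, d in quarters[end - window:end])
        yield round(100 * num / den, 1)

print(list(rolling_rate(quarterly)))  # [74.2, 75.8]: snapshots, not conclusive results
```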

    We did not request comments in this section, but did receive a comment which is summarized below.

    Comment: One commenter believes that if CMS is able to make “illustrative calculations” in advance of a performance year, then CMS should be able to provide eligible clinicians with performance feedback quarterly in advance of the performance year for all four performance categories.

    Response: As we noted in the proposed rule (81 FR 28277-28278), we believe “illustrative calculations” means an interim snapshot in time of performance, or perhaps a “dry-run” of the data including measure rates, based on historical data available. This would provide an indication of how a MIPS eligible clinician might be performing, but would not be conclusive. Since MIPS will not likely have comparable data until year 3 of the program, these “illustrative calculations” could be based on historical data sets available to us until actual data for MIPS is available. Also, as noted previously in this section of this final rule with comment period, the goal is to provide future performance feedback on a quarterly basis and, once technically feasible, to include all four performance categories in the performance feedback.

    We have considered the comments received and will take them into consideration in the future development of performance feedback through separate notice-and-comment rulemaking.

    (4) Disclosure Exemption

    As stated under section 1848(q)(12)(A)(iv) of the Act, feedback made available under section 1848(q)(12)(A) of the Act shall be exempt from disclosure under 5 U.S.C. 552 (the Freedom of Information Act) (FOIA).

    We did not request comments in this section, but we received the following comment:

    Comment: One commenter expressed support for the disclosure exemption for MIPS performance feedback under the Freedom of Information Act.

    Response: As noted in the proposed rule (81 FR 28278), section 1848(q)(12)(A)(iv) of the Act provides that feedback made available under section 1848(q)(12)(A) of the Act shall be exempt from disclosure under FOIA.

    (5) Receipt of Information

    Section 1848(q)(12)(A)(v) of the Act states that the Secretary may use the mechanisms established under section 1848(q)(12)(A)(ii) of the Act to receive information from professionals. This allows for expanded use of the feedback mechanism not only to provide feedback on performance to eligible clinicians, but also to receive information from professionals.

    We intend to explore the possibility of adding this feature to the CMS designated system, such as a portal, in future years under MIPS. This feature could be a mechanism through which MIPS eligible clinicians can send us their feedback (that is, if they are experiencing issues accessing their data, have technical questions about their data, etc.). We appreciate that MIPS eligible clinicians may have questions regarding the information contained in their performance feedback. To assist MIPS eligible clinicians, we intend to establish resources, such as a helpdesk, or offer technical assistance to help address questions, with the goal of linking these resource features to the CMS designated system, such as a portal.

    Additionally, we solicited comment on the types of information eligible clinicians would like to send to us via this mechanism.

    The following is a summary of the comments we received.

    Comment: Some commenters recommended a prompt and transparent notification process when errors or inconsistencies are identified in the performance feedback so that errors can be remedied or targeted review requests may occur in a timely manner. Another commenter suggested that a mechanism be created for eligible clinicians to receive comprehensive periodic feedback or updates from CMS as to how they are performing before each performance period ends.

    Other commenters requested that CMS guarantee firm turnaround times for performance feedback and offer teleconferences to work with eligible clinicians in reviewing the patient data. Some commenters urged CMS to devote the necessary resources, including staff, to help clinicians and administrators interpret the performance feedback (for example, a helpdesk).

    One commenter recommended that CMS provide technical assistance to eligible clinicians to help understand performance feedback (for example, more practical and specific tips in the help documents for education and outreach, especially as this is a new program).

    Response: We appreciate that MIPS eligible clinicians may have questions regarding the information contained in their performance feedback. To assist MIPS eligible clinicians, we intend to establish resources, such as the Quality Payment Program Service Center (for example, a helpdesk), or offer technical assistance to help address questions, with the goal of linking these resource features to the CMS designated system, such as a web-based application. We also intend to explore the possibility of adding to a web-based application a mechanism to receive information from eligible clinicians. These suggestions will be taken into consideration for the future development of performance feedback.

    Comment: Some commenters suggested using the IHS/Tribal/Urban Indian list serve to notify MIPS eligible clinicians and groups when their performance feedback is available.

    Response: We agree and will implement this suggestion into the education and outreach planned for performance feedback.

    We have considered the comments received and will take them into consideration in the future development of performance feedback through separate notice-and-comment rulemaking.

    (6) Additional Information—Type of Information

    Section 1848(q)(12)(B)(i) of the Act states that beginning July 1, 2018, the Secretary shall make available to MIPS eligible clinicians information about the items and services for which payment is made under Title 18 that are furnished to individuals who are patients of MIPS eligible clinicians by other suppliers and providers of services. This information may be made available through mechanisms determined appropriate by the Secretary, such as the proposed CMS designated system that would also provide performance feedback. Section 1848(q)(12)(B)(ii) of the Act specifies that the type of information provided may include the name of such providers, the types of items and services furnished, and the dates items and services were furnished. Historical data regarding the total, and components of, allowed charges (and other figures as determined appropriate by the Secretary) may also be provided. We solicited comment on the type of information MIPS eligible clinicians would find useful, the preferred mechanisms to provide such information, and the arrangements that should be in place regarding these data (that is, eligible clinicians sharing data). We also solicited comment as to whether additional information regarding beneficiaries attributed to a MIPS eligible clinician under the cost performance category, or information about the MIPS eligible clinician(s) to whom the beneficiaries served by a given MIPS eligible clinician were attributed, would be useful feedback in regard to quality improvement efforts.

    The following is a summary of the comments we received.

    Comment: Some commenters suggested performance feedback should include patient-level data in order to aid eligible clinicians to improve quality and cost. One commenter stated it would be easier to provide timely performance feedback to eligible clinicians if smaller statistically relevant sample sizes were reported instead of all Medicare patient data.

    Response: As stated above, section 1848(q)(12)(B) of the Act does require patient information to be made available to MIPS eligible clinicians starting July 1, 2018. These suggestions will be taken into consideration as we implement this provision through future rulemaking.

    We have considered the comments received and will take them into consideration in the future development of performance feedback through separate notice-and-comment rulemaking.

    (7) Performance Feedback Template

    The performance feedback under section 1848(q)(12)(A) of the Act is meant to be meaningful and usable to eligible clinicians. In an effort to ensure these data are tailored to the needs of eligible clinicians, we solicited comment through the MIPS and APMs RFI and received numerous comments regarding the overall format of the performance feedback template. Suggestions were made on what this feedback should include for MIPS. We intend to collaborate with stakeholders outside of notice-and-comment rulemaking on how the performance feedback should look for MIPS, as well as on what data elements would be useful for eligible clinicians.

    We solicited comment on the fields that should be included in the performance feedback template for MIPS eligible clinicians. The following is a summary of the comments we received.

    Comment: Some commenters supported the idea of a standardized performance feedback template, and encouraged CMS to engage with stakeholders and non-physician practitioners to obtain feedback about the template. One commenter suggested that CMS should consider a number of mechanisms to receive input from stakeholders and provide opportunities to learn about the performance feedback tools in development—for example, the Agency should consider hosting Open Door Forums (ODFs), and ensuring that detailed, comprehensive instructional materials are easily available online. One commenter suggested CMS revisit the MIPS LEAN Design Team materials from the CMS Quality Summit in December 2015. Commenters also suggested that CMS should include clear disclaimers about the limitations of the data.

    One commenter suggested that other information could be used in performance feedback by providing data on alternatives to the items or services provided that would have been more cost effective while delivering the same quality of care. Some commenters requested that CMS provide individual eligible clinician and group performance feedback in order to help eligible clinicians determine whether to continue reporting with the group or change to individual reporting.

    Another commenter recommended that performance feedback include improved transparency and additional data points for each reported measure. One commenter recommended that the performance feedback show both scoring and decile placement for individual eligible clinicians across the areas scored. One commenter believes that all reported measures should be included in performance feedback, and that every field that contributed to the score should be included as well. Some commenters suggested performance feedback include data fields that would assist with identification of patients served, costs, outcomes, where and what type of care was provided, quality of care for patients, and care coordination activities and needs, as well as functionality to compare directly against other eligible clinicians (for example, regionally and nationally). One commenter suggested that performance feedback include information more relevant to non-patient facing specialties.

    Some commenters suggested performance feedback include the place of service (POS) codes, geography (including state and Medicare locality), health system NPI, the subpart NPI where the services were delivered, and the NPI of the entity receiving assignment for professional services, as well as the ability to include additional identifiers if needed in the future to account for specialties. Another commenter recommended that performance feedback include information on suggested areas where the eligible clinician can improve, which would promote quality and help eligible clinicians avoid penalties. Other commenters recommended that the cause for a penalty be clearly articulated.

    Some commenters suggested performance feedback include as much data as possible as long as it is easy to understand, and recommended options for the format of performance feedback. Some commenters recommended a basic report containing the following information: performance threshold to date, where the clinician stands in performance, current possible payment adjustments (with exact reasoning for negative payment adjustment in order to improve for future reporting), and a roadmap to improve performance and avoid a downward adjustment. Some commenters recommended CMS use one comprehensive document for the MIPS performance feedback. Some commenters suggested a second report be included in the performance feedback on a more granular level, and contain MIPS specific components. Commenters also suggested that CMS put certain data in supplemental materials (for example, advancing care information) or appendices so that it does not detract from the main report for performance feedback.

    One commenter suggested that CMS issue an advancing care information experience report similar to the annual PQRS Experience Report with as much information as possible, including reporting experiences by specialty. The commenter noted that CMS could include information on whether each objective was met/not met for the base score; performance data on the objectives being assessed for the performance score; and whether an eligible clinician or group earned bonus points for each measure reported under the Public Health and Clinical Data Registry Reporting objective other than the Immunization Registry Reporting measure. Another commenter recommended that feedback for the advancing care information category include the objectives in which the practice attested for the previous reporting period and the points attributed to those objectives for purposes of calculating the composite score.

    A commenter recommended that CMS provide aggregate information by specialty to medical societies, as specialty societies do not have access to QRUR information at the individual clinician level or in aggregate, so they cannot provide meaningful analysis of current cost measures and assistance to clinician members. Another commenter requested that CMS provide additional data to support performance improvement efforts, because while the QRUR provides some ability to drill down into the data, the reports only provide patient-level expenditure data at the aggregate level compared to national benchmarks. Another commenter noted that performance feedback should include sufficient details on what patients and care have been attributed to the clinician and what other clinicians have partnered in that care.

    One commenter requested detailed performance feedback that highlights options for improvement activities, discusses incorrect reporting, and includes geographic components so eligible clinicians can review geographic variations in care processes and become acclimated to this new reporting category. Another commenter recommended that performance feedback for improvement activities categories be provided as soon as possible, and that the feedback from CMS should confirm that eligible clinicians have met the requirement by using a nationally accredited, certified patient-centered medical home or the degree to which they have met the improvement activities requirement through high- and medium-weighted improvement activities. One commenter believed that for improvement activities performance feedback, CMS could include information on how many and which activities were completed; the method of data submission used to submit improvement activities information; and, in the future, information on improvement relative to prior years. In addition, the commenter suggested that CMS should provide cumulative data about which improvement activities are being reported across MIPS as well as within each specialty designation. Another commenter recommended that electing to receive the performance feedback should also count as an improvement activity.

    Some commenters suggested that CMS should make performance feedback on their high-utilization patients available to eligible clinicians in as close to real time as possible or provide practices with reports similar to those under the Hospital Readmissions Reduction Program. Another commenter requested that CMS provide files to clinician practices similar to those provided to hospitals for the Medicare Spending Per Beneficiary measure that is part of Hospital Value-Based Purchasing.

    Some commenters suggested that performance feedback be available via paper reports. Another commenter suggested that performance feedback be provided in an importable form such as a worksheet as opposed to a PDF file, which would allow the eligible clinician more options when reviewing the data with other tools already in use by the eligible clinician. Other commenters noted that performance feedback should be provided in a format that allows eligible clinicians to sort, analyze, and review the data.

    Response: We agree with commenters about continually improving the usability of performance feedback. We will continue stakeholder outreach with the goal of making the performance feedback template available in a usable and user-friendly format, and we will consider different options before the performance feedback is displayed in a web-based application to MIPS eligible clinicians. We will work with stakeholders to consider the best means for providing improvement activities and advancing care information data in future performance feedback.

    We intend to develop the template for performance feedback, to the extent possible, by working with the stakeholder community in a transparent manner. We believe this will both encourage stakeholder commentary and ensure we end up with the best possible format(s) for feedback. CMS intends for performance feedback on the 2017 performance period to be available in the new format by summer 2018, after 2017 reporting closes.

    We have considered the comments received and will take them into consideration in the future development of performance feedback through separate notice-and-comment rulemaking.

    b. Announcement of Result of Adjustments

    Section 1848(q)(7) of the Act requires that under the MIPS, the Secretary shall, not later than 30 days prior to January 1 of the year involved, make available to MIPS eligible clinicians the MIPS payment adjustment factor (and, as applicable, the additional MIPS payment adjustment factor) applicable to the MIPS eligible clinician for items and services furnished by the professional for such year. The Secretary may include such information in the confidential feedback under section 1848(q)(12) of the Act.

    If technically feasible, we proposed to include the MIPS payment adjustment factor and, as applicable, the additional MIPS payment adjustment factor (collectively referred to as the “MIPS payment adjustment factors”) in the performance feedback for eligible clinicians provided under section 1848(q)(12)(A) of the Act. If it is not technically feasible to provide this information in the performance feedback, we proposed to make it available through another mechanism as determined appropriate by the Secretary (such as a portal or a CMS designated Web site) and solicited comment on mechanisms that might be appropriate. The first announcement will be available no later than December 1, 2018 to meet statutory requirements. We requested comment on these proposals.

    The following is a summary of the comments we received regarding our proposal to include the MIPS payment adjustment factors in the performance feedback, if technically feasible.

    Comment: One commenter suggested that performance feedback should include a potential MIPS payment adjustment factor based on current performance or alternatively a tool to run “what if” scenarios regarding the clinician's adjustment.

    Response: If technically feasible, we proposed (81 FR 28164) to include the MIPS payment adjustment factors in the performance feedback for eligible clinicians. We appreciate these suggestions, and we will take this into consideration in the development of performance feedback.

    Comment: A few commenters expressed concern that 30 days would not be enough time to respond to the announcement of the result of adjustments. One commenter requested a minimum of 90 days instead, while other commenters suggested 120 days' notice to allow clinicians the ability to plan financially.

    Response: We agree with the commenters and would like to publish this information as early as possible to allow clinicians more time to review and understand the adjustments that will be applied to their payments. We will take this into consideration as we plan for the first announcement, which will be available no later than December 1, 2018 to meet statutory requirements.

    Comment: One commenter recommended that CMS notify clinicians as soon as feasible regarding payment adjustments to allow practices to prepare for downward adjustments to payments. The commenter recommended that CMS consider providing the adjustment results via letter and through the performance feedback if possible, especially in the beginning years of the program.

    Response: As noted in the proposed rule (81 FR 28278), the first announcement will be available no later than December 1, 2018 to meet statutory requirements. We will take these suggestions into consideration as we prepare for the first announcement for the adjustment factors.

    After consideration of the comments we are finalizing the policy as proposed that, if technically feasible, we will include the MIPS payment adjustment factors in the performance feedback. If it is not technically feasible to include the MIPS payment adjustment factors in the performance feedback, we will notify MIPS eligible clinicians through guidance documents or other program communication channels as to when and how this information will be announced prior to the statutory deadline of December 1, 2018. As discussed above, in future years of the program, performance feedback will be available via a CMS designated system, which we intend to be a web-based application. We anticipate the announcement of the adjustment factors will be available via a web-based application as well. Additionally, please see section II.E.8.c. for final policies for requesting a targeted review.

    c. Targeted Review

    Section 1848(q)(13)(A) of the Act requires the establishment of a process under which a MIPS eligible clinician or group may seek an informal review of the calculation of the MIPS payment adjustment factor (or factors) applicable to such MIPS eligible clinician or group for a year.

    We recognize that a principled approach to requesting and conducting a targeted review is required under the MACRA to minimize burdens on MIPS eligible clinicians or groups and ensure transparency under MIPS. We also believe it is important to retain the flexibility to modify MIPS eligible clinicians' or groups' final score or MIPS payment adjustment based on the results of targeted review. This will lend confidence to the determination of the final score and MIPS payment adjustments, as well as provide finality for the MIPS eligible clinician or group after the targeted review is completed. It will also minimize the need for claims reprocessing. We proposed an approach below that outlines the factors we would use to determine whether a targeted review may be conducted. In keeping with the statutory direction that this process be “informal,” we have attempted to minimize the associated burden on the MIPS eligible clinician to the extent possible.

    In accordance with section 1848(q)(13)(A) of the Act, we proposed at § 414.1385 to adopt a targeted review process under MIPS wherein a MIPS eligible clinician or group may request we review the calculation of the MIPS payment adjustment factor under section 1848(q)(6)(A) of the Act and, as applicable, the calculation of the additional MIPS payment adjustment factor under section 1848(q)(6)(C) of the Act applicable to such MIPS eligible clinician or group for a year. Because this review will be limited to the calculation of the MIPS payment adjustment factor and, as applicable, the additional MIPS payment adjustment factor, we anticipate we may find it necessary to review data related to the measures and activities and the calculation of the final score according to the defined methodology. The following are examples of circumstances under which a MIPS eligible clinician or group may wish to request a targeted review. This is not a comprehensive list of circumstances:

    • The MIPS eligible clinician or group believes that measures or activities submitted to us during the submission period and used in the calculations of the final score and determination of the adjustment factors have calculation errors or data quality issues. These submissions could be with or without the assistance of a third party intermediary; or

    • The MIPS eligible clinician or group believes that there are certain errors made by us, such as performance category scores were wrongly assigned to the MIPS eligible clinician or group (for example, the MIPS eligible clinician or group should have been subject to the low-volume threshold exclusion and should not have received a performance category score).

    We believe that a fair targeted review request process requires accessibility to all MIPS eligible clinicians or groups within a reasonable period of time and provides electronic and telephonic communication for questions regarding the targeted review process, as well as for the actual request for review and receipt of the decision on that request. The targeted review process will use the same Quality Payment Program Service Center (referred to as the “help desk” in the proposed rule) support mechanism as is provided for MIPS as a whole.

    We further proposed at § 414.1385 to adopt the following general process for targeted reviews under section 1848(q)(13)(A):

    • A MIPS eligible clinician or group electing to request a targeted review may submit their request within 60 days (or a longer period specified by us) after the close of the data submission period. All requests for targeted review must be submitted by July 31 after the close of the data submission period or by a later date that we specify in guidance.

    • We will provide a response with our decision on whether or not a targeted review is warranted. If a targeted review is warranted, the timeline for completing that review may be dependent on the number of reviews requested (for example, multiple reviews versus a single review by one MIPS eligible clinician or group) and general nature of the review.

    • As this process is informal and the statute does not require a formal appeals process, we will not include a hearing process. The MIPS eligible clinician or group may submit additional information to assist in their targeted review at the time of request. If we or our contractors request additional information from the MIPS eligible clinician or group, the supporting information must be received from the MIPS eligible clinician or group by us or our contractors within 10 calendar days of the request. Non-responsiveness to the request for additional information will result in the closure of that targeted review request, although another review request may be submitted if the targeted review submission deadline has not passed.

    • Since this is an informal review process and given the limitations on review under section 1848(q)(13)(B) of the Act, decisions based on the targeted review will be final, and there will be no further review or appeal.

    If a request for targeted review is approved, the outcome of such review may vary. For example, we may determine that the clinician should have been excluded from MIPS, re-distribute the weights of certain performance categories within the final score (for example, if a performance category should have been weighted at zero), or recalculate a performance category score in accordance with the scoring methodology for the affected category, if technically feasible.

    We requested comments on these proposals.

    The following is a summary of the comments we received regarding our proposals for a targeted review process.

    Comment: Several commenters supported the inclusion of a targeted review process for MIPS eligible clinicians and groups who believe that CMS has assigned them an incorrect final score or MIPS payment adjustment. Another commenter believed that it is critical that MIPS eligible clinicians have a means to request a review of their MIPS payment adjustment factor. The commenter suggested that CMS put into place a process that is physician friendly and does not automatically assume that the physician is incorrect.

    Response: We agree with the commenters that the process should be “physician friendly.” To accomplish this, we have worked to make our process for submitting a targeted review simple and not overly burdensome on MIPS eligible clinicians and groups or their practices. The request for a targeted review will be based on the MIPS eligible clinician's or group's MIPS payment adjustment factor(s) for a year. We recommend that MIPS eligible clinicians and groups review this information prior to submitting a request for targeted review. For CMS to perform a full review, it is critical that the MIPS eligible clinician or group provide supporting documentation demonstrating why they believe their MIPS payment adjustment factor(s) is inaccurate.

    Comment: A few commenters requested that CMS provide a mechanism where a MIPS eligible clinician or group can contest a negative MIPS payment adjustment if the MIPS eligible clinician believes he or she was inappropriately scored under any given MIPS performance category.

    Response: We agree with the commenter and note that for instances where a MIPS eligible clinician believes the underlying data used to calculate a performance category score is inaccurate due to data quality or calculation errors, a targeted review may be requested. MIPS eligible clinicians and groups may submit a request for targeted review if they believe their negative MIPS payment adjustment factor for a year is inaccurate.

    Comment: A few commenters requested that CMS establish a meaningful review and appeals processes. One commenter noted that the proposed targeted review process does not include a hearing or an opportunity for reconsideration.

    Response: We agree and believe the targeted review process we proposed and are finalizing would allow for meaningful review. We note, however, that section 1848(q)(13)(A) of the Act describes the review process as “targeted” and “informal,” and on that basis, we do not believe a hearing or a second level of review/appeals process is warranted; therefore, all decisions under the targeted review process will be final.

    Comment: A few commenters proposed that CMS should establish an appeals process through which MIPS eligible clinicians or groups can challenge measures' applicability if the MIPS eligible clinician or group does not agree that the measures identified by CMS as being applicable to their practice are appropriate.

    Response: We intend to provide detailed performance feedback to the MIPS eligible clinicians or groups that will identify which measures were calculated as part of their final score, as well as which measures were calculated for informational purposes only. We do not anticipate that MIPS eligible clinicians or groups would have their final score derived from measures that were not applicable to them; however, if, after reviewing the feedback provided, the MIPS eligible clinician or group believes CMS made an error, they may file a targeted review request. We refer the commenter to the performance feedback section at section II.E.8.a. of this final rule with comment period, and the MIPS final score methodology in section II.E.6. of this final rule with comment period for more information related to our final policies.

    Comment: One commenter requested an improved targeted review process under MIPS as compared to the current informal review processes under PQRS. The commenter also noted that the communication from CMS notifying clinicians and practices of their payment adjustments under PQRS has been vague and needs to be customized to each MIPS eligible clinician and group. The commenter recommended that notifications informing MIPS eligible clinicians and groups of a MIPS payment adjustment or low final score should also contain information on the reason for the determination. Another commenter requested that CMS work with stakeholders to identify ways to improve the timeliness of the review process by automating processes, providing additional guidance, and seeking additional resources if necessary.

    One commenter stated that clinician experiences with the informal review processes in PQRS and the physician value-based payment modifier have been frustrating. Further, it has been difficult to understand why requests for review were denied. The commenter suggested that CMS create a transparent, effective review process.

    Response: We agree with the commenters that, to the fullest extent possible, communications should be customized to each MIPS eligible clinician or group. We also agree that the targeted review process should be as streamlined and automated as possible. We do note, however, that all targeted review determinations will be made on a case-by-case basis, which significantly limits the potential automation of the process. We appreciate the recommendation for improvements to the targeted review process and will take the recommendations into consideration as we further develop the targeted review processes.

    Additionally, we regret the frustrations stakeholders have had under the PQRS and VM informal review processes. Under those processes, we provided reasons for our decisions about the requests for informal review we received. Under the MIPS targeted review process, we intend to continue to provide MIPS eligible clinicians or groups with our reasons for granting or denying a request for review, and we will make an effort to provide additional clarifications of our reasons, if needed.

    Comment: One commenter noted that improvements in Quality Payment Program Service Center support must be made to provide high quality support. The commenter stated that under PQRS the Help Desk was responsive; however, it often could not provide comprehensive information because it had limited data available. Commenters also requested that CMS adequately staff the Quality Payment Program Service Center during the review period to respond to questions and direct MIPS eligible clinicians and groups through the process. Another commenter requested providing a mechanism other than calling the Quality Payment Program Service Center to obtain answers to potential targeted review questions in order to reduce the number of targeted reviews that will be filed.

    Response: We appreciate requests for improvements to the Quality Payment Program Service Center. We would also like to note that we will continuously review and implement improvements in the future, such as the commenters' recommendations for increases in staffing levels during surge periods such as the targeted review timeframe. In addition to contacting the Quality Payment Program Service Center, we anticipate that the Quality Payment Program Web site (QualityPaymentProgram.cms.gov) will allow MIPS eligible clinicians or groups to receive additional information concerning their targeted reviews, such as the ability to receive status updates. Lastly, in regard to other mechanisms available to obtain additional information, we would encourage the commenter to review all applicable information available on the Quality Payment Program Web site, QualityPaymentProgram.cms.gov (for example, contact information for the Quality Payment Program Service Center, FAQs for targeted review, etc.), as well as join relevant education and outreach meetings, such as the National Provider Calls.

    Comment: Several commenters suggested that CMS increase the time period for MIPS eligible clinicians or groups to respond to CMS's or its contractors' requests for additional information. The commenters noted the current 10 calendar day proposal does not account for the time it takes to process such a request, understand the required actions, and gather requested supporting evidence. Further, the commenters noted that it does not provide room for error and would result in a closed targeted review request. Several commenters suggested that CMS give MIPS eligible clinicians or groups at least 60 days to respond to requests if CMS or its contractors request additional information from the MIPS eligible clinician. A few commenters recommended that CMS allow at least 20 business days for submission of additional information. Another commenter requested that CMS allow 30 business days to respond to requests for additional information. One commenter requested that exceptions be allowed where this timeline may not be feasible. The commenter acknowledged CMS may not be able to broaden these timelines for the first performance period if CMS implements a later start date, but requested that CMS consider whether other program modifications—such as lowering the data submission thresholds, removing certain problematic measures, assessing the number of appeals, and streamlining program requirements—will help reduce the number of delays in processing requests for targeted review.

    Response: We note that when we refer to “days,” we generally mean “calendar days” unless otherwise indicated. We appreciate the commenters' concerns, and based on public comments received we will modify this timeframe from 10 days to 30 days. This response timeframe is designed to create open communication between us and the MIPS eligible clinician or group during the targeted review period, while ensuring that we receive all appropriate supporting documentation so that a timely decision can be rendered. We would like to note that this 30 day timeframe for responding to requests for additional information from CMS is not intended for clarifying questions between CMS and the requestor; rather, this response timeframe is for requests for additional supporting documentation such as copies of claims, supporting extracts from the MIPS eligible clinician's EHR, etc. We also may grant extensions for responding to requests for additional information on a case-by-case basis if we believe there are extenuating circumstances.

    Comment: A few commenters asked for more information to be made available for the targeted review process. The commenters requested a timeframe in which CMS would complete these reviews. One commenter requested clarity on whether these reviews would be completed on a rolling basis, as requests were received, or whether all reviews would take place after the July 31 deadline. The commenters recommended the process for MIPS eligible clinicians or groups to dispute the MIPS final score attributed to them should be straightforward including a point of contact, rubric for reviewing performance, supporting documentation to facilitate reviews, estimated timeframes, and identification of the responsibilities of each party. Further, the commenters stated the burden on MIPS eligible clinicians or groups to collect and present the information needed to dispute a final score should be mitigated. Another commenter also suggested CMS explore multiple strategies for disseminating this information, including FAQs, flowcharts and dedicated Quality Payment Program Service Center personnel.

    Response: We appreciate the request for more information. Requests for targeted review will be processed on a first-come, first-served basis as requests are received. We agree with the commenters that the process for filing a request for targeted review should be straightforward. We intend to publish additional materials such as timelines and toolkits to ease the burden of the targeted review process. Additional information on the targeted review process will be available at QualityPaymentProgram.cms.gov.

    Comment: One commenter recommended that MIPS eligible clinicians or groups have a formalized mechanism by which they can dispute erroneous information in areas such as reported data for measures, performance scores for MIPS categories, and the final score.

    Response: The targeted review process is the mechanism whereby MIPS eligible clinicians can request a review of their MIPS payment adjustment factor, and as applicable their additional MIPS payment adjustment factor. The MIPS payment adjustment factor is determined based on the final score, which includes the scores for each of the MIPS performance categories. Perceived errors related to the MIPS payment adjustment factor calculations can be addressed in the request for targeted review.

    Comment: A few commenters suggested that CMS provide a fair and transparent process for MIPS eligible clinicians or groups to appeal findings in performance feedback. One commenter noted that in general, the power is far greater for CMS to audit and potentially recover money than it is for a MIPS eligible clinician or group to seek an informal review. The commenter believed there should be a more equal power balance between CMS and MIPS eligible clinicians with regard to targeted review.

    Response: We refer readers to section II.E.8.a. of this final rule with comment period for information on policies we are finalizing in regard to performance feedback. We believe the relative performance data we provide through performance feedback will give MIPS eligible clinicians the fair and transparent information they need to track performance and to learn about their quality and resource utilization performance. Our goal is to provide stakeholders with a fair and transparent process for requesting a targeted review. The Quality Payment Program Web site, QualityPaymentProgram.cms.gov, will allow MIPS eligible clinicians or groups to get additional information concerning their targeted reviews.

    Furthermore, we would like to note that for the MIPS, targeted review, data validation, and audits are separate and distinct processes. Requests for targeted review are an optional process available to MIPS eligible clinicians and groups, and a request for targeted review has no bearing on the initiation of a data validation and audit request. Lastly, data validation and audit requests do not initiate targeted reviews.

    Comment: Commenters recommended CMS and its contractors coordinate with third party intermediaries when contacting MIPS eligible clinicians or groups for information under a targeted review. Commenters recommended that CMS continue to allow MIPS eligible clinicians and groups to submit information for informal review without the fear of an additional penalty by CMS or its contractors.

    Response: We agree with the commenters that if a MIPS eligible clinician or group uses a third party intermediary for data submission, the third party intermediary should be able to provide any necessary supporting documentation with the consent of the MIPS eligible clinician or group. We also would like to note that MIPS eligible clinicians or groups will not be penalized for filing a request for targeted review. Depending on the findings of the targeted review, it is possible that MIPS eligible clinicians' or groups' final score may be adjusted, which could potentially lead to a modification to their MIPS payment adjustment.

    Comment: One commenter requested that CMS clarify that a representative of a group may request a targeted review for the entire group and that reviews do not need to be evaluated at the MIPS eligible clinician or group level since MIPS eligible clinicians reporting under the MIPS group reporting option will have the same final score and adjustment factors.

    Response: We agree with the commenter. Authorized representatives of groups may file targeted reviews on behalf of their group members.

    Comment: One commenter requested that CMS, through the notice and comment rulemaking process, work with MIPS eligible clinicians or groups to define what other circumstances would merit a targeted review.

    Response: In the proposed rule (81 FR 28279), we provided examples of instances where a MIPS eligible clinician or group may want to request a targeted review, but as we noted, it was not a comprehensive list of circumstances. We would encourage all MIPS eligible clinicians or groups who believe a targeted review of their MIPS payment adjustment factor or additional MIPS payment adjustment factor is warranted to submit a request for review.

    Comment: A few commenters requested that for the first few years of MIPS the scope of what would be considered an appropriate issue for targeted review should be broadened, and recommended that requests for targeted reviews should be approved for all MIPS eligible clinicians or groups who request them. Another commenter suggested that CMS should not have the ability to deny requests for targeted review.

    Response: Section 1848(q)(13)(A) of the Act constrains the scope of the targeted review process to the calculation of the MIPS payment adjustment factor and the additional MIPS payment adjustment factor. We will not broaden the scope of review beyond what is described in the statute. Additionally, we cannot automatically approve targeted review requests; we must review each request and make a decision based on the information received. We may also deny requests for targeted review if the request is duplicative of another request or if the request for targeted review is outside the statutory parameters or limitations mentioned in this rule.

    Comment: One commenter recommended that CMS provide MIPS eligible clinicians or groups who request a targeted review and are denied a justification for the denial. The commenter further recommended that there should be a second level of review for requests that are denied, and information regarding the number of reviews requested and the number of reviews that are granted each year should be made public.

    Response: If a request for review is denied, we intend to provide a reason for the denial in our communication to the MIPS eligible clinician or group who submitted the request. An example of why a request may be denied is if it is filed after the close of the targeted review period. Section 1848(q)(13)(A) of the Act describes the review process as “targeted” and “informal,” and on that basis, we do not believe a second level of review or an appeals process is warranted. Additionally, since this is a targeted review process and given the limitations on review under section 1848(q)(13)(B) of the Act, decisions based on the targeted review will be final, and there will be no further review or appeal.

    Additionally, we will continue to consider publicly posting information regarding the number of reviews requested and the number of reviews granted each year.

    Comment: One commenter noted that while MIPS eligible clinicians or groups are able to request a targeted review of their MIPS reporting, it is entirely up to CMS's discretion as to whether such a request is granted. The commenter stated that this discretion should not be permitted, especially since, according to the proposed rule, if a MIPS eligible clinician or group is found to have submitted inaccurate data, CMS would reopen, revise, and recoup any resulting overpayment.

    Response: Section 1848(q)(13)(A) of the Act constrains the scope of targeted review to the calculation of the MIPS payment adjustment factor and the additional MIPS payment adjustment factor. We will not grant requests for review outside of the scope specified by the statute. As previously mentioned in this section of the final rule with comment, for the MIPS, targeted review, data validation, and audits are separate and distinct processes. We refer readers to sections II.E.8.e. and II.E.9.f. of this final rule with comment period for more information regarding data validation and audits and auditing of third party intermediaries submitting MIPS data, respectively.

    Comment: One commenter recommended that CMS adjust its appeals process for organizations that serve underserved populations.

    Response: We assume that the commenter is referring to the targeted review process and appreciate the recommendation to adjust the targeted review process for organizations that serve underserved populations. We also recognize that many of the MIPS eligible clinicians or groups who serve these populations may have limited resources.

    Comment: Numerous commenters believed a formal appeals process is needed because an informal review process may not be protective enough to ensure MIPS eligible clinicians and groups have the opportunity to correct misinformation that may adversely impact their Medicare payments. One commenter noted that an appeals process is needed in situations where MIPS eligible clinicians or groups are unfavorably scored at no fault of their own. Another commenter requested that CMS institute a formal appeals process through which MIPS eligible clinicians can submit information to a contractor during a 30-day window. Commenters also recommended an appeals process with two levels of appeal, an expedited informal review and a final reconsideration. Commenters urged CMS to develop an automated and streamlined appeals process.

    Response: We believe the targeted review process affords MIPS eligible clinicians a sufficient opportunity to identify errors related to the calculation of their MIPS payment adjustment factor. Section 1848(q)(13)(A) of the Act describes the review process as “targeted” and “informal,” and on that basis, we do not believe a second level of review or an appeals process is warranted. Additionally, as noted previously, we agree with the commenters that the MIPS targeted review process should be as streamlined and automated as possible. We do note, however, that all targeted review determinations will be made on a case-by-case basis, which significantly limits the potential automation of the process. Lastly, in regard to the commenters' request for two levels of review, while we can appreciate the advantages a two-level review process provides, we believe it would significantly delay the timing of decisions rendered to MIPS eligible clinicians.

    Comment: Several commenters requested clarification on the time period for informal review and receipt of scores. The commenters stated that 60 days is too short to review, understand, and test/audit the data. Some commenters noted that within 60 days after the close of the data submission period, most MIPS eligible clinicians will not know if they should request a review until they receive information about what their MIPS payment adjustment will be. Instead, the commenters recommended a minimum of 90 days after the close of the data submission period.

    Several commenters proposed that the timeframe to request a targeted review should be based on when the MIPS eligible clinician or group receives performance feedback and MIPS payment adjustment factors from CMS, not 60 days from the close of the data submission period. Another commenter suggested that most MIPS eligible clinicians or groups would prefer to request targeted reviews after performance feedback is released or at the beginning of a MIPS payment adjustment year. The commenter further suggested that CMS develop a timeline for targeted review that anticipates the needs of MIPS eligible clinicians or groups.

    A few commenters noted that the 60-day deadline to submit a targeted review request may be inadequate, because MIPS eligible clinicians or groups may not have the data necessary to determine whether a targeted review is needed until performance feedback is received and analyzed. One commenter requested that MIPS eligible clinicians or groups get as much time as needed to submit to the targeted review process.

    Several commenters had concerns about the targeted review process timeline and urged CMS to allow for review requests on a rolling basis from the data submission deadline until a minimum of 90 days after performance feedback and MIPS payment adjustment information is provided. Another commenter believed that CMS should allow MIPS eligible clinicians or groups at least 45 days to review their reports before requesting a targeted review. In addition, the commenter stated that CMS should allow test submissions in order for MIPS eligible clinicians or groups and third party intermediaries to identify any issues prior to final submission.

    Response: Based on numerous commenters' feedback, we are modifying our proposed July 31st deadline for submission of a targeted review request. We are finalizing a 60-day period to submit a request for targeted review, which begins on the day CMS makes available the MIPS payment adjustment factor, and if applicable the additional MIPS payment adjustment factor, for the MIPS payment year and ends on September 30 of the year prior to the MIPS payment year or a later date specified by CMS. We agree with the commenters that, prior to submitting a request for targeted review, MIPS eligible clinicians should have the opportunity to review their MIPS payment adjustment factor(s) and their performance feedback and make an informed decision about whether they want to request a targeted review. As noted above, we intend to publish additional information such as timelines and toolkits on the targeted review process at QualityPaymentProgram.cms.gov.

    In regard to the request to allow for “test submissions,” as noted in section II.E.9. of this final rule with comment period, we intend to provide testing tools prior to the beginning of the submission process to reduce any data errors associated with data submissions.

    Comment: A few commenters stated that CMS should provide MIPS eligible clinicians or groups with two separate deadlines for informal review requests: an initial deadline whereby MIPS eligible clinicians can submit the request in time to have the error corrected before it affects payments, as well as a February 28th final deadline, which would provide MIPS eligible clinicians or groups with an incentive to resolve a majority of payment issues in advance of claims processing while still allowing MIPS eligible clinicians or groups adequate time to correct any inaccurate adjustments noticed in the first few payment periods of a new calendar year. Another commenter recommended that CMS give MIPS eligible clinicians or groups as much time as possible to submit targeted review requests since the adjustments will take time to understand and MIPS eligible clinicians or groups will simultaneously be working to report data for the subsequent performance period. Further, the commenter stated that CMS should amend the informal review request forms to include fields that allow MIPS eligible clinicians to provide unique situational details, as well as upload supporting documentation. The commenter added that no requests for review should be rejected “automatically”; rather, CMS should consider all review requests on a case-by-case basis, taking into account the unique circumstances of each request. Finally, the commenter stated that the agency should make targeted review decisions in a much more transparent manner.

    Response: We appreciate the detailed suggestions for modifications to any request forms and will incorporate them as feasible and appropriate. We do note, however, that it is not feasible to allow for two targeted review periods, nor is it feasible to allow the period to run through February 28 of the MIPS payment year. We are required to begin adjusting MIPS eligible clinicians' or groups' claims for items and services furnished beginning January 1 of the MIPS payment year and cannot hold claims processing, nor is it desirable to re-process a large volume of claims. If we were to have multiple review periods, it would mean significant amounts of claims re-processing for MIPS eligible clinicians, which would ultimately disrupt their practices and create confusion for their patients. Rather, as discussed above, we believe that a 60-day period to request targeted review is sufficient. We also appreciate the recommendations to provide as much time as possible for MIPS eligible clinicians or groups submitting targeted review requests. We believe that the targeted review period of 60 days after the MIPS payment adjustment factors are available provides sufficient opportunity for MIPS eligible clinicians or groups to request a targeted review.

    Comment: Several commenters noted that MIPS eligible clinicians or groups should not be penalized due to data errors outside their control. Further, commenters stated that circumstances when third party intermediaries fail to submit data completely or accurately need to be considered, and MIPS eligible clinicians should not be penalized. Commenters also stated that, based on previous informal review processes, MIPS eligible clinicians or groups were unfairly penalized due to third party intermediary errors. Commenters urged consideration of a two-fold approach to allow groups and MIPS eligible clinicians, who in good faith tried to submit data but were unsuccessful due to a third party intermediary issue, to participate in MIPS. The two-fold approach includes: (1) The ability to resubmit correct data within a reasonable timeframe with evidence of a good faith attempt; and (2) if resubmission is not feasible, a hold harmless policy from any penalty. Another commenter suggested that MIPS eligible clinicians or groups should not be unfairly penalized due to inactions or errors of external parties, including third party intermediaries and CMS itself, and should have the right to file an informal review request for reasons beyond their control at any point throughout the payment year and be retroactively reimbursed for all improper adjustments.

    Response: We understand the concerns regarding third party intermediaries. As a general matter, the contractual agreement or other arrangement between a MIPS eligible clinician or group and a third party intermediary is not within our control. We suggest that MIPS eligible clinicians or groups work with their third party intermediaries to ensure data is submitted timely and accurately. MIPS eligible clinicians or groups may be able to seek recourse against their third party intermediaries if significant issues or problems arise. We would like to note that at this time, we do not allow for resubmission of data. Rather, we use the originally submitted data to evaluate a request for targeted review. We reiterate that MIPS eligible clinicians or groups are ultimately responsible for the data submitted by their third party intermediaries and expect that MIPS eligible clinicians or groups hold their third party intermediaries accountable for accurate reporting. We will continue to explore the operational feasibility of allowing data resubmissions for subsequent years of the MIPS through future notice and comment rulemaking.

    After consideration of the comments, we are finalizing the policies as proposed, with two modifications: the deadline to submit a request for targeted review is changed from July 31 to September 30 of the year prior to the MIPS payment year (or a later date specified by CMS), and the timeframe for a MIPS eligible clinician or group to respond to a request from CMS for additional information in support of their targeted review is changed from 10 days to 30 days.

    Specifically, we are finalizing at § 414.1385(a) that MIPS eligible clinicians or groups may request a targeted review of the calculation of the MIPS payment adjustment factor under section 1848(q)(6)(A) of the Act and, as applicable, the calculation of the additional MIPS payment adjustment factor under section 1848(q)(6)(C) of the Act applicable to such MIPS eligible clinician or group for a year. The process for targeted reviews is:

    (1) MIPS eligible clinicians and groups have a 60-day period to submit a request for targeted review, which begins on the day CMS makes available the MIPS payment adjustment factor, and if applicable the additional MIPS payment adjustment factor, for the MIPS payment year and ends on September 30 of the year prior to the MIPS payment year or a later date specified by CMS.

    (2) CMS will respond to each request for targeted review timely submitted and determine whether a targeted review is warranted.

    (3) The MIPS eligible clinician or group may include additional information in support of their request for targeted review at the time the request is submitted. If CMS requests additional information from the MIPS eligible clinician or group, it must be provided and received by CMS within 30 days of the request. Non-responsiveness to the request for additional information may result in the closure of the targeted review request, although the MIPS eligible clinician or group may submit another request for targeted review before the deadline.

    (4) Decisions based on the targeted review are final, and there is no further review or appeal.
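    Solely for illustration, and not as part of the regulatory text, the following minimal sketch (in Python) shows one reading of the finalized submission window in paragraph (1): the window opens when CMS makes the adjustment factor(s) available, runs 60 days, and closes no later than September 30 of the year prior to the MIPS payment year unless CMS specifies a later date. The release date used in the example is hypothetical.

```python
from datetime import date, timedelta
from typing import Optional

def targeted_review_window(factors_available: date,
                           mips_payment_year: int,
                           later_date_specified: Optional[date] = None):
    """Return (open_date, close_date) for submitting a targeted review request.

    Reflects one reading of Sec. 414.1385(a)(1) as finalized; the rule text,
    not this sketch, controls in case of any difference.
    """
    open_date = factors_available
    close_date = min(open_date + timedelta(days=60),
                     date(mips_payment_year - 1, 9, 30))
    # CMS may specify a later closing date.
    if later_date_specified is not None and later_date_specified > close_date:
        close_date = later_date_specified
    return open_date, close_date

# Hypothetical example: factors for the 2019 MIPS payment year made available
# on August 1, 2018 -> window runs August 1 through September 30, 2018.
print(targeted_review_window(date(2018, 8, 1), 2019))
```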

    d. Review Limitation

    Section 1848(q)(13)(B) of the Act, as added by section 101(c)(1) of the MACRA, provides there shall be no administrative or judicial review under sections 1869 and 1878 of the Act, or otherwise, of the following:

    • The methodology used to determine the amount of the MIPS payment adjustment factor and the amount of the additional MIPS payment adjustment factor and the determination of such amounts;

    • The establishment of the performance standards and the performance period;

    • The identification of measures and activities specified for a MIPS performance category and information made public or posted on the Physician Compare Internet Web site of the CMS; and

    • The methodology developed that is used to calculate performance scores and the calculation of such scores, including the weighting of measures and activities under such methodology.

    We proposed at § 414.1385 to implement these provisions as written in the statute.

    We would reject any requests for targeted review under section 1848(q)(13)(A) of the Act that focus on the areas precluded from review under section 1848(q)(13)(B) of the Act. We requested but did not receive any comments on this proposal.

    Therefore, we are finalizing § 414.1385 as proposed.

    e. Data Validation and Auditing

    Our experience with the PQRS, VM, and Medicare EHR Incentive Programs has demonstrated the value of data validation and auditing as an important part of program integrity, which is necessary to ensure valid, reliable data. The current voluntary data validation process for PQRS and the audit process for the Medicare EHR Incentive Program are multi-step processes. We communicate the types of data elements that may be included for data validation across multiple Web sites and documents. This includes defining specific data that may be abstracted from the CEHRT, as well as other documented records.

    As we begin the MIPS, our strategy is to combine our past program integrity processes, the data validation process used in PQRS and the auditing process used in the Medicare EHR Incentive Program, into one set of requirements for MIPS eligible clinicians and groups, which we refer to as “data validation and auditing.” Based on our need for valid and reliable data on which to base a MIPS eligible clinician's or group's payment, we proposed certain requirements for MIPS eligible clinicians and groups submitting data for the 2017 performance period (see section II.E.4. of the proposed rule) under MIPS. Further, we proposed at § 414.1390 to selectively audit MIPS eligible clinicians on a yearly basis, and that if a MIPS eligible clinician or group is selected for audit, the MIPS eligible clinician or group would be required to do the following in accordance with applicable law:

    • Comply with data sharing requests, providing all data as requested by us or our designated entity. All data must be shared with us or our designated entity within 10 business days or an alternate timeframe that is agreed to by us and the MIPS eligible clinician or group. Data would be submitted via email, facsimile, or an electronic method via a secure Web site maintained by us.

    • Provide substantive, primary source documents as requested. These documents may include: Copies of claims, medical records for applicable patients, or other resources used in the data calculations for MIPS measures, objectives and activities. Primary source documentation also may include verification of records for Medicare and non-Medicare beneficiaries where applicable.

    We proposed that we would monitor MIPS eligible clinicians and groups on an ongoing basis for data validation, auditing, program integrity issues and instances of non-compliance with MIPS requirements. If a MIPS eligible clinician or group is found to have submitted inaccurate data for MIPS, we proposed that we would reopen, revise, and recoup any resulting overpayments in accordance with the rules set forth at §§ 405.980 (reopening rules), 405.982 and 405.984 (revision rules), and 405.370 and 405.373 (recoupment rules). It is important to note that at § 405.980(b)(3) there is an exception whereby we have the authority to reopen at any time for fraud or similar fault. If we reopen the initial determination, we must revise it and send out a notice of the revised determination under § 405.982. We also proposed that we would recoup any payments from the MIPS eligible clinician by the amount of any debts owed to us by the MIPS eligible clinician and, likewise, we would recoup any payments from the group by the amount of any debts owed to us by the group. We also note that we would need to limit each such data validation and audit request to the minimum data necessary to conduct validation.

    We proposed that all MIPS eligible clinicians and groups that submit data to us electronically must attest, to the best of their knowledge, to the accuracy and completeness of any data submitted to us. This attestation would occur prior to any electronic data submissions, via a Web site maintained by us.

    We requested comments on these proposals, and the following is a summary of the comments we received.

    Comment: CMS received several comments regarding the details for audit. Commenters believed the proposed rule provided insufficient detail regarding payer responsibility and recommended that CMS provide greater detail and clarity around the auditing of contracts and any obligations or responsibilities payers will have as part of the auditing process. Other commenters requested that CMS, through the rulemaking process, address additional details about audits such as how audit contractors are compensated, how samples are chosen, and frequency of audits.

    Response: We appreciate the requests to address details of the audit process; however, the process will be addressed through subregulatory guidance and, as noted in the proposed rule, we will selectively audit on an annual basis.

    Comment: Commenters supported CMS's adding an independent audit with an appeals process to ensure due process is upheld.

    Response: We appreciate commenters' support. However, we do want to note that there will not be a separate appeals process for MIPS eligible clinicians outside of the targeted review process described in the preceding section of this final rule with comment period.

    Comment: One commenter believed that it is important to institute rigorous independent (third party) validation and verification procedures to ensure accuracy and completeness of self-reported data. The commenter requested that validation requirements be similar to the requirements placed on Medicare Advantage plans and other government healthcare programs.

    Response: Validation requirements will be provided to a MIPS eligible clinician or group in advance of an audit.

    Comment: One commenter expressed concern that data validation processes will not address key systematic flaws in medical data collection reporting and evaluation such as honest data entry errors or intentional misrepresentation of a MIPS eligible clinician's performance. The commenter further recognized that the volume of data in MIPS may make it difficult to achieve accuracy in the data collection and reporting processes as well.

    Response: We are concerned about data entry errors and their contribution to MIPS eligible clinicians' performance. We intend to thoroughly review all errors that are identified during data validation, with careful consideration given to inadvertent and episodic data entry errors.

    Comment: Commenters supported CMS's proposal to use only one set of auditing requirements for the MIPS program, as commenters believed this would reduce administrative burden and provide a unified approach to MIPS. Commenters also stated support for streamlining the auditing process.

    Response: We thank the commenters and appreciate the support for a singular set of audit requirements and streamlining the auditing process.

    Comment: Commenters believed that direct onsite auditing would be too burdensome for single MIPS eligible clinicians and for small and many midsize primary care organizations. Commenters proposed that no onsite auditing be performed for the first 2 performance periods until CEHRT developers and CMS can publish the details of how such audits will be conducted. Commenters suggested that: (1) Within these first 2 years, MIPS eligible clinicians or groups could volunteer to participate in 'beta' site testing of the proposed audit methodology and be given 'bonus' MIPS points added to their final score; or (2) solo practitioner organizations could be exempt from the onsite auditing requirement indefinitely, provided they show they have current support contracts in effect with the CEHRT developer and/or third party quality organizations that assist the MIPS eligible clinician or group in maintaining compliance with the MIPS program requirements.

    Response: Consistent with upholding the public trust in stewarding the Medicare Trust Fund, all MIPS eligible clinicians and groups that are scored under MIPS are required to respond to all audit requests, and audit requirements will be provided in advance to selected MIPS eligible clinicians and groups. Because the exemptions and other testing or audit methodologies suggested by the commenters are not consistent with equitable scoring, CMS has identified distinct audit requirements for third party entities and for the auditing of eligible clinicians and groups. The audits of third party entities or intermediaries, if employed by an eligible clinician or group, are defined separately at § 414.1400(j). MIPS eligible clinicians or groups will be audited on the provisions of care that contributed to their ability to report on an activity or measure. CMS will further make every effort to reduce reporting burdens for MIPS eligible clinicians and groups during audits.

    Comment: A few commenters suggested that CMS clarify whether it or another entity will be the primary lead on data validation and auditing and the specific documents and data that must be available to pass an audit. Further, commenters requested CMS provide additional details regarding their methods for conducting audits, including what instructions or requirements other entities conducting the audits will be provided.

    Response: Data validation and auditing remain under our control and authority, although as we have done for other programs, we may engage a contractor for certain aspects of the data validation and audit processes. Additional information identifying CMS contracted auditing entities and instructions regarding data validation and auditing will be provided through subregulatory guidance.

    Comment: One commenter strongly recommended that CMS provide significant education to physicians about how the program operates, including the review and auditing procedures.

    Response: We appreciate the recommendation and will provide education to MIPS eligible clinicians or groups about the data validation and audit processes. We will provide audit notices, audit instructions, and examples of data and charts needed for the validation of the provision of care attributable to the measures, objectives, and activities on which the MIPS eligible clinician or group submitted data.

    Comment: One commenter stated that they share provider concerns related to validation of data from other payers.

    Response: We appreciate the concern about data from other payers. Please note that validation of such data will be reviewed on a case-by-case basis, and additional information will be provided through subregulatory guidance. During the transition year, data from other payers will be used for informational purposes to improve future validation efforts. Data from other payers will not be the only source of data used to make final determinations on whether an eligible clinician or group passes or fails an audit in the transition year. As noted previously, data sources for validation and audits include any primary source documents, such as medical charts and other documents, that are attributable to any measure or activity reported by an eligible clinician or group.

    Comment: One commenter agreed that reporting patient data across all payers is important and believes that more time should be given to clinicians to synchronize data for non-Medicare patients.

    Response: We appreciate comments identifying the time needed to synchronize data and the potential reporting burden it may have on MIPS eligible clinicians or groups. We will review time requirements and extensions on a case-by-case basis for MIPS eligible clinicians or groups that require additional time during audits covering non-Medicare patients.

    Comment: One commenter encouraged CMS to be consistently transparent in communicating with groups and in verification of their status. The commenter recommended that groups be able to address any inaccuracies or other issues in a transparent, timely fashion.

    Response: Any MIPS eligible clinicians or groups that are subject to data validation and audits will have the ability to have any questions regarding their status addressed. We will make every effort to communicate the status of any audits conducted with the affected MIPS eligible clinician or group in a transparent and timely fashion.

    Comment: One commenter supported attestation of data submission accuracy and completeness and suggested that attestation be incorporated into the submission process, rather than through a separate portal.

    Response: We proposed that all MIPS eligible clinicians and groups that submit data to us electronically must, to the best of their knowledge, attest to the accuracy and completeness of any data submitted to us (81 FR 28280). We also proposed that this attestation would occur prior to any electronic data submissions, via a Web site maintained by us (81 FR 28280). However, after review of the comments, we are not finalizing this policy as proposed. We agree with the commenter and intend to build any attestation requirements related to data accuracy into the submission process, as technically feasible. We believe building any attestation requirements into the submission process will ease the burden on MIPS eligible clinicians and groups submitting this type of data to us.

    Comment: One commenter agreed with the strategy to combine the data validation process used in PQRS and the auditing process used in the Medicare EHR Incentive Program. The commenter agreed that MIPS eligible clinicians and groups must attest to the accuracy and completeness of any data submitted to CMS prior to any electronic data submissions to CMS.

    Response: Thank you for your support to combine the data validation process used in PQRS and the auditing process used in the Medicare EHR Incentive Program. We intend to establish a unified data validation process across the performance categories for MIPS to conserve time and efforts for eligible clinicians and groups.

    Comment: Several commenters requested that CMS release an audit guide to create more specific guidance prior to the beginning of the performance period so that MIPS eligible clinicians or groups know what documents and what formats would be required for auditing purposes. Further, commenters requested that CMS provide detailed information about how to be prepared for an audit, with descriptions of evidence. Commenters also recommended that CMS have sufficient resources to staff a Quality Payment Program Service Center and develop support materials to guide MIPS eligible clinicians and practice administrators through the review and audit process.

    One commenter expressed concern that audit documentation requirements are not specified in the proposed rule because the commenter believed such requirements in the past were not published until after the beginning of the performance year.

    Response: Audit specifications will be provided through subregulatory guidance, and MIPS eligible clinicians or groups selected for data validation and audits will be provided instructions and examples of documents required. Please note that documents that should be retained for data validation and audit would be primary source data and files, such as medical records and charts, demonstrating the provision of care consistent with what is reported during the performance period that is being validated or audited. Written communication documents that identify CMS contracted auditing entities and audit response instructions will be provided through subregulatory guidance to assist MIPS eligible clinicians and groups through the review and audit process. Please note, the Quality Payment Program Service Center is not the appropriate resource for MIPS eligible clinicians, groups or any staff, such as practice administrators, undergoing data validation and audits. CMS intends to utilize contracted auditing entities with sufficient staff to support and assist any eligible clinician, group, or staff responding to an audit.

    Comment: Commenters suggested a clear delineation of the expected audit and oversight of the program for both MIPS eligible clinicians or groups and third party intermediaries to ensure that everyone is prepared with the proper documentation for audits. Commenters were not able to identify how specifications on reporting are to be conducted.

    Response: Specifications for reporting requirements during the audit will be provided to MIPS eligible clinicians and groups in advance of an audit. Please note that audits of third party intermediaries, if employed by an eligible clinician or group, are defined separately at § 414.1400(j). MIPS eligible clinicians or groups will be audited on the provisions of care that contributed to their ability to report on an activity or measure.

    Comment: A few commenters requested that CMS clearly define audit documentation for each of the MIPS measures and provide specific guidance regarding what data will be required and acceptable for attestation and audit purposes. The commenters suggested that specific audit guidelines and audit preparation instructions be a part of this implementation.

    Response: Since the MIPS measures and activities have numerous and well-defined requirements, we do not believe specific audit documentation requirements for each measure and activity would be useful. Audit documentation will be addressed with MIPS eligible clinicians and groups that are selected for audit. Instructions for completing the audit and examples of documents required, such as medical charts and files and other primary source documents, will be provided to the MIPS eligible clinicians and groups during the initial notice. MIPS eligible clinicians and groups should retain copies of medical records, charts, reports and any electronic data utilized to determine which measures and activities were applicable and appropriate for their scope of practice and patient population for reporting under MIPS for up to 10 years after the conclusion of the performance period to prepare for verification in the event they are selected for an audit. This record retention timeframe aligns with the record retention timeframes already in place for APMs either established in regulation or included in participation agreements. CMS may request any records or data retained for the purposes of MIPS for up to 6 years and 3 months.
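
    The retention and lookback timeframes above amount to simple date arithmetic. As an illustrative aid only, and not as part of the regulatory text, the following sketch (Python, relying on the third-party python-dateutil package) computes both windows for a hypothetical calendar-year performance period; it assumes the 6-year-and-3-month request window is measured from the end of the performance period, a starting point the paragraph above does not expressly state.

        from datetime import date
        from dateutil.relativedelta import relativedelta  # third-party python-dateutil package

        def retention_windows(performance_period_end: date):
            # Retain records for up to 10 years after the conclusion of the performance period.
            retain_until = performance_period_end + relativedelta(years=10)
            # Assumption: the 6-year-and-3-month window in which CMS may request records
            # is measured from the same date; the text above does not state the start point.
            cms_may_request_until = performance_period_end + relativedelta(years=6, months=3)
            return retain_until, cms_may_request_until

        # Hypothetical example: a performance period ending December 31, 2017.
        retain_until, request_until = retention_windows(date(2017, 12, 31))
        # retain_until == date(2027, 12, 31); request_until == date(2024, 3, 31)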

    Comment: Commenters requested that CMS specify in the final rule with comment period what type of audit requests a MIPS eligible clinician will have to respond to and specifically requested clarification on what would be needed to show they have implemented improvement activities.

    Response: We assume that the commenters' request for clarification on “type of audit requests” is seeking clarification on what mode of communication we will use for audit requests. We will use varying mechanisms, which may include mail, email or phone calls. MIPS eligible clinicians or groups will have to respond to all data validation and audit requests. Please note data validation and audits of the quality performance category for the transition year will examine a set of medical charts to verify that the encounters were reported accurately and meet quality measurement requirements. Data validation and audits of the other performance categories will be conducted in future years and additional information on data validation and audits of such categories will be provided through subregulatory guidance.

    Comment: Commenters requested that prior to performing any audit for data validation, CMS provide MIPS eligible clinicians, facilities, and Medicare Administrative Contractors with guidance on how MIPS eligible clinicians or groups and facilities should document MIPS eligible clinicians' performance in source documents.

    Response: Guidance on how primary source documents will be used in data validation and audits will be provided to selected MIPS eligible clinicians or groups in advance of an audit.

    Comment: Several commenters supported auditing, but suggested that CMS set clear deadline expectations on both sides, and suggested that a 10-business-day deadline for MIPS eligible clinicians or groups may not be feasible in all circumstances. The commenters suggested limiting burden on MIPS eligible clinicians or groups by allowing 30 days after a request is made and identifying the methodology to select MIPS eligible clinicians or groups for audit. Another commenter requested 45 days to respond to audit requests, while another commenter recommended 20 business days for a MIPS eligible clinician and 30 business days for a group. One commenter noted that this would prevent practices from being inadvertently penalized and remove the possibility that additional data requests would be inappropriately used by contractors as a tool to “manage” their workload. Another commenter recommended that, absent any suspicion of wrongdoing, the timeframe for audits should be extended to 30 days, and CMS should consider reimbursement for time and effort required to meet the data submission requirements.

    Response: We appreciate the commenters' concerns and recommendations and note that we are revising the proposed 10-business-day timeframe for compliance with data sharing requests to a 45-day timeframe. We note that when we refer to “days,” we generally mean “calendar days,” unless otherwise specified. We believe this timeframe is sufficient because it aligns with the post-payment audit timeframe employed by the Center for Program Integrity at CMS. We note that the timeframe applies equally to MIPS eligible clinicians and groups to maintain program consistency, and the 45-day timeframe extends beyond any of the timeframes recommended in the public comments. We believe that a more generous timeframe will enable MIPS eligible clinicians and groups both to satisfactorily comply with data sharing requests and to fully complete an audit in a manner that is consistent with the practices already established in the Medicare program. Please note that selection of those subject to data validation and audits for the transition year will be based on a random sample of both MIPS eligible clinicians and groups, without consideration for suspicions of wrongdoing. We will take the commenter's suggestion to provide reimbursement for time and effort required to meet data submission requirements into consideration. Details regarding any reimbursement will be communicated through subregulatory guidance.

    Comment: Commenters urged CMS to initially take an educational as opposed to a punitive approach to audits and reviews, allowing CMS to collect and analyze “common errors” and publish “lessons learned” about the MIPS program so MIPS eligible clinicians, medical societies, and others can improve the chances of success under MIPS. Another commenter suggested that CMS approach auditing of this new program as an education tool to correct past mistakes.

    Response: We appreciate the recommendation for a gradual audit process and enhanced education for MIPS eligible clinicians or groups. We also appreciate the recommendation to publish common errors and lessons learned from data validation and audits. We will provide examples of correct and incorrect documentation needed to educate and instruct MIPS eligible clinicians or groups identified and selected for audits and data validation and we will consider publishing additional documents in future years as the program matures. We also appreciate the feedback to use data validation and audits as an educational tool. Please note that during the transition year, the data validation and audit process will include education and support for MIPS eligible clinicians and groups selected for an audit.

    Comment: Commenters had concerns with the proposal to “reopen, revise, and recoup any resulting overpayments” if a MIPS eligible clinician or group is found to have submitted inaccurate data for MIPS. Further, several commenters stated that audits and reviews should encourage education and the ability to learn from past mistakes rather than penalizing and recouping payments.

    Response: We acknowledge the request to not make audits punitive. However, the proposal to pursue reopening and recoupment of payments is supported by our current authority to reopen and revise payment determinations, and to recoup any Medicare overpayments resulting from the submission of inaccurate data. We note that any recoupments of funds are not penalties; they are payment corrections. We routinely pursue recoupments based on identified overpayments that have been made.

    Comment: One commenter recommended that CMS create an audit report that would detail areas in which MIPS eligible clinicians and groups did well and those where improvement was needed. The commenter further suggested that the report be organized by medical specialty and practice size.

    Response: We cannot determine if the commenter is requesting a public audit report or a private audit report created for those MIPS eligible clinicians or groups that are selected for the audit. In the latter scenario, we intend to provide a report of specific feedback to those MIPS eligible clinicians or groups that are selected for an audit based on the result of the audit and our findings. In the former scenario, we appreciate the benefit of a public audit report and will therefore take this recommendation into consideration. We will further consider organizing the report, if provided, in a manner that most appropriately informs MIPS eligible clinicians and groups and will consider organizing the information by specialty and practice size.

    Comment: One commenter suggested that CMS allow any group which, upon audit, has submitted inaccurate data to correct and resubmit the data before any revisions or recoupments would occur.

    Response: We appreciate the request for groups selected for audit to have the ability to resubmit. However, please note that resubmission of data for recalculation during an audit is not technically feasible at this time. Furthermore, requests for recalculation for data errors would require a targeted review request, which will operationally occur before any audit and data validation processes begin. Since data validation and audits occur separately and after the completion of the performance period, the reporting period, and the targeted review period, we expect any MIPS eligible clinician or group to provide data to CMS that is as accurate and complete as possible.

    Comment: Commenters supported a process whereby the MIPS scoring and penalties levied accurately reflect the true practice environment, but still had questions about the audit process.

    Response: Thank you for your support of the scoring process and penalties under MIPS. We recognize commenters have questions about the audit process. Please note that audit notification, materials, examples and instructions will be provided to any MIPS eligible clinician or group selected by CMS for data validation and audit.

    After consideration of the comments, we are finalizing the data validation policies as revised in this final rule with comment period. Specifically, we are finalizing § 414.1390 as proposed to selectively audit MIPS eligible clinicians and groups on a yearly basis, and that if a MIPS eligible clinician or group is selected for audit, the MIPS eligible clinician or group will be required to do the following in accordance with applicable law and timelines CMS establishes:

    • Comply with data sharing requests, providing all data as requested by CMS or our designated entity. All data must be shared with CMS or our designated entity within 45 days of the data sharing request, or an alternate timeframe that is agreed to by CMS and the MIPS eligible clinician or group. Data will be submitted via email, facsimile, or an electronic method via a secure Web site maintained by CMS.

    • Provide substantive, primary source documents as requested. These documents may include: copies of claims, medical records for applicable patients, or other resources used in the data calculations for MIPS measures, objectives and activities. Primary source documentation also may include verification of records for Medicare and non-Medicare beneficiaries where applicable.

    We are also finalizing that we will monitor MIPS eligible clinicians and groups on an ongoing basis for data validation, auditing, program integrity issues and instances of non-compliance with MIPS requirements. If a MIPS eligible clinician or group is found to have submitted inaccurate data for MIPS, we are finalizing that we would reopen and revise the determination in accordance with the rules set forth at §§ 405.980 (reopening rules), 405.982 and 405.984 (revising rules); and we would collect any overpayment in accordance with §§ 405.370 and 405.373 (recoupment rules). It is important to note that at § 405.980(b)(3) there is an exception whereby we have the authority to reopen at any time for fraud or similar fault. If we reopen the initial determination, we must revise it and send out a notice of the revised determination under § 405.982. We are also finalizing our approach to recoup improper payments from the MIPS eligible clinician by the amount of any debts owed to us by the MIPS eligible clinician and, likewise, we would recoup any payments from the group by the amount of any debts owed to us by the group. We also note that we would limit each data validation and audit request to the minimum data necessary to conduct validation. Based on comments received, we intend to use data validation and audits as an educational opportunity for MIPS eligible clinicians and groups; therefore, during the transition year, the data validation and audit process will include education and support for MIPS eligible clinicians and groups selected for an audit.

    Lastly, we are finalizing that all MIPS eligible clinicians and groups that submit data to us electronically must attest to the best of their knowledge that the data submitted to us is accurate and complete.

    9. Third Party Data Submission

    One of our strategic goals in developing MIPS includes developing a program that is meaningful, understandable, and flexible for participating MIPS eligible clinicians. One way we believe this will be accomplished is through flexible reporting options to accommodate different practices and make measurement meaningful. We believe this goal can be accomplished by allowing MIPS eligible clinicians the flexibility of using third party intermediaries to collect or submit data on their behalf. In this section, we are specifying the criteria that must be met to be approved by CMS as a third party intermediary. For purposes of this section, the use of the term “third party” refers to a qualified registry, a QCDR, a health IT vendor that obtains data from a MIPS eligible clinician's CEHRT, or a CMS-approved survey vendor.

    In the PQRS program, quality measures data may be collected or submitted by third party vendors on behalf of an individual EP or group by: (1) A registry; (2) a QCDR; (3) an EHR vendor that obtains data from an EP's CEHRT; or (4) a CMS-approved survey vendor. We proposed at § 414.1400(a)(1) that MIPS data may be submitted by third party intermediaries on behalf of a MIPS eligible clinician or group by: (1) A qualified registry; (2) a QCDR; (3) a health IT vendor; or (4) a CMS-approved survey vendor. Furthermore, we proposed at § 414.1400(a)(3) that third party intermediaries must meet all the criteria designated by us as a condition of their qualification or approval to participate in MIPS as a third party intermediary. As proposed at § 414.1400(a)(3)(ii), all submitted data must be submitted in the form and manner specified by us.

    The following is a summary of the comments we received regarding our proposed definition of third party data intermediaries.

    Comment: Some commenters suggested that registries are foundational to population health management, as registries foster care improvement, inform participants on needed focus areas, highlight performance areas for improvement, and identify which patients require interventions. The commenters also stated that registries are already in use by ACOs, and that under the proposal, MIPS eligible clinicians may satisfy the proposal's quality data reporting criteria by using data that is already being submitted to a clinical registry or to an ACO. Thus, the commenters expressed support for QCDR use under the proposal, as this reporting mechanism enhances the importance of existing registries that already seek to deliver high quality and high value care, and additionally streamlines reporting criteria for MIPS eligible clinicians.

    Response: We appreciate the commenters' support for inclusion of qualified registries.

    Comment: Some commenters requested that CMS recognize that changes to QCDRs, registries, and EHRs require significant financial resources and time to plan, incorporate, and test. The commenters added there must be ample notice in the rulemaking process for QCDRs, registries, and developers to plan and adequately meet these changes. The commenters encouraged CMS to establish a program update calendar to identify annual data management updates or reprogramming that is recurring and make an effort to adjust regulatory implementation dates to spread out the data collection, modifications, or updates so that they do not all occur during the last quarter of the calendar year.

    Response: We aim to minimize changes to criteria whenever possible because we understand that implementing these changes can in certain instances be a lengthy process. However, at this time, we cannot provide a specific update calendar. We will adopt changes to the criteria through future rulemaking as necessary. We anticipate that as we gain experience under the MIPS, we will be able to establish a schedule or cycle of updates through future rulemaking.

    Comment: Another commenter recommended that small practices be allowed to use PPRNET (Primary Care Practice Research Network—a practice based research network and QCDR) to help them submit measures for MIPS, and possibly other metrics.

    Response: MIPS eligible clinicians may choose from several data submission mechanisms. If PPRNET satisfies the QCDR criteria and is approved by CMS as a third party intermediary, then it will be a data submission option available for those MIPS eligible clinicians who choose to use it.

    After consideration of the comments, we are finalizing our policies as proposed. Specifically, we are finalizing at § 414.1400(a)(1) that MIPS data may be submitted by third party intermediaries on behalf of a MIPS eligible clinician or group by: (1) A qualified registry; (2) a QCDR; (3) a health IT vendor; or (4) a CMS-approved survey vendor. Additionally, we are finalizing at § 414.1400(a)(3) that third party intermediaries must meet all the criteria designated by CMS as a condition of their qualification or approval to participate in MIPS as a third party intermediary. Lastly, we are finalizing at § 414.1400(a)(3)(ii), all submitted data must be submitted in the form and manner specified by CMS.

    a. Qualified Clinical Data Registries (QCDRs)

    Section 1848(q)(1)(E) of the Act requires the Secretary to encourage the use of QCDRs under section 1848(m)(3)(E) of the Act in carrying out MIPS. Section 1848(q)(5)(B)(ii)(I) of the Act requires the Secretary, under the final score methodology, to encourage MIPS eligible clinicians to report on applicable measures for the quality performance category through the use of certified EHR technology and QCDRs. Section 1848(q)(2)(B)(iii)(II) of the Act requires that the improvement activities subcategories specified by the Secretary include population management, such as monitoring health conditions of individuals to provide timely health care interventions or participation in a QCDR. Section 1848(q)(12)(A)(ii) of the Act requires the Secretary to encourage the provision of performance feedback through QCDRs.

    Section 1848(m)(3)(E)(i) of the Act requires the Secretary to establish requirements for an entity to be considered a QCDR, which must include a requirement that the entity provide the Secretary with such information, at such times, and in such manner, as the Secretary determines necessary to carry out section 1848(m) of the Act. Section 1848(m)(3)(E)(iv) of the Act requires the Secretary to consult with interested parties in carrying out section 1848(m)(3)(E) of the Act.

    Currently, the QCDR reporting mechanism provides a method to satisfy PQRS requirements based on satisfactory participation. We proposed that entities interested in becoming a QCDR for MIPS go through a qualification process. This includes the QCDR meeting the definition of a QCDR, self-nomination criteria, and the criteria of a QCDR, including the deadlines listed below. This qualification process allows us to ensure that the entity has the capability to successfully report MIPS eligible clinicians' data to us and allows for review and approval of the QCDR's proposed non-MIPS quality measures. We intend to compile and post a list of entities that we “qualify” to submit data to us as a QCDR for purposes of MIPS on a Web site maintained by us.

    Section 1848(q)(1)(E) of the Act encourages the use of QCDRs in carrying out the MIPS. Although section 1848(q)(5)(B)(ii)(I) of the Act specifically requires the Secretary to encourage MIPS eligible clinicians to use QCDRs to report on applicable measures for the quality performance category and section 1848(q)(12)(A)(ii) of the Act requires the Secretary to encourage the provision of performance feedback through QCDRs, the statute does not specifically address usage of QCDRs for the other MIPS performance categories. Although we could limit the usage of QCDRs to assessing the quality performance category under MIPS and providing performance feedback, we believe it would be less burdensome for MIPS eligible clinicians if we expand the QCDRs' capabilities. By allowing QCDRs to report on the quality, advancing care information, and improvement activities performance categories, we would alleviate the need for individual MIPS eligible clinicians and groups to use a separate mechanism to report data for these performance categories. It is important to note that no data will need to be reported for the cost performance category since these measures are administrative claims-based. Therefore, we proposed at § 414.1400(a)(2) to expand QCDRs' capabilities by allowing QCDRs to submit data on measures, activities, or objectives for any of the following MIPS performance categories:

    • Quality;

    • Improvement activities; or

    • Advancing care information, if the MIPS eligible clinician or group is using CEHRT.

    We believe this approach would permit a single QCDR to report on the quality, advancing care information, and improvement activities performance category requirements for MIPS and should mitigate the risks, costs, and burden of MIPS eligible clinicians having to report multiple times to meet the requirements of MIPS.

    We proposed to define a QCDR at § 414.1305 as a CMS-approved entity that has self-nominated and successfully completed a qualification process to determine whether the entity may collect medical or clinical data for the purpose of patient and disease tracking to foster improvement in the quality of care provided to patients. Examples of the types of entities that may qualify as QCDRs include, but are not limited to, regional collaboratives and specialty societies using a commercially available software platform, as appropriate.

    The following is a summary of the comments we received regarding our proposals on the definition of a QCDR and the performance categories for which a QCDR is allowed to submit data on behalf of MIPS eligible clinicians.

    Comment: Several commenters agreed with the proposal to allow third party entities, such as QCDRs, to submit data for the categories of quality, advancing care information, and improvement activities. The commenters believed allowing MIPS eligible clinicians to use a single, third party data submission method reduces the administrative burden on MIPS eligible clinicians, facilitates consolidation and standardization of data from disparate EHRs and other systems, and enables the third parties to provide timely, actionable feedback to MIPS eligible clinicians on opportunities for improvement in quality and value. Some commenters requested that CMS quickly release additional guidance to QCDRs regarding the capabilities that would be necessary to report the range of performance categories for the transition year of MIPS. One commenter believed this would allow for streamlined data submission and more complete feedback to MIPS eligible clinicians through QCDRs.

    Response: We intend to finalize our proposal that QCDRs will have the flexibility to submit data for any of the following performance categories: Quality, improvement activities, and advancing care information. In addition, we intend to release additional guidance to third party intermediaries regarding the submission standards that QCDRs would need to comply with for data submissions across the performance categories. We will publish this information at QualityPaymentProgram.cms.gov prior to the beginning of the performance period.

    Comment: Another commenter was pleased that CMS understood the potential and value of QCDRs and included QCDRs as a reporting option across several of the MIPS components and improvement activities. For those performance categories where QCDR reporting is an option, such as improvement activities and advancing care information performance categories, the commenter requested that CMS outline specifics as soon as possible to ensure registry technology vendors can meet the needs of MIPS eligible clinicians selecting the MIPS pathway.

    Response: We thank the commenter for their support of our proposals. In addition, we intend to release additional guidance to third party intermediaries regarding the submission standards that QCDRs would need to comply with for data submissions across the performance categories. We will publish this information at QualityPaymentProgram.cms.gov prior to the beginning of the performance period.

    Comment: A few commenters did not support the criterion that QCDRs must have the capability to submit for all performance categories. The commenters believed that, while this could reduce burden for MIPS eligible clinicians, choosing to support one or more performance categories is a business decision that should not be regulated, and that such a requirement would limit the MIPS eligible clinician's choice of QCDRs in the early years of MIPS, as not all third party entities would necessarily be able to meet the criteria for submission for all three performance categories.

    Response: We would like to explain that we did not propose to require that QCDRs must have the capability to submit data for all performance categories, rather we proposed that they would have the option to do so. We agree with the commenters that requiring all QCDRs to be able to submit data for all performance categories is premature until stakeholders such as third party entities gain experience under the MIPS.

    Comment: Another commenter agreed with and requested that CMS reinforce its definition of a QCDR as one that collects medical or clinical data for the purpose of patient and disease tracking to foster improvement in the quality of care provided to patients. The commenter requested that CMS notify QCDRs as soon as possible if a non-MIPS QCDR measure will not be renewed in future years for any reason.

    Response: We appreciate the commenter's support for our proposed definition of a QCDR. We would like to explain that QCDRs that have been previously approved under the PQRS program will need to self-nominate and confirm their ability to meet the requirements of a QCDR under the MIPS. We are not able to “grandfather” any existing QCDRs over from the PQRS program to the MIPS program. We do anticipate, however, that the overwhelming majority of QCDRs that were able to meet the requirements under PQRS will be able to meet the requirements under MIPS. Furthermore, we anticipate that the non-PQRS measures that QCDRs had approved under PQRS would, in most instances, be approved as non-MIPS measures, if the QCDR chooses to submit these measures for approval to CMS.

    After consideration of the comments received, we are finalizing the QCDR policies as proposed. Specifically, we are finalizing at § 414.1400(a)(2) to expand QCDRs' capabilities by allowing QCDRs to submit data on measures, activities, or objectives for any of the following MIPS performance categories:

    • Quality;

    • Improvement activities; or

    • Advancing care information, if the MIPS eligible clinician or group is using CEHRT.

    Additionally, we are finalizing to define a QCDR at § 414.1305 as a CMS-approved entity that has self-nominated and successfully completed a qualification process to determine whether the entity may collect medical or clinical data for the purpose of patient and disease tracking to foster improvement in the quality of care provided to patients. Examples of the types of entities that may qualify as QCDRs include, but are not limited to, regional collaboratives and specialty societies using a commercially available software platform, as appropriate.

    (1) Establishment of an Entity Seeking To Qualify as a QCDR

    We proposed at § 414.1400(c) the establishment of a QCDR entity is required as follows: for an entity to become qualified for a given performance period as a QCDR, the entity must be in existence as of January 1 of the performance period for which the entity seeks to become a QCDR (for example, January 1, 2017, to be eligible to participate for purposes of performance periods beginning in 2017). The QCDR must have at least 25 participants by January 1 of the performance period. These participants do not need to be using the QCDR to report MIPS data to us; rather, they need to be submitting data to the QCDR for quality improvement.

    The following is a summary of the comments we received regarding the establishment of an entity seeking to qualify as a QCDR.

    Comment: Some commenters proposed that, in lieu of having an establishment requirement that aligns with the start of the performance period, CMS should consider options for making other aspects of the eligibility criteria more flexible.

    Response: We believe that a QCDR should be established and collecting quality data at the time of self-nomination. Having a process to accept data and report it by the time the entity self-nominates reduces the chance that the entity will not be able to successfully submit their MIPS eligible clinician's data during the data submission period.

    Comment: One commenter supported the proposal for the establishment of a QCDR entity, particularly that participants do not need to be using the QCDR to report MIPS data to CMS, but need to be submitting data to the QCDR for quality improvement. The commenter believed this would allow registries hosted by non-physician clinician groups to obtain QCDR certification despite lack of inclusion of such clinicians in the definition of a MIPS eligible clinician in the initial years of MIPS.

    Response: We agree and thank the commenter for their support. We note that registries hosted by non-physician clinician groups may satisfy the QCDR criteria and be approved by CMS as a third party intermediary.

    Comment: Some commenters requested clarification regarding what defines a participant (for example, reporting entity (group or clinician) or individual clinicians) in the requirement that a QCDR needs to have 25 participants by January 1 of the performance period.

    Response: We would like to note that a “participant” is a MIPS eligible clinician. Therefore, we require the QCDR to have 25 MIPS eligible clinicians by January 1 of the performance period to become qualified for a given performance period as a QCDR.

    Comment: Other commenters stated that the expectations for QCDR self-nomination may be unrealistic for new endeavors. Specifically, requiring a QCDR to have at least 25 participants by January 1 of the performance period assumes the existence of the registry prior to self-nominating as a QCDR. Consequently, a registry would have to be in existence, based on its own structural requirements and specifications, before it could self-nominate. The commenters appreciated the decision to require self-nomination on an annual basis and supported the information criteria that CMS proposed for self-nomination. The commenters believed the data submission criteria outside the self-nomination process are too restrictive and should be revised, and that the final rule with comment period should include appeals, grievance, and corrective action processes.

    Response: We require at least 25 participants in an effort to ensure that potential QCDRs have experience in data collection and calculation capabilities. For appeals, MIPS has a targeted review process; please refer to section II.E.8.c. of this final rule with comment period for more information.

    Comment: One commenter requested that CMS consider potential approval of the Scientific Registry of Transplant Recipients (SRTR) as a QCDR. The commenter also requested CMS work with the transplant community and assist in overcoming current barriers related to QCDR technical requirements. Other commenters requested that when CMS compiles the list of entities qualified to submit data as a QCDR, that CMS accept the Indian Health Service (IHS) Resource and Patient Management System (RPMS) as a qualified entity. The commenter requested CMS work with IHS to ensure that the RPMS is capable of meeting MIPS reporting criteria.

    Response: We would like to explain that while we will consider all entities that seek to qualify as a QCDR, we cannot conclude that a particular entity is capable of meeting our criteria in advance of the qualification process. It is important to note that an entity must meet the criteria in § 414.1400(c) and be approved by CMS to qualify as a QCDR. We will develop further subregulatory guidance, including through tribal consultation, to address issues raised by entities that want to be QCDRs.

    After consideration of the comments received on the establishment of an entity seeking to qualify as a QCDR we are finalizing the policies as proposed. Specifically, we are finalizing at § 414.1400(c) the establishment of a QCDR entity is required as follows: for an entity to become qualified for a given performance period as a QCDR, the entity must be in existence as of January 1 of the performance period for which the entity seeks to become a QCDR (for example, January 1, 2017, to be eligible to participate for purposes of performance periods beginning in 2017). The QCDR must have at least 25 participants by January 1 of the performance period. These participants do not need to be using the QCDR to report MIPS data to us; rather, they need to be submitting data to the QCDR for quality improvement.
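
    For illustration only, and not as part of the regulatory text, the establishment criteria finalized above could be checked as in the following sketch; the function and parameter names are hypothetical.

        from datetime import date

        MIN_PARTICIPANTS = 25  # participants submitting data to the QCDR for quality improvement

        def meets_establishment_criteria(entity_established_on: date,
                                         participants_on_jan_1: int,
                                         performance_period_year: int) -> bool:
            # The entity must be in existence as of January 1 of the performance period
            # and must have at least 25 participants by that date.
            jan_1 = date(performance_period_year, 1, 1)
            return entity_established_on <= jan_1 and participants_on_jan_1 >= MIN_PARTICIPANTS

        # Hypothetical example for the 2017 performance period:
        # meets_establishment_criteria(date(2016, 6, 1), 30, 2017) evaluates to True.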

    (2) Self-Nomination Period

    For the 2017 performance period we proposed at § 414.1400(b) a self-nomination period from November 15, 2016 until January 15, 2017. For future years of the program, starting with the 2018 performance period, we proposed to establish the self-nomination period from September 1 of the prior year until November 1 of the prior year. Entities that desire to qualify as a QCDR for the purposes of MIPS for a given performance period would need to self-nominate for that year and provide all information requested by us at the time of self-nomination. Having qualified as a QCDR in a prior year does not automatically qualify the entity to participate in MIPS as a QCDR in subsequent performance periods. For example, a QCDR may choose not to continue participation in the program in future years, or the QCDR may be precluded from participation in a future year due to multiple data or submission errors as noted below. Finally, QCDRs may want to update or change the measures or services or performance categories they intend to provide. As such, we believe an annual self-nomination process is the best process to ensure accurate information is conveyed to MIPS eligible clinicians and accurate data is submitted to MIPS.

    We proposed to require other information (described below) of QCDRs at the time of self-nomination. If an entity becomes qualified as a QCDR, they will need to sign a statement confirming this information is correct prior to listing it on their Web site. Once we post the QCDR on our Web site, including the services offered by the QCDR, we will require the QCDR to support these services or measures for its clients as a condition of the entity's qualification as a QCDR for purposes of MIPS. Failure to do so will preclude the QCDR from participation in MIPS in the subsequent year.

    The following is a summary of the comments we received regarding the self-nomination period.

    Comment: A few commenters agreed with the self-nomination period for the 2017 performance period.

    Response: We appreciate the support from the commenters.

    Comment: Some commenters opposed the proposed deadlines for QCDR self-nomination of January 15, 2017 for the 2017 performance period and November 1 for the 2018 performance period and beyond. Specifically, the commenters requested that, if CMS finalizes a performance period for the 2019 MIPS payment adjustment of January 1, 2017 through December 31, 2017, CMS extend the QCDR self-nomination deadline to March 31, 2017. Other commenters opposed the dates proposed for QCDR self-nomination given the timing of when the regulations will be finalized and recommended extending the deadline to 3 months following the start of the performance period for the 2019 MIPS payment adjustment. Another commenter requested that CMS extend the self-nomination deadline to February 28, 2017. The commenter stated that QCDRs need additional time to determine whether their systems will work with or can be updated to accommodate new MIPS requirements.

    Response: We acknowledge the short timeline, but our intention was to complete the QCDR approval process as early as possible to allow MIPS eligible clinicians the most time in choosing the QCDR they intend to use. We note that, while QCDRs that have previously been approved under the PQRS program do need to self-nominate for consideration under the MIPS, we anticipate that the overwhelming majority of the existing QCDRs will be able to meet the requirements finalized here. The requirements to qualify as a QCDR have been fairly consistent since we started using QCDRs under the PQRS program.

    After consideration of the comments received on the self-nomination period we are finalizing the policies as proposed. Specifically, for the 2017 performance period we are finalizing at § 414.1400(b) a self-nomination period from November 15, 2016 until January 15, 2017. For future years of the program, starting with the 2018 performance period, the self-nomination period must occur from September 1 of the prior year until November 1 of the prior year. Entities that desire to qualify as a QCDR for the purposes of MIPS for a given performance period would need to self-nominate for that year and provide all information requested by us at the time of self-nomination.

    We are finalizing our proposal to require other information (described below) of QCDRs at the time of self-nomination. All self-nomination information must be submitted to [email protected]. If technically feasible, we will accept self-nomination information via a web-based tool; we will provide any further information on the web-based tool at QualityPaymentProgram.cms.gov. If an entity becomes qualified as a QCDR, they will need to sign a statement confirming this information is correct prior to listing it on their Web site. Once we post the QCDR on our Web site, including the services offered by the QCDR, we will require the QCDR to support these services or measures for its clients as a condition of the entity's qualification as a QCDR for purposes of MIPS. Failure to do so will preclude the QCDR from participation in MIPS in the subsequent year.

    (3) Information Required at the Time of Self-Nomination

    We proposed that a QCDR must provide the following information to us at the time of self-nomination to ensure that QCDR data is valid:

    • Organization Name (Specify Sponsoring Organization name and software vendor name if the two are different. For example, a specialty society in collaboration with a software vendor).

    • MIPS performance categories (that is, categories for which the entity is self-nominating. For example, quality, advancing care information, or improvement activities).

    • Performance Period.

    • Vendor Type (for example, qualified clinical data registry).

    • Provide the method(s) by which the entity obtains data from its customers for each performance category for which it is approved: claims, web-based tool, practice management system, CEHRT, other (please explain). If a combination of methods (Claims, web-based tool, Practice Management System, CEHRT, or other) is utilized, the entity should state which method(s) it utilizes to collect data (for example, performance numerator and denominator).

    • Indicate the method the entity will use to verify the accuracy of each TIN/NPI it is intending to submit (for example, National Plan and Provider Enumeration System (NPPES), CMS claims, tax documentation).

    • Describe the method that the entity will use to accurately calculate performance rates for quality measures based on the appropriate measure type and specification. For composite measures or measures with multiple performance rates, the entity must provide us with the methodology the entity uses to calculate these composite measures and measures with multiple performance rates. The entity should be able to report to us a calculated composite measure rate if applicable.

    • Describe the method that the entity will use to accurately calculate performance data for improvement activities and advancing care information based on the appropriate parameters or activities.

    • Describe the process that the entity will use for completion of a randomized audit of a subset of data prior to the submission to us (for all performance categories the QCDR is submitting data on, that is, quality, improvement activities, and advancing care information, as applicable). Periodic examinations may be completed to compare patient record data with submitted data or ensure MIPS quality measures or other performance category (improvement activities, advancing care information) activities were accurately reported and performance calculated based on the appropriate measure specifications (that is, accuracy of numerator, denominator, and exclusion criteria) or performance category requirements.

    • Provide information on the entity's process for data validation for both individual MIPS eligible clinicians and groups within a data validation plan. For example, for individuals it is encouraged that 3 percent of the TIN/NPIs submitted to us by the QCDR be sampled, with a minimum sample of 10 TIN/NPIs and a maximum sample of 50 TIN/NPIs. For each TIN/NPI sampled, it is encouraged that 25 percent of the TIN/NPI's patients (with a minimum sample of five patients and a maximum sample of 50 patients) be reviewed for all measures applicable to the patient (an illustrative sketch of these sampling parameters follows this list).

    • Provide the results of the executed data validation plan by May 31 of the year following the performance period. If the results of the QCDR's data validation reveal inaccuracy or low compliance, the QCDR must provide an improvement plan to CMS. Failure to implement improvements may result in the QCDR being placed in a probationary status or being disqualified from future participation.

    • For non-MIPS quality measures, if the measure is risk-adjusted, the QCDR is required to provide details to us on its risk adjustment methodology (risk adjustment variables and applicable calculation formula) at the time of the QCDR's self-nomination. The QCDR must submit the risk-adjusted results to us when submitting a risk-adjusted measure on behalf of the QCDR's MIPS eligible clinicians for the performance period.
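
    For illustration only, and not as part of the regulatory text, the following sketch shows the encouraged sampling parameters from the data validation plan item above (3 percent of TIN/NPIs, minimum 10, maximum 50; 25 percent of each sampled TIN/NPI's patients, minimum 5, maximum 50). The function and variable names are hypothetical, the inputs are assumed to be lists, and a QCDR may adopt any data validation methodology that satisfies the criteria above.

        import math
        import random

        def encouraged_sample_size(population: int, pct: float, floor: int, cap: int) -> int:
            # Clamp the percentage-based sample between the encouraged minimum and maximum,
            # never exceeding the available population.
            return min(population, max(floor, min(cap, math.ceil(population * pct))))

        def draw_validation_sample(tin_npis, patients_by_tin_npi):
            # Encouraged plan: sample 3 percent of submitted TIN/NPIs (minimum 10, maximum 50);
            # for each sampled TIN/NPI, review 25 percent of its patients (minimum 5, maximum 50)
            # for all measures applicable to each patient.
            sampled = random.sample(tin_npis, encouraged_sample_size(len(tin_npis), 0.03, 10, 50))
            return {
                tin_npi: random.sample(
                    patients_by_tin_npi[tin_npi],
                    encouraged_sample_size(len(patients_by_tin_npi[tin_npi]), 0.25, 5, 50))
                for tin_npi in sampled
            }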

    The following is a summary of the comments we received regarding information required at the time of self-nomination for QCDRs.

    Comment: A few commenters stated that the self-nomination process for QCDRs does not allow flexibility to update or change information in response to a CMS review or the previous year's experience. The commenters believed that CMS should review measures and the validation strategy after the prior year's measure submission or that CMS should have a process for allowing submission of modifications.

    Response: We agree that timely feedback regarding the possible elimination of non-MIPS measures is important and are committed to providing this information to QCDRs as early as possible. We cannot wait for the data to be sent in for the prior year's submissions before finalizing QCDR information for the next performance period, as doing so would mean that we could not publish the list of qualified QCDRs before the performance period. For example, this would mean that for the 2018 year, we would not be able to publish the list of QCDRs until summer 2018, which we believe is too late within the performance period for MIPS eligible clinicians to make their selection. That is, half of the performance period would have transpired before the list of qualified entities was publicly posted.

    Comment: One commenter requested that CMS remove the requirement for annual self-nomination when significant changes have not been made to the QCDR.

    Response: We will take this under consideration as we develop the criteria for QCDRs in future years.

    Comment: One commenter sought clarification from CMS regarding any changes to the QCDR self-nomination criteria for 2017 and beyond.

    Response: Any changes to the self-nomination criteria for QCDRs would be addressed in future rulemaking.

    Comment: Several commenters believed it was important to note that QCDR criteria for data validation plans are sufficient to ensure accuracy of data.

    Response: We agree with the commenters' input.

    Comment: Another commenter supported the criterion to have QCDRs submit risk-adjusted measure results.

    Response: We appreciate the commenter's support.

    Comment: One commenter stated that MIPS eligible clinicians should not be held responsible for errors or delays by third party intermediaries. The commenter stated that CMS should require testing of, and provide data validation for, the data that EHR vendors and QCDRs submit to CMS. The commenter stated that CMS should then inform MIPS eligible clinicians about any errors found through the data validation process.

    Response: We are in the process of refining the testing process to facilitate accurate reporting. However, we note that MIPS eligible clinicians are ultimately responsible for the data that are submitted by their third party intermediary and we expect that they are holding their third party intermediary accountable for accurate reporting. Additionally, we plan to have a probation and disqualification process for QCDRs with high error rates, as discussed below, in this final rule with comment period, in the section entitled “Probation and Disqualification of a Third Party Intermediary.” While we do not want to remove any QCDRs from participation, it is imperative that we (and MIPS eligible clinicians) receive accurate and actionable data.

    Comment: Another commenter stated that May 31 of the year following the performance period is too soon to provide the results of the executed data validation plan and that a June 30 deadline would be better.

    Response: We appreciate the commenter's concerns. However, we believe it is important to allow MIPS eligible clinicians time to select a QCDR before the performance period. As part of a QCDR's continued participation in the program, CMS reviews the data validation execution reports; the May 31 date is therefore needed to support earlier publication of the following program year's QCDRs.

    Comment: Another commenter stated it is critical that CMS work with QCDRs to ensure that CMS can accept formats that allow each registry to demonstrate the unique features of its data, especially embedded risk adjustment.

    Response: We encourage that all non-MIPS measures be risk-adjusted (where appropriate), but it is up to the QCDR and the measure developer or owner to define the risk-adjustment elements and methodology for the measure.

    Comment: One commenter requested that CMS: (1) Disclose publicly the criteria that non-MIPS measures must meet to be approved; (2) articulate the circumstances under which a QCDR may be approved, but not its specialty-specific measures; and (3) delineate the practical implications for a QCDR that is approved through the self-nomination process when its non-MIPS measures are not.

    Response: Approval of non-MIPS measures is part of the QCDR approval process. In cases where NO non-MIPS measures are approved but the QCDR is approved, the QCDR can elect to participate in the program reporting any MIPS measures the QCDR so chooses. We have included, in this final rule with comment period, in the section below entitled “Identifying Non-MIPS Quality Measures” the elements that will factor into CMS' non-MIPS measure approval process.

    Comment: Some commenters recommended that CMS allow a 3-year period of automatic measure approval through the QCDR self-nomination process.

    Response: We do not believe that measures should be automatically approved for use by a QCDR for 3 years. As the science changes or the evidence evolves, measures may need to be updated or changed altogether. Additionally, we do not guarantee that a QCDR will be a qualified entity for 3 years. The QCDR's tenure in the program is dependent on the QCDR's desire to continue participating in the program and meeting the criteria for the program (including submitting accurate data and measure results).

    After consideration of the comments received on the information required at the time of self-nomination for QCDRs, we are finalizing the policies as proposed. Specifically, a QCDR must provide the information described above to us at the time of self-nomination to ensure that the QCDR data is valid.

    (4) QCDR Criteria for Data Submission

    In addition, we proposed that a QCDR must perform the following functions:

    • For measures under the quality performance category and as proposed at § 414.1400(a)(4)(i), if the data is derived from CEHRT, the QCDR must be able to indicate this data source.

    • QCDRs must provide complete quality measure specifications including data elements to us for non-MIPS quality measures intended for reporting from CEHRT.

    • QCDRs must provide a plan to risk adjust (if appropriate for the measure) the non-MIPS quality measures data that they collect and intend to transmit to us and must submit the risk-adjusted results (not the non-risk adjusted rates) to CMS. The risk adjustment methodology (formula and variables) must be integrated with the complete quality measure specifications. Specifically, for risk-adjusted non-MIPS quality measures, a QCDR is required to provide details to us on its risk adjustment methodology. The data elements used for risk adjustment may vary by measure and measure type. The risk adjustment methodology, including the risk adjustment variables, must be posted along with the measure's specifications on the QCDR's Web site. We believe risk adjustment for certain outcomes measures is important to account for the differences in the complexities of care provided to different patients. That is, some patients may have additional comorbidities which could affect their response to treatment and subsequently their outcome. Risk adjustment will help offset potential poorer outcomes for those MIPS eligible clinicians caring for sicker patients.

    • QCDRs submitting MIPS quality measures that are risk-adjusted (and have the risk-adjusted variables and methodology listed in the measure specifications) must submit the risk-adjusted measure results to CMS when submitting the data for these measures.

    • Submit quality, advancing care information, or improvement activities data and results to us in the applicable MIPS performance categories for which the QCDR is providing data.

    • A QCDR must have in place mechanisms for the transparency of data elements and specifications, risk models, and measures. That is, we expect that the non-MIPS measures and their data elements (that is, specifications) comprising these measures be listed on the QCDR's Web site unless the measure is a MIPS measure, in which case the specifications will be posted by us.

    • Submit to us data on measures, activities, and objectives for all patients, not just Medicare patients.

    • Provide timely feedback, at least 6 times a year, on all of the MIPS performance categories that the QCDR will report to us. That is, if the QCDR will be reporting on data for the improvement activities, advancing care information, or quality performance category, all results as of the performance feedback date should be included in the information sent back to the MIPS eligible clinician. The feedback should be given to the individual MIPS eligible clinician or group (if participating as a group) at the individual participant level or group level, as applicable, for which the QCDR reports. The QCDR is only required to provide feedback based on the MIPS eligible clinician's data that is available at the time the performance feedback is generated.

    • Possess benchmarking capability (for non-MIPS quality measures) that compares the quality of care a MIPS eligible clinician provides with other MIPS eligible clinicians performing the same quality measures. For non-MIPS measures the QCDR must provide us, if available, data from years prior (for example, 2015 data for the 2017 MIPS performance period) before the start of the performance period. In addition, the QCDR must provide us, if available, with the entire distribution of the measure's performance broken down by deciles. As an alternative to supplying this information to us, the QCDR may post this information on their Web site prior to the start of the performance period, to the extent permitted by applicable privacy laws.

    • QCDRs must comply with any request by us to review the data submitted by the QCDR for purposes of MIPS in accordance with applicable law. Specifically, data requested would be limited to the minimum necessary for us to carry out, for example, health care operations or health oversight activities.

    • Mandatory participation in ongoing support conference calls hosted by us (approximately one call per month), including an in-person QCDR kick-off meeting (if held) at our headquarters in Baltimore, MD. More than one unexcused absence could result in the QCDR being precluded from participation in the program for that year. If a QCDR is precluded from participation in MIPS, the individual MIPS eligible clinician or group would need to find another QCDR or utilize another data submission mechanism to submit their MIPS data.

    • Agree that data inaccuracies, including (but not limited to) TIN/NPI mismatches, formatting issues, calculation errors, and data audit discrepancies, affecting in excess of 3 percent of the total number of MIPS eligible clinicians submitted by the QCDR may result in a notation of low data quality on our qualified QCDR posting and would place the QCDR on probation (if it decides to self-nominate for the next program year). If the QCDR does not reduce its data error rate below 3 percent in the subsequent year, it would remain on probation and its listing on the CMS Web site would continue to note the poor quality of the data it is submitting for MIPS. Data errors affecting in excess of 5 percent of the MIPS eligible clinicians submitted by the QCDR may lead to the disqualification of the QCDR from participation in the following year's program (these thresholds are illustrated in the sketch following this list). As we gain additional experience with QCDRs, we intend to revisit and enhance these thresholds in future years.

    • Be able to submit results for at least six quality measures including one cross-cutting measure and one outcome measure. If an outcome measure is not available, be able to submit results for at least one other high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures). If no outcome measure is available, then the QCDR must provide a justification for not including an outcome measure.

    • QCDRs may request to report on up to 30 quality measures not in the annual list of MIPS quality measures. Full specifications will need to be provided to us at the time of self-nomination. We will review the quality measures and determine if they are appropriate for QCDR reporting.

    • Enter into and maintain with its participating MIPS eligible clinicians an appropriate Business Associate agreement that provides for the QCDR's receipt of patient-specific data from an individual MIPS eligible clinician or group, as well as the QCDR's disclosure of quality measure results and numerator and denominator data or patient specific data on Medicare and non-Medicare beneficiaries on behalf of MIPS eligible clinicians and groups.

    • Obtain and keep on file signed documentation that each holder of an NPI whose data are submitted to the QCDR, has authorized the QCDR to submit quality measure results, improvement activities measure and activity results, advancing care information objective results and numerator and denominator data or patient-specific data on Medicare and non-Medicare beneficiaries to CMS for the purpose of MIPS participation. This documentation should be obtained at the time the MIPS eligible clinician or group signs up with the QCDR to submit MIPS data to the QCDR and must meet the requirements of any applicable laws, regulations, and contractual business associate agreements. Groups participating in MIPS via a QCDR may have their group's duly authorized representative grant permission to the QCDR to submit their data to us. If submitting as a group, each individual MIPS eligible clinician does not need to grant their individual permission to the QCDR to submit their data to us.

    • Not be owned and managed by an individual, locally owned, single specialty group (for example, single specialty practices with only one practice location or solo practitioner practices are prohibited from self-nominating to become a qualified QCDR).

    • Be able to separate out and report on all payers including Medicare Part B FFS patients and non-Medicare patients.

    • Provide the measure numbers for the MIPS quality measures on which the QCDR is reporting.

    • Provide the measure title for the MIPS quality measures and improvement activities (if applicable) on which the QCDR is reporting.

    • Report the number of eligible instances (reporting denominator).

    • Report the number of instances a quality service is performed (performance numerator).

    • Report the number of performance exclusions, meaning the quality action was not performed for a valid reason as defined by the measure specification.

    • Comply with a CMS-specified secure method for data submission, such as submitting the QCDR's data in an XML file.

    • Sign a document verifying the QCDR's name, contact information, cost for MIPS eligible clinicians or groups to use the QCDR, services provided, and the measures and specialty-specific measure sets the QCDR intends to report. Once posted on the QCDR's or CMS Web site, the QCDR will need to support the measures or measure sets it has confirmed. Failure to do so will preclude the QCDR from participation in MIPS in the subsequent year.

    • Must provide attestation statements during the data submission period that all of the data (quality measures, improvement activities, and advancing care information measures and objectives, if applicable) and results are accurate and complete.

    • For purposes of distributing performance feedback to MIPS eligible clinicians, collect a MIPS eligible clinician's email addresses and have documentation from the MIPS eligible clinician authorizing the release of his or her email address.

    • Be able to calculate and submit measure-level reporting rates or, upon request, the data elements needed to calculate the reporting and performance rates by TIN/NPI and/or TIN.

    • Be able to calculate and submit, by TIN/NPI and/or TIN, a performance rate (that is, the percentage of a defined population who receive a particular process of care or achieve a particular outcome, based on a calculation of the measure's numerator and denominator specifications) for each measure on which the TIN/NPI or TIN reports or, upon request, the Medicare beneficiary data elements needed to calculate the performance rates (see the sketch following this list).

    • Provide the performance period start date the QCDR will cover.

    • Provide the performance period end date the QCDR will cover.

    • Report the number of reported instances of performance not met, meaning the quality action was not performed and no valid reason, as defined by the measure specification, applied.

    • For data validation purposes, provide information on the entity's sampling methodology. For example, we encourage sampling 3 percent of the MIPS eligible clinicians, with a minimum sample of 10 MIPS eligible clinicians and a maximum sample of 50 MIPS eligible clinicians. For each MIPS eligible clinician sampled, we encourage reviewing 25 percent of the MIPS eligible clinician's patients (with a minimum sample of five patients and a maximum sample of 50 patients) for all measures applicable to the patient.

    • Submit all of the measures (MIPS measures and non-MIPS measures) including specifications for the non-MIPS measures to us on a designated Web page. The measures must address a gap in care. Outcome or other high priority types of measures are preferred. Simple documentation or “check box” measures are discouraged.
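
    The following is an illustrative sketch, not a CMS-specified format, of the arithmetic behind several of the items above: the reporting rate, the performance rate calculated from a measure's numerator, eligible instances, and performance exclusions, and the 3 percent and 5 percent data-error thresholds. The function names and example figures are hypothetical, and actual calculations are governed by each measure's specifications.

        def reporting_rate(reported_instances: int, eligible_instances: int) -> float:
            """Percentage of eligible instances (the reporting denominator) actually reported."""
            if eligible_instances == 0:
                return 0.0
            return 100.0 * reported_instances / eligible_instances

        def performance_rate(performance_numerator: int, eligible_instances: int,
                             performance_exclusions: int) -> float:
            """Percentage of the defined population receiving the quality action,
            after removing valid performance exclusions from the denominator."""
            denominator = eligible_instances - performance_exclusions
            if denominator <= 0:
                return 0.0
            return 100.0 * performance_numerator / denominator

        def data_error_status(clinicians_with_errors: int, clinicians_submitted: int) -> str:
            """Map a QCDR's data-error rate onto the probation and disqualification thresholds."""
            error_rate = clinicians_with_errors / clinicians_submitted
            if error_rate > 0.05:
                return "data errors exceed 5 percent: possible disqualification for the following year"
            if error_rate > 0.03:
                return "data errors exceed 3 percent: probation and low-data-quality notation"
            return "within thresholds"

        # Example: 2,000 eligible instances, 120 valid exclusions, 1,500 quality actions performed.
        print(round(performance_rate(1500, 2000, 120), 1))          # 79.8
        print(data_error_status(clinicians_with_errors=45,
                                clinicians_submitted=1000))         # probation (4.5 percent)

    This sketch only shows the measure-level arithmetic; the data elements themselves must still be captured and reported according to the approved measure specifications.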

    The following is a summary of the comments we received regarding QCDR criteria for data submission.

    Comment: A few commenters stated that some QCDRs were not designed to collect cross-cutting measures. Another commenter requested that CMS remove the requirement that MIPS eligible clinicians reporting the quality performance category via a QCDR must report on one cross-cutting measure and an outcome measure. The commenter believed CMS should provide flexibility in light of the QCDR's specialty focus. Another commenter was concerned that extending the cross-cutting measure requirement to QCDRs would lessen the utility of QCDRs for specialties that do not have directly applicable measures on the cross-cutting measure list, and noted that only one proposed (and problematic) cross-cutting measure was applicable to emergency medicine. Further, the commenter was concerned that the cross-cutting measure requirement threatened to undermine the QCDRs' original goal of providing specialties flexibility to report on truly meaningful measures that were not tethered to a traditional measure set.

    Response: As discussed in section II.E.5.b.(3) of this final rule with comment period, we have modified our proposal for the quality performance category for the transition year of MIPS. We are removing the requirement to report a cross-cutting measure and finalizing that for the applicable performance period, the MIPS eligible clinician or group would report at least six measures including at least one outcome measure. Due to this modification of criteria in the quality performance category, we are not finalizing the requirement that QCDRs must be able to report on a cross-cutting measure. We do strongly encourage, however, that where appropriate to their clients' scope of practice, these measures be incorporated. It is our expectation that QCDRs would be able to report program measures and their own non-program measures.

    Comment: Several commenters disagreed with the following QCDR criteria for data submission due to privacy concerns: (1) Submit to CMS data on measures, activities, and objectives for all patients, not just Medicare patients; and (2) be able to separate out and report on all payers including Medicare Part B FFS patients and non-Medicare patients. Another commenter sought clarification on why CMS wanted the submission of measures and activities on all patients from QCDRs, not just CMS beneficiaries. The commenters had concerns regarding HIPAA requirements.

    Response: We desire all-payer data for all submission mechanisms, to create a more comprehensive picture of the practice performance. Section 1848(q)(5)(H) of the Act authorizes the Secretary to include, for purposes of quality measurement and performance analysis, data submitted by MIPS eligible clinicians with respect to items and services furnished to individuals who are not Medicare beneficiaries. As discussed in section II.E.5.b. of this final rule with comment period, we are finalizing our proposal to require MIPS eligible clinicians to report all-payer data on quality measures where possible. We would like to explain that QCDRs should be able to supply sufficient information such that CMS, as well as the QCDR, can determine whether a given non-MIPS measure is “topped out,” as discussed in section II.E.5.c. of this final rule with comment period. Additionally, the information received by us is in the aggregate. That is, no personally identifiable health data is provided to CMS by registries or QCDRs.

    Comment: One commenter supported CMS' proposal to use all-payer data for the QCDRs, qualified registry, and EHR submission mechanisms. The commenter recommended that CMS work with RHICs to incorporate multi-payer claims and clinical data into reporting mechanisms, and support regional data aggregators engaged in measurement and public or private reporting. In addition, the commenter recommended that CMS do more to incorporate all-payer data, including enabling data sharing through regional intermediaries.

    Response: We appreciate the commenters' support and will consider the suggestions in future rulemaking.

    Comment: Some commenters stated they are concerned about administrative burden of data collection and measure reporting, especially the infrastructure changes that are necessary to be identified as a QCDR.

    Response: We recognize that for those organizations that choose to become a QCDR there are certain requirements that must be met, which may be construed as burdensome. We would like to explain, however, that there is no requirement for any individual or organization to become or report via a QCDR. We will continue to work with stakeholders to ensure that any of our requirements do not become overly burdensome, but instead provide flexibilities both to the entities seeking to become a QCDR as well as to MIPS eligible clinicians.

    Comment: Several commenters supported CMS' proposal to allow QCDRs to submit data for the quality, advancing care information, and improvement activities performance categories but noted that many QCDRs may only be able to submit data on the quality performance category. One commenter encouraged CMS to keep reporting in the three categories optional for QCDRs in the future. This commenter opposed the proposal to allow health IT vendors and qualified registries to submit data for all the MIPS categories, expressing concerns that this would be an unintended disincentive for these entities to become interoperable and that health IT vendors would have access to enormous amounts of data. Another commenter recommended that CMS provide significant flexibility with timelines to allow translating data into non-MIPS measures for inclusion in QCDRs. The commenter believed requiring QCDRs to submit data for all non-claims based MIPS performance categories will add to the value that they provide, although additional specifics related to submissions for the three categories are needed.

    Response: We appreciate the commenters' support to allow QCDRs to submit data for the quality, improvement activities, and advancing care information performance categories. To explain, we did not propose to require that QCDRs must have the capability to submit data for all performance categories; rather, we proposed that they would have the option to do so. We intend to provide flexibility to allow translating data into non-MIPS measures for inclusion in QCDRs.

    Comment: Some commenters were concerned about expanding QCDRs' capabilities by allowing them to submit data on measures, activities, or objectives from quality, improvement activities, and advancing care information performance categories. The commenters were concerned that QCDRs would provide quality measure specifications including data elements for non-MIPS quality measures intended for reporting from CEHRT, thus allowing CMS to collect any data CMS wants by collecting it as a non-MIPS measure.

    Response: We do not specify what measures QCDRs should develop, nor do we require that a specific QCDR be used by MIPS eligible clinicians or even a specific measure within the QCDR be submitted by a particular MIPS eligible clinician. Please reference the criteria for approval of non-MIPS measures discussed in section II.E.9.a.(6) entitled “Identifying Non-MIPS Quality Measures” of this final rule with comment period.

    Comment: Other commenters requested that CMS continue to recognize QCDR-related activities on this list and to allow QCDRs to define specific improvement activities for MIPS eligible clinicians through the already-established QCDR approval process for measures and activities.

    Response: We agree with this comment and will consider this in future program years as we gain more experience with the improvement activities performance category.

    Comment: Several commenters did not endorse mandatory participation in the support calls, nor did they endorse mandatory in-person attendance at the QCDR kick-off meeting in Baltimore, MD, or the proposal that more than one unexcused absence could result in the QCDR or registry being precluded from participation in the program. The commenters believed the proposed data submission, validation, and ongoing auditing criteria are sufficient motivators to encourage QCDRs and qualified registries to utilize the support resources provided.

    Response: We respectfully disagree. We believe mandatory participation in support calls and attendance at the QCDR kick-off meeting are important to help improve the reliability of the data CMS receives for scoring in MIPS. As the number of QCDRs increases and the complexity of the program grows, it may be necessary to have an in-person meeting at CMS central office in Baltimore, MD to convey the necessary information to QCDRs. As such, CMS wants assurance from potential QCDRs that they will attend an in-person meeting yearly if needed.

    Comment: Another commenter requested clarification in the final rule with comment period on whether or not a MIPS eligible clinician's email address and release documentation is a requirement when metrics are being reported at the group level instead of the individual level.

    Response: When reporting at the group level, we require that the QCDR or qualified registry obtain permission to submit data to CMS from the person authorized by the group to make decisions regarding participation in the Quality Payment Program. The QCDR or registry should maintain this documentation for 6 years and 3 months but does not need to send it to CMS unless requested. Similarly, the email of the group's representative should also be collected by the QCDR or qualified registry.

    Comment: A few commenters agreed with the proposal that once the QCDR elects to submit specific measures, they must support those services or measures. The commenters requested that CMS provide clarification that throughout the performance period, QCDRs should be able to add MIPS measures to their lists of available services or measures as they have time to build those throughout the year. The commenters noted that they do not want to be limited to only supporting MIPS measures signed up for in November prior to the performance period, as final lists of available MIPS measures will not be available to the QCDR until that same November prior to the performance period. The commenters further noted that QCDRs will need time to include additional measures released by CMS in the final list of MIPS measures available for the performance period.

    Response: We agree with the commenters and will allow for this flexibility to the fullest extent feasible. QCDRs are still required to submit their MIPS and non-MIPS measures to us by the deadlines of January 15 for the first performance period and by November 1 prior to the performance period for future years for review and approval by us. We will, however, on a case-by-case basis, allow QCDRs and qualified registries to request review and approval of additional MIPS measures throughout the performance period. Any new measures that are approved by us will be added to the information related to the QCDR or qualified registry on the CMS Web site, as technically feasible. We anticipate only being able to update this information on the Web site on a quarterly basis, as technically feasible. We would like to explain that this flexibility would only apply for MIPS measures; QCDRs will not be able to request additions of any new non-MIPS measures throughout the performance period. Lastly, we note that QCDRs will not be able to retire any measures they are approved for during the performance period. Any measures QCDRs wish to retire would need to be retained until the next annual self-nomination process and applicable performance period.

    Comment: Some commenters recommended that CMS further incentivize the creation of specialty-wide registries that ensure data collection efforts are not limited to data from individual EHRs; QCDRs must support whole specialties or disease categories.

    Response: There are no provisions in the statute that expressly allow for this specialty-specific incentive. Furthermore, we believe that specialists should determine if there is a need for a specialty-specific QCDR.

    Comment: Other commenters requested that, when a QCDR measure steward licenses a measure to another QCDR, the licensed measure not count toward the licensee's 30 non-MIPS QCDR measure limit. The commenters requested that CMS provide adequate protections to safeguard any intellectual property associated with a measure steward's risk adjustment methodology, especially in regard to posting non-MIPS QCDR measure specifications.

    Response: It is the responsibility of the measure owner to address intellectual property safeguard concerns for non-MIPS measure specifications. Licensed non-MIPS measures will still be counted toward the total non-MIPS measures allowed for each QCDR that utilizes the measure. The work to incorporate the non-MIPS measure into the CMS system does not change whether the measure is reported by one QCDR or by multiple QCDRs.

    Comment: A few commenters stated that CMS should allow QCDRs to give other QCDRs permission to use its measures. The commenters believed sharing measures across QCDRs allows similar types of MIPS eligible clinicians (for instance, those in a particular subspecialty) to report the same measure regardless of their TIN structure. In addition, the commenters stated CMS should request that when sharing these measures, QCDRs collaborate to establish benchmarks.

    Response: We agree with this comment and currently allow and encourage the sharing of non-MIPS measures including benchmarking data, if desired, between QCDRs.

    Comment: Another commenter stated CMS can improve the QCDR submission process by providing more guidance during the validation process, giving feedback on submission accuracy (and making the vendor responsible for submitting corrected data), and providing validation on calculated reporting and performance rates as data are submitted (including flagging errors).

    Response: We provide aggregate information on data issues to each QCDR each year, that is, the number and types of errors for the individual QCDR. We agree with the commenter that providing these additional data elements would be beneficial. We are currently exploring the operational feasibility of this under the MIPS.

    Comment: A few commenters did not support the proposal to require performance feedback at least six times a year. Rather, the commenters encouraged four performance feedback reports per year, to allow a greater sample size in each report and additional time to risk-adjust measures. Another commenter similarly stated that four performance feedback reports would allow a greater sample size in each report and additional time to risk-adjust measures.

    Response: While we believe “real-time” feedback should be the goal for QCDRs, we acknowledge the extra burden six performance feedback reports will place on some entities and, as such, will modify the requirement to four performance feedback reports per year for the transition year of MIPS. However, please note we intend to increase the number of required performance feedback reports to six by MIPS payment year 2020 and will propose to require “real time” feedback as soon as it is technically feasible.

    Comment: Some commenters stated that if a QCDR or other entity does not submit accurate data, then the MIPS eligible clinicians using that reporting mechanism should not be penalized and instead should be assessed as “average” for the impacted performance category(ies).

    Response: We do not guarantee that QCDRs will be successful in submitting data to us. MIPS eligible clinicians should carefully consider the reputation of the entity when making their vendor selection. We note that practices are ultimately responsible for the data that are submitted by their third party intermediaries and expect that they are ultimately holding their third party intermediaries accountable for accurate reporting. We are planning to note entities with high data errors on the published list of QCDRs in the future. Please refer to section II.E.6. of this final rule with comment period for further information on scoring.

    Comment: Other commenters stated it would be helpful for CMS to inform stakeholders of calculation errors and anything that does not comply with specifications, such as zero rates, as early as possible. The commenters stated that if testing requires any type of practice audit or request for information from practices for data validation purposes, CMS should inform vendors of any communication to practices so that vendors can work with CMS to ensure that practices understand the purpose of the validation request. In addition, the commenters stated that in advance of, or concurrent with, updates to quality measures, CMS should clearly identify a timeline when testing tools will be available and at what point the version will be “static.” Finally, the commenters stated that suggested milestones should be made available so that health IT vendors can incorporate measure testing into their product's timeline.

    Response: We currently report many types of errors to the submitting entity at the time of submission. Additionally, timelines, notification of specifications, and the availability of the testing tool are reviewed on each support call (held monthly leading up to and during the submission window).

    Comment: A few commenters stated that while CMS provides proposals for third party intermediaries to be disqualified due to data errors, the commenters believed it was important to establish a standardized testing process in the beginning, prior to the data submission period, so the data are as accurate as possible when they are analyzed for the purpose of scoring MIPS eligible clinicians and groups. The commenters stated CMS should offer a voluntary testing window each quarter. The commenters added that vendors that opt to take advantage of this testing window should receive feedback on whether files are transmitted appropriately.

    Response: We currently offer pre-submission testing for QCDRs under PQRS and intend to continue to offer a similar function under MIPS. We cannot currently provide a timeline for availability of this testing function but we do note that it will be made available to QCDRs prior to the submission period. We envision that this testing function will mimic the submission process as closely as technically feasible. We will provide additional details on this testing process through QCDR support calls and at QualityPaymentProgram.cms.gov.

    Comment: Another commenter requested additional clarity regarding the requirement to provide information on the entity's process for data validation for both individual MIPS eligible clinicians and groups within a data validation plan. While the commenter believed it was reasonable to expect vendors who are also registries to perform quality assurance testing to confirm that calculations are correct and based on the data in the fields being sampled, the proposal suggests a more detailed review of individual patients' charts, which the commenter believed would be impossible for vendors who are receiving only an extract of the fields necessary to calculate measures and not extracting the entire record.

    Response: We do not mandate the specifics of the data validation strategy; rather, we suggest examples of previously accepted plans. It is the responsibility of the entity to ensure the data given to them by the MIPS eligible clinician is both accurate and complete. The attestation statement required at the time of submission requires the entities to stand behind the data they submit. Entities may need to work with their MIPS eligible clinicians to have the needed chart access for data verification.

    Comment: Some commenters supported CMS' efforts to ensure the integrity of data and appreciated the proposal to provide an initial probationary period where the entity is given the opportunity to correct identified issues. However, the commenters stated that immediate disqualification could adversely affect entities, such as QCDRs, that fail to meet data integrity standards because of lack of experience or an unintentional error. The commenters noted it would also adversely impact the MIPS eligible clinicians who rely on these entities to satisfy federal quality reporting mandates.

    Response: We appreciate the comment and do not want QCDRs to be eliminated from the program; however, we must balance our goal of QCDR inclusion with the need to receive accurate and usable data. Neither the MIPS eligible clinicians nor the MIPS program will benefit from inaccurate data, as known inaccurate data cannot be used in the program for payment or calculation of benchmarks. We refer readers to section II.E.9.e. entitled “Probation and Disqualification of a Third Party Intermediary” of this final rule with comment period for more information on probation and disqualification of third party intermediaries.

    Comment: Other commenters requested more transparency concerning the review of non-MIPS measures in QCDRs. The commenters noted that in the past the review of these measures has been conducted with limited input from the measure owners, and with less than 24 hours to formulate a response. The commenters believed with clearer guidance, this process could be more effective at identifying gaps in care.

    Response: We have provided additional clarification, in this final rule with comment period in section II.E.9.a.(6) entitled “Identifying Non-MIPS Quality Measures,” for the criteria we will use in considering measures and their suitability for the MIPS program. As the measures are expected to be fully developed prior to self-nomination, the additional requested information should be readily available to the QCDR. Additionally, QCDRs will not be given less than 24 hours to respond to CMS when initially being asked for measure clarification.

    Comment: A few commenters supported the proposal allowing QCDRs to submit either XML or QRDA formats. The commenters believed that these format determinations were best made by each individual QCDR. The commenters appreciated that CMS is not proposing to require QCDRs to use only QRDA for capturing and transmitting data. The commenters stated that CMS should work with registries and other stakeholders to identify emerging standards that support a more scalable and flexible data reporting format.

    Response: We appreciate the commenters' support. We will continue to work with QCDRs and other stakeholders to identify and improve our data transmission formats and methods.

    Comment: Other commenters supported CMS' proposals to allow QCDRs to define specific improvement activities for specialty and non-patient facing MIPS eligible clinicians through the existing QCDR approval process for measures and activities.

    Response: We appreciate the support.

    Comment: Some commenters opposed the proposal that the QCDR must be able to indicate the data source if the data was derived from CEHRT because it would be difficult to require QCDRs to parse out which data fields are populated from EHRs.

    Response: This information is necessary to give MIPS eligible clinicians additional credit for using CEHRT for the quality performance category. These bonus points are described in more detail in section II.E.6.a.(2)(f) of this final rule with comment period.

    Comment: Other commenters did not believe QCDRs should be held responsible for TIN/NPI mismatches, as QCDRs rely on the MIPS eligible clinicians to provide accurate TIN/NPI information. Rather, the commenters requested that CMS allow QCDRs to run tests similar to SEVT testing, ideally in the middle of the performance period, to allow QCDRs to determine whether TIN/NPI inaccuracies exist.

    Response: We are exploring the technical feasibility of allowing this type of testing under the MIPS.

    Comment: Some commenters supported the requirement for vendors to complete CMS-sponsored submission testing and requested that CMS include, in its testing tools and Submission Engine Validation Tool (SEVT) process, validation of data content as well as format.

    Response: We support the commenter's sentiment. As noted previously, we intend to offer a pre-submission testing process that will mimic as closely as possible the MIPS submission process QCDRs would experience.

    Comment: A few commenters: (1) Supported allowing flexible reporting options, such as contracting with third party submitters to report on behalf of QCDR owners, and agreed with CMS that third party intermediaries should meet all criteria designated by CMS as a condition of their qualification or approval to participate in MIPS; (2) agreed that requiring the use of the QRDA could continue to be a reporting impediment for XML-based third party submitters; (3) concurred that CMS should be cautious in too quickly moving entities to a probationary phase because of difficulties encountered while making good faith efforts to comply with CMS' complex processes; and (4) believed that aligning CMS processes with ONC certification requirements would be highly preferable to adding an additional CMS process to assure CMS form and manner requirements are met. Other commenters generally agreed with the proposal to require data submission vendors to submit data in the form and manner approved by CMS. In addition, they agreed with the proposal to allow the vendor to submit data for three performance categories through the third party intermediary.

    Response: We agree and appreciate the support. We will monitor readiness, explore areas to streamline, and align electronic clinical quality measure (eCQM) development, testing, certification of products to the eCQM specifications and use of these measures in CEHRT and in reporting. Some QCDRs may choose to certify and may be working toward eCQM development, and CMS and ONC are committed to supporting this effort; however, we recognize that readiness among QCDRs even for MIPS eCQM certification is varied. We recognize that QCDRs may use data other than or in addition to that available from CEHRT for their measures. In addition, some QCDRs are already successfully collecting and reporting measures for CMS programs without use of standards-based formats. Therefore, we are not requiring QCDRs be, use, or connect to CEHRT in order to report data under any MIPS performance category.

    Comment: A few commenters supported the proposal regarding QCDRs and other intermediaries providing feedback to participants on quality measures.

    Response: We appreciate the commenters' support.

    Comment: Another commenter strongly supported CMS maintaining its current policy for reporting criteria in which QCDRs have a choice regarding public reporting strategies.

    Response: We appreciate the commenters' support. We refer readers to section II.E.10. of this final rule with comment period for final policies regarding public reporting on Physician Compare.

    Comment: Other commenters suggested providing flexibility for QCDRs. The commenters appreciated the proposals to foster the growing acceptance of QCDRs in clinical care, but stated it can only be achieved if CMS recognizes that QCDRs need the flexibility to incorporate measures into the registry as each specialty or clinician field sees fit for its patient population.

    Response: We appreciate the comment; however, there are basic criteria for quality measures to be included in our program. These are outlined in section II.E.5.c. of this final rule with comment period.

    Comment: Some commenters requested clarification as to why CMS is proposing measure requirements that may not be relevant to the data the registry collects, especially when QCDR measures will be held to such a high threshold of review.

    Response: QCDR measures are expected to at least meet the regular MIPS measures requirements. Measures included in MIPS also undergo scrutiny including having to go through the MUC/MAP process. If the commenter is questioning why we require certain data elements such as the source of the data (that is, EHR, web portal, claims, etc.), this is needed to provide bonus points, when applicable, to MIPS eligible clinicians who are using certified EHR technology to collect and manage quality measures data.

    Comment: Some commenters recommended extending the deadline for QCDR submission of measures to April 30th following the performance year because American College of Surgeons (ACS) registries used as QCDRs generally have a lock date of 90 days past the date of surgery to allow ample time to track outcomes for which no data have yet been received. Following the 90-day lock date, time is needed for data analysis and risk adjustment. The commenters indicated that the current submission deadline would not permit the submission of data for October, November, or December, which is a high-volume period for surgery.

    Response: We acknowledge the commenter's concern, but we cannot extend the data submission timeframe and still have adequate time to process the information and make the appropriate calculations for accurate scoring for the MIPS.

    Comment: Other commenters requested that CMS provide clearer guidance, well in advance of the QCDR self-nomination process, on what specific criteria must be met for a measure to fall into each specific high priority measure category.

    Response: The measures that are considered high priority are outcomes, appropriate use, patient safety, efficiency, patient experience, and care coordination.

    Comment: One commenter encouraged CMS to work with QCDR applicants to provide them feedback on measures that are submitted in the application process.

    Response: We have held calls with potential QCDRs in the past to discuss measure issues and potential measures. In addition, we have worked and will continue to work with potential QCDRs and provide feedback on self-nominated measures.

    Comment: Other commenters were very disappointed with CMS's decision not to adopt new policies or procedures to implement section 105(b) of MACRA (Pub. L. 114-10) which requires CMS to provide QCDRs with access to Medicare data for purposes of linking such data with clinical outcomes data and performing scientifically valid analysis or research to support quality improvement or patient safety. The commenters believed that CMS also ignored the fact that section 105(b) of MACRA is intended to provide QCDRs with access to Medicare data for quality improvement purposes, not just research, and that the broad and continuous access needed for quality improvement purposes is fundamentally different than the access to Medicare data for research purposes provided by the Research Data Assistance Center (ResDAC). The commenters stated that CMS's decision not to issue a rule implementing section 105(b) of MACRA violates the black letter principles of statutory construction. The commenters believed CMS should match Medicare claims data with Social Security Death Master File (SSDMF) death data before providing it to QCDRs to greatly enhance the accuracy and robustness of the Medicare claims data. Some commenters stated that the Secretary should match Medicare claims data with SSDMF data before providing it to QCDRs. Because the commenters believed that the ultimate purpose for accessing death data was to enhance the accuracy of patient outcomes information, including verification of patient life status and date of death, and not the acquisition of the actual death data set itself, QCDRs would greatly benefit from the Secretary matching Medicare claims data with SSDMF death data to verify patient death status, and sharing the matched data set with QCDRs.

    Response: We recently finalized regulations at 42 CFR 401.722 to implement section 105(b) of MACRA. As discussed in the Medicare Program; Expanding Uses of Medicare Data by Qualified Entities final rule published in the July 7, 2016 Federal Register (81 FR 44471), we recognize that the research request pathway may not be consistent with the types of analyses QCDRs envision conducting using the CMS data. As a result, we finalized regulations to allow QCDRs to serve as quasi-qualified entities. The qualified entity program, which was created by section 10332 of the Affordable Care Act and modified by section 105 of MACRA, authorizes us to provide standardized extracts of Medicare Parts A and B claims data and Part D event data to approved qualified entities. Qualified entities must combine the Medicare data with claims data from other sources and use the combined data to produce public performance reports on providers and suppliers. Qualified entities may use the combined data to conduct non-public analyses and provide or sell these analyses to certain authorized users. They may also provide or sell the combined data or provide the Medicare claims data alone at no cost to providers, suppliers, hospital associations, and medical societies.

    Under the regulations at § 401.722, QCDRs are allowed to serve as quasi qualified entities, provided the QCDR agrees to meet all the requirements of the program with the exception of the requirement at § 401.707(d) that the organization submit information about the claims data it possesses from other sources. In addition, for the purposes of QCDRs serving as quasi qualified entities, we defined combined data as, at a minimum, CMS claims data combined with clinical data or a subset of clinical data. We believe that the requirements of the qualified entity program create an appropriate framework for QCDRs to conduct analyses to support quality improvement and patient safety and to work directly with providers and suppliers on issues related to quality improvement and patient safety.

    With regard to the SSDMF, we recognize that death information is a key aspect of analyses of patient outcomes, but we do not have the authority to disclose the SSDMF to QCDRs. However, we have the date of death information for Medicare patients and we include this date of death information on the data files that are shared with qualified entities and those that are shared with QCDRs who are approved as quasi qualified entities.

    Comment: One commenter requested that CMS clarify whether QCDR quality data can be submitted through the QRDA standard and whether QCDRs may report eCQMs.

    Response: QCDRs may elect to report any MIPS measures, including eCQMs. Additionally, if the data required for a non-MIPS measure is captured electronically in the proper manner as defined in CEHRT, the data can be sent to the QCDR electronically and used as a non-MIPS eCQM. QCDRs will be able to use the data submission standard when submitting their MIPS eCQMs. Additional details will be provided on the Quality Payment Program data submission standard via QCDR support calls and at QualityPaymentProgram.cms.gov.

    Comment: Some commenters recommended that CMS provide MIPS eligible clinicians with cost estimates for electronic data submissions through registries and EHRs, as well as time estimates for submission of attestations through the CMS Web Interface to assist MIPS eligible clinicians in determining which submission method would be the least burdensome and most cost-effective.

    Response: We have information related to the burden of participation in section III.B.12. of this final rule with comment period. Additionally, we will post cost data for registries and QCDRs on the qualified posting list.

    Comment: A few commenters stated that the proposal should emphasize the role of QCDRs to ensure reporting and data submission are flexible, meaningful, and useful. The commenters stated that the proposed increase in the QCDR data completeness requirement from 50 to 90 percent would require reassuring MIPS eligible clinicians of the value of QCDR participation and reporting. One commenter requested that Medicare claims data access for QCDRs be considered in future rulemaking.

    Response: Based on the overwhelming feedback received, we do not intend to finalize the data completeness thresholds as proposed. The increased burden on MIPS eligible clinicians that commenters described in detail is not what we intended. We want to ensure that an appropriate, yet achievable, level of data completeness is applied to all MIPS eligible clinicians. Based on stakeholder feedback for the transition year of MIPS, as discussed in section II.E.5.b.(3)(b) of this final rule with comment period, we will finalize a 50 percent data completeness threshold for the claims, registry, QCDR, and EHR submission mechanisms. Additionally, we will take the commenter's request for access to Medicare claims data into consideration for future rulemaking.
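
    As a minimal sketch of the finalized 50 percent data completeness threshold, assuming a simple per-measure comparison of reported patients to eligible patients (the field names are illustrative, not a CMS-defined interface):

        def meets_data_completeness(reported_patients: int, eligible_patients: int,
                                    threshold: float = 0.50) -> bool:
            """True when the share of eligible patients actually reported meets the threshold."""
            if eligible_patients == 0:
                return False
            return reported_patients / eligible_patients >= threshold

        # Example: 620 of 1,000 eligible patients reported is 62 percent, which meets
        # the 50 percent threshold finalized for the transition year.
        print(meets_data_completeness(620, 1000))  # True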

    Comment: Some commenters stated that March 31 is a welcome extension from the February 28 submission deadline. They also stated that bi-annual and quarterly reporting would be advantageous only if CMS intends to provide timely quarterly feedback to MIPS eligible clinicians. The commenters stated that because of the added burden of submission throughout the reporting year, this reporting option would only be useful when CMS can provide feedback that quickly. Additionally, if quarterly reporting were required going forward, EHR vendors would need additional notice regarding measure additions and updates in order to prepare for an earlier submission period than had been required under annual reporting. Finally, the commenters stated that a January 1 start to the data submission period seems unnecessary since most practices are closed for the New Year holiday, and MIPS eligible clinicians need several days to compile their data after the last day of the performance period. The commenters suggested that CMS consider delaying the data submission period to January 15-April 15 so that reports could be compiled and tested for submission prior to the opening of submission. Additionally, the commenters suggested that the submission portal have fewer downtimes during the first quarter to accommodate MIPS eligible clinicians submitting their files, limiting maintenance in the first quarter to two scheduled downtimes, one in January and one in February, leaving all of March, when heavy data submission is occurring, free of downtime.

    Response: We cannot extend the submission period to April 15 and still process the data, calculate the final score and perform the other necessary tasks in time to make MIPS payment adjustments for the upcoming payment year. With respect to the downtime of our system, the system is shared by multiple components and programs at CMS and thus maintenance weekends must occur regularly throughout the year. We do note that we publish the scheduled maintenance weekends in advance so QCDRs have the ability to build these downtimes into their schedules for data submission.

    Comment: One commenter noted that they could not measure MIPS eligible clinicians by individual patient outcomes, but could measure and accredit team-based performance. The commenter's outcomes registry cannot be a qualified reporting registry for MACRA as currently proposed, because its outcomes are not and could never be physician-specific. The commenter suggested that CMS take advantage of the commenter's Center for International Bone Marrow Transplant Research registry, not only for evaluating team-based quality outcomes for hematopoietic stem cell transplant (HCT) patients but for assistance in helping other specialties with team-based care enhance their outcomes reporting.

    Response: CMS allows group reporting by qualified registries and QCDRs. If the “team” referred to in the comment practices under one tax identification number (TIN), the measures (if reported by a QCDR and approved by CMS) could be reported for all of the MIPS eligible clinicians under the particular TIN.

    After consideration of the comments received on the QCDR criteria for data submission, we are finalizing our policies as proposed, with the following exception: we have decided to alter the requirement to provide timely feedback to MIPS eligible clinicians six times a year. Rather, based on feedback from stakeholders, we will finalize the requirement as follows: Provide timely feedback, at least four times a year, on all of the MIPS performance categories that the QCDR will report to us. That is, if the QCDR will be reporting data for the improvement activities, advancing care information, or quality performance category, all results as of the performance feedback date should be included in the information sent back to the MIPS eligible clinician. The feedback should be given to the individual MIPS eligible clinician or group (if participating as a group) at the individual participant level or group level, as applicable, for which the QCDR reports. The QCDR is only required to provide feedback based on the MIPS eligible clinician's data that is available at the time the performance feedback is generated.

    Additionally, based on our policies finalized in section II.E.5.b.(3) of this final rule with comment period, we are not requiring MIPS eligible clinicians to submit data on cross-cutting measures. Therefore, we are finalizing the requirement at § 414.1335(a)(1)(i) for QCDRs as follows: Be able to submit results for at least six quality measures including one outcome measure. If an outcome measure is not available, be able to submit results for at least one other high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures). If no outcome measure is available, then the QCDR must provide a justification for not including an outcome measure.

    (5) QCDR Measure Specifications Criteria

    A QCDR must provide specifications for each measure, activity, or objective the QCDR intends to submit to CMS. We proposed at § 414.1400(f) that the QCDR must provide the following information:

    • Provide descriptions and narrative specifications for each measure, activity, or objective that it will submit to us, by no later than January 15 of the applicable performance period for which the QCDR wishes to submit quality measures or other performance category (improvement activities and advancing care information) data. In future years, starting with the 2018 performance period, those specifications must be provided to us by no later than November 1 prior to the applicable performance period for which the QCDR wishes to submit quality measures or other performance category (improvement activities and advancing care information) data.

    • For non-MIPS quality measures, the quality measure specifications must include: the name or title of the measure, the NQF number (if NQF-endorsed), and descriptions of the denominator, numerator, and, when applicable, denominator exceptions, denominator exclusions, risk adjustment variables, and risk adjustment algorithms (an illustrative sketch of such a specification follows this list). The narrative specifications provided must be similar to the narrative specifications we provide in our measures list. CMS will consider all non-MIPS measures submitted by the QCDR, but the measures must address a gap in care, and outcome or other high priority measures are preferred. Documentation or “check box” measures are discouraged. Measures that already have very high performance rates or that address extremely rare gaps in care (thereby allowing for little or no quality distinction between MIPS eligible clinicians) are also unlikely to be approved for inclusion.

    • For MIPS measures, the QCDR only needs to submit the MIPS measure numbers and the specialty-specific measure sets (if applicable).

    • The QCDR must publicly post the measure specifications (no later than 15 days following our approval of those measure specifications) for each non-MIPS quality measure it intends to submit for MIPS. The QCDR may use any public format it prefers. Immediately following posting of the measure specification information, the QCDR must provide us with the link to where this information is posted. We would then post this information when we provide our list of QCDRs for the year.
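
    To make the required specification elements concrete, the following is a minimal, illustrative sketch of how a QCDR might structure a non-MIPS quality measure specification internally before formatting it for submission or public posting. The field names, example values, and the MeasureSpecification class are hypothetical illustrations, not a CMS-defined schema or format.

        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class MeasureSpecification:
            """Hypothetical container for the narrative elements listed above."""
            title: str
            nqf_number: Optional[str]          # None if the measure is not NQF-endorsed
            denominator: str
            numerator: str
            denominator_exceptions: List[str] = field(default_factory=list)
            denominator_exclusions: List[str] = field(default_factory=list)
            risk_adjustment_variables: List[str] = field(default_factory=list)
            risk_adjustment_algorithm: Optional[str] = None

        # Example (entirely fictional measure) showing how the elements fit together.
        example = MeasureSpecification(
            title="Post-procedure functional outcome assessed within 90 days",
            nqf_number=None,
            denominator="All patients aged 18 and older undergoing the procedure during the performance period",
            numerator="Patients with a documented functional outcome assessment within 90 days of the procedure",
            denominator_exclusions=["Patients who died within 90 days of the procedure"],
            risk_adjustment_variables=["age", "baseline functional score"],
        )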

    The following is a summary of the comments we received regarding QCDR measure specifications criteria.

    Comment: A few commenters opposed CMS' proposal to have measure specifications submitted by January 15, as they do not believe this gives QCDRs enough time to determine which measures would be appropriate for MIPS following the issuance of the final rule with comment period, which is expected to be released in November 2016. Another commenter suggested QCDRs should be given until March 31 of the applicable performance period (that is, March 31, 2017, for the 2019 MIPS payment adjustment) to submit this information.

    Response: We thank the commenters for their feedback. We appreciate the concerns raised regarding the timelines for measure submission. We believe, however, that it is important that MIPS eligible clinicians can make their selection of measures prior to or at the onset of the performance period to ensure they can build these measures into their quality workflow.

    After consideration of the comments regarding the proposal on the QCDR measure specifications criteria we are finalizing the policies as proposed at § 414.1400(f).

    (6) Identifying Non-MIPS Quality Measures

    To explain the definition of a non-MIPS quality measure for purposes of QCDRs submitting data for the MIPS quality performance category, we proposed at § 414.1400(e) to consider the following types of quality measures to be non-MIPS quality measures:

    • A measure that is not contained in the annual list of MIPS quality measures for the applicable performance period.

    • A measure that may be in the annual list of MIPS quality measures but has substantive differences in the manner it is submitted by the QCDR. For example, if a MIPS quality measure is only reportable via the CMS Web Interface and a QCDR wishes to report this quality measure on behalf of its MIPS eligible clinicians, the quality measure would be considered a non-MIPS quality measure. This is because we would have only extracted the data collected for this quality measure using the CMS Web Interface, in which we utilize a claims-based assignment and sampling methodology to inform the groups on which patients they are to report, and reporting this quality measure via a QCDR instead of through the CMS Web Interface would require changes to the way the quality measure is calculated and reported to us. Due to these substantive changes, CMS would not be able to directly compare MIPS eligible clinicians submitting the quality measure using the CMS Web Interface to those submitting it using the QCDR. Thus, this would be considered a non-MIPS quality measure.

    • In addition, the CAHPS for MIPS survey currently can be submitted only by using a CMS-approved survey vendor. Although the CAHPS for MIPS survey is proposed for inclusion in the MIPS measure set, we consider the changes that would be needed to make the survey available for reporting by individual MIPS eligible clinicians (and not as part of a group) significant enough to treat the CAHPS for MIPS survey as a non-MIPS quality measure for purposes of reporting the survey via a QCDR. To the extent that further clarification on the distinction between a MIPS and a non-MIPS measure is necessary, we will provide additional guidance on our Web site.

    The following is a summary of the comments we received regarding identifying non-MIPS quality measures.

    Comment: A few commenters requested that CMS increase the number of allowed non-MIPS measures to well above 30, potentially incrementally on an annual basis. One commenter believed that retaining the 30-measure cap would limit the flexibility that QCDRs need to support MIPS eligible clinician reporting, particularly for MIPS eligible clinicians that have few MIPS measures available to report. Another commenter strongly recommended that CMS increase the cap of 30 measures within any given QCDR because doing so would allow multi-specialty groups composed of diagnostic radiologists and interventional radiologists to report via the same QCDR.

    Response: We appreciate the suggestion and will evaluate the feasibility of this request for future program years.

    Comment: One commenter supported the proposal for non-MIPS quality measure specifications for QCDRs.

    Response: We appreciate the support.

    Comment: Some commenters stated they strongly recommend that QCDRs maintain the authority to make an initial determination about how best to classify each of their measures, including whether it falls into a high priority category.

    Response: We will accept the QCDR's recommendation if the measure has been endorsed by NQF in a particular category. We reserve the right to not accept non-MIPS QCDR measures or the suggested category designated for the measure.

    Comment: One commenter requested more transparency in the non-MIPS quality measure approval process. The commenter requested that, rather than going through a rigorous approval process, CMS should require each QCDR to have a transparent, clearly-defined process for developing and updating the data elements and quality measures it utilizes. The commenter believed these processes should include an opportunity for public input, timelines for review and approval of new measures or changes to existing measures, a peer-review process, and adequate patient protections and consent procedures. The commenter believed QCDRs should identify data collection methods, including opportunities to collect patient-reported outcomes, and risk-adjustment strategies. The commenter stated that CMS should not dictate how each QCDR registry implements the standards, as flexibility is needed to respond to the evolving standard of care and the rigors and challenges of collecting data. In addition, the commenter encouraged CMS to work to incorporate these recommendations through future rulemaking.

    Response: We will take these suggestions into consideration in future rulemaking.

    Comment: One commenter requested clarification on whether QCDRs can report non-MIPS measures using the XML format with data extracted electronically from an EHR using applicable interoperability standards, and whether these measures would meet CMS' proposed end-to-end electronic reporting requirement to qualify for the electronic reporting bonus point.

    Response: We would like to explain that QCDRs will be able to report non-MIPS measures using the CMS-specified data submission standard. More specific details, including the full technical specifications for submitting non-MIPS measures to CMS for the 2017 performance period, will be issued via subregulatory guidance at QualityPaymentProgram.cms.gov. We refer readers to the quality performance category scoring discussion in section II.E.6.a.(2) of this final rule with comment period for more details.

    Comment: A few commenters noted that there are quality measures that do not require certification and sought clarity from CMS on their specific certification requirements. The commenters specifically questioned if a registry would need to be certified to § 170.315(c)(1) through (3) to submit quality measures electronically or if the use of QRDA data structure requires certification. Some commenters recommended that the reporting mechanism requirements include discussion about third party intermediaries with incomplete measure certification and recommended that clinicians only be required to exhaust measures that are MIPS certified.

    Response: While a registry, QCDR, or other third party intermediary is not required to be certified in order to submit MIPS eCQMs or non-MIPS measures and meet the requirements to qualify for the electronic reporting bonus, a registry may obtain certification to the CQM-related certification criteria at § 170.315(c)(1) through (3) to support the accuracy and standardization of clinician reporting. Registries are encouraged to seek certification to § 170.315(c)(4) (clinical quality measures—filter) if their services include reporting measure results to CMS or providing performance feedback to their participants at various levels of aggregation, such as individual clinician, patient, group, or population. We note that certification for the § 170.315(c) criteria is measure-specific and includes only those CQMs for which eCQM specifications have been published by CMS; however, these measures may use value sets and specifications that overlap with MIPS eCQMs. A registry may submit a MIPS eCQM either using health IT certified to import and calculate (§ 170.315(c)(2)) and report (§ 170.315(c)(3)) those MIPS measures, or using automated, verifiable software to process data, calculate, and electronically report Quality Payment Program-accepted non-MIPS or registry measures consistent with CMS-vetted protocols. In either case, the registry's participating eligible clinicians would in turn need to record the clinical data for those CMS-published measures in their CEHRT and export it to the registry in the required standard (HQMF or QRDA) using health IT certified to record and export (§ 170.315(c)(1)).

    The MIPS measures for which eCQM specifications are available can be readily identified by the presence of a CMS e-Measure ID and by the inclusion of “EHR” in the “Data Submission Method” column for that measure in Table A: Individual Quality Measures Available for MIPS Reporting in 2017 of the Appendix in this final rule with comment period. Specifications and additional information relevant to submitting eCQMs to CMS are available at QualityPaymentProgram.cms.gov.
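
    As a practical illustration of the identification rule described above, the following sketch filters a measure inventory for entries that have a CMS e-Measure ID and list “EHR” as a data submission method. The column names and the local file measure_inventory.csv are hypothetical stand-ins for the actual Appendix Table A, not an official CMS data feed.

        import csv

        def ecqm_available(row):
            """Return True if the row has a CMS e-Measure ID and lists EHR submission."""
            has_emeasure_id = bool(row.get("cms_emeasure_id", "").strip())
            submits_via_ehr = "EHR" in row.get("data_submission_method", "")
            return has_emeasure_id and submits_via_ehr

        # Hypothetical CSV export of the measure inventory; column names are illustrative.
        with open("measure_inventory.csv", newline="") as f:
            ecqm_measures = [row for row in csv.DictReader(f) if ecqm_available(row)]

        for row in ecqm_measures:
            print(row["measure_title"], row["cms_emeasure_id"])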

    A QCDR that is submitting non-MIPS measures is not required to use HQMF or QRDA, and may choose to use an API or other relevant standards supported by its participants' health IT to achieve standards-based access to quality measurement data. Because the HQMF and QRDA standards are familiar to many health IT vendors and EHR vendors, a registry might choose to use one or both of these standards to implement non-MIPS measures. In this case, we would encourage the registry to use the development and testing tools available via the CMS-ONC eCQI Resource Center Web site (https://ecqi.healthit.gov/), to the extent applicable to their measure development and implementation approaches.
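
    For registries weighing the standards choice discussed above, the following is a minimal sketch of assembling an aggregate result for a non-MIPS measure as a simple XML document. It is intentionally not the QRDA Category III schema (which is defined by the HL7 implementation guides and the CMS IGs); the element names and example values here are invented for illustration only.

        import xml.etree.ElementTree as ET

        def build_aggregate_report(measure_id, numerator, denominator):
            """Build an illustrative (non-QRDA) aggregate XML payload for one measure."""
            root = ET.Element("aggregateReport")
            measure = ET.SubElement(root, "measure", id=measure_id)
            ET.SubElement(measure, "numerator").text = str(numerator)
            ET.SubElement(measure, "denominator").text = str(denominator)
            rate = ET.SubElement(measure, "performanceRate")
            rate.text = f"{numerator / denominator:.4f}" if denominator else "N/A"
            return ET.tostring(root, encoding="unicode")

        # Example usage with fictional values.
        print(build_aggregate_report("EXAMPLE-QCDR-001", numerator=42, denominator=60))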

    Comment: Other commenters recommended that QCDR measures, particularly those focused on the improvement activities performance category, should be used to satisfy some requirements for improvement activities since there are no MIPS measures which are relevant to many subspecialists. The commenters stated that QCDR measures are not among the 200 measures that CMS has identified as being able to contribute to the quality performance category reporting score. The commenters stated that it appears that if MIPS eligible clinicians use QCDR measures as one of the six required measures, the method of scoring will penalize MIPS eligible clinicians for using non-standard measures.

    Response: The MACRA legislation requires four performance categories under the MIPS program. We cannot allow the reporting of QCDR measures that count in the quality performance category of the program to also count for the improvement activities performance category. If a MIPS eligible clinician uses a QCDR outcome measure as one of their six measures for the quality performance category, this would still count toward satisfying the quality reporting requirement. However, there are specific instances in which one activity may be applicable to two performance categories. For example, the CAHPS for MIPS survey is included under the quality performance category, as well as under the improvement activities performance category as a high-weighted activity in the Beneficiary Engagement subcategory noted in Table H of the Appendix in this final rule with comment period. In addition, certain improvement activities may count for bonus points in the advancing care information performance category if the MIPS eligible clinician uses CEHRT. Reporting extra outcome or other high priority measures would still earn the MIPS eligible clinician bonus points, as discussed in section II.E.6.a.(2)(e) of this final rule with comment period. Regarding adding improvement activities to the improvement activities inventory for future years, we refer readers to section II.E.5.f.(8)(b) of this final rule with comment period for the discussion on the annual call for activities.

    Comment: One commenter recommended that, as part of the call for quality measures, contributions be entered into a single pool of eCQM definitions that are reviewed to ensure each measure can be derived from CEHRT data and reported on using CEHRT. The commenter noted that the past practice of allowing various organizations to have different definitions, measures, and reporting formats has created unnecessary difficulty for clinicians and their CEHRT to effectively collect and report on the measure. The commenter further recommended that we arrive at a single definition for each measure that anyone with an interest in that measure can use, and a single report format, to make it easier to report to various organizations (CMS, registries, etc.) based on the same underlying data from the CEHRT. The commenter specifically recommended that eCQMs be that single definition for a measure and that the QRDA be the single report format.

    Response: We thank the commenter for their suggestion. We refer the commenter to section II.E.5.c.(2) of this final rule with comment period for more detail on our requirements for the MIPS call for quality measures. We agree that there is value in trying to streamline the measure specification standards and data submission standards. We do not believe, however, that the eCQM measure specification standard, specifically the Health Quality Measure Format (HQMF), or the QRDA data submission format can be that unified format for the transition year of MIPS. We will continue to evaluate this issue and address any changes in future notice and comment rulemaking.

    After consideration of the comments regarding the proposal on identifying non-MIPS quality measures, we are finalizing the policies as proposed at § 414.1400(e).

    (7) Collaboration of Entities To Become a QCDR

    In the CY 2016 PFS final rule (80 FR 71136 through 71138), we finalized our proposal to allow collaboration of entities to become a QCDR, based on our experience with entities wishing to become QCDRs for prior performance periods. We received feedback from organizations that expressed concern that an entity wishing to become a QCDR may not meet the criteria of a QCDR solely on its own. We believe this policy supporting entity collaboration should be continued under MIPS. Therefore, we proposed at § 414.1400 that an entity that may not meet the criteria of a QCDR solely on its own, but could do so in conjunction with another entity, would be eligible for qualification through collaboration with another entity.

    We proposed to allow that an entity that uses an external organization for purposes of data collection, calculation, or transmission may meet the definition of a QCDR provided the entity has a signed, written agreement that specifically details the relationship and responsibilities of the entity with the external organization effective as of September 1 the year prior to the year for which the entity seeks to become a QCDR (for example, September 1, 2016, to be eligible to participate for purposes of the 2017 performance period). Entities that have a mere verbal, non-written agreement to work together to become a QCDR by September 1 the year prior to the year for which the entity seeks to become a QCDR would not fulfill this proposed requirement. We requested comments on these proposals.

    The following is a summary of the comments we received regarding collaboration of entities to become a QCDR.

    Comment: Some commenters recommended the deadline for a written agreement between entities collaborating to become a QCDR be November 1 rather than September 1 to align with the November 1 deadline to self-nominate.

    Response: We require this element to be completed at the beginning of the self-nomination period to enable and encourage QCDRs to self-nominate as early as possible.

    Comment: Some commenters were not in favor of allowing entities that do not meet the QCDR criteria to collaborate with external organizations to qualify as QCDRs. The commenters were concerned that the language of this provision is so broad that it would allow health IT vendors and other commercial entities to become QCDRs without any participation of MIPS eligible clinician-led professional organizations that are focused on quality improvement relating to specific medical procedures, conditions, or diseases. The commenters believed language should be clarified to state that QCDRs that involve multiple organizations must be led and controlled by MIPS eligible clinician-led professional organizations or similar entities that are focused on quality improvement relating to particular types of medical procedures, conditions, or diseases.

    Response: Many specialty societies including subspecialty groups may not have the resources to develop the software platform needed to be a QCDR and thus partner with outside entities to support their QCDR. We believe that prohibiting specialty groups from partnering with outside entities would only serve to harm smaller societies and possibly prevent their participation in MIPS or at least limit their ability to measure and report data meaningful to their practice.

    After consideration of the comments received on the collaboration of entities to become a QCDR we are finalizing the policies as proposed. Specifically, we are finalizing at § 414.1400 that an entity that may not meet the criteria of a QCDR solely on its own but could do so in conjunction with another entity, would be eligible for qualification through collaboration with another entity.

    b. Health IT Vendors and Other Third Parties That Obtain Data From MIPS Eligible Clinician's CEHRT

    Currently, clinicians seeking to meet CMS quality program technology requirements must use EHR technology that is certified and meets the CEHRT definition established under the EHR Incentive Programs at 42 CFR 495.4. The Office of the National Coordinator for Health Information Technology (ONC) Health IT Certification Program has established standards and other criteria for structured data that EHRs must use in order to be successfully tested and certified. We proposed to maintain this standard and to require that EHR-based data submission (whether transmitted directly from the EHR or from a data intermediary) come from technology meeting the CEHRT definition in order to submit quality measures, advancing care information, and improvement activities data for MIPS. In addition, we proposed at § 414.1400(a)(4) that health IT vendors that obtain data from a MIPS eligible clinician's CEHRT, like other third party intermediaries, would have to meet all criteria designated by us as a condition of their qualification or approval to participate in MIPS as a third party intermediary. This includes submitting data in the form and manner specified by us, as proposed at § 414.1400(a)(4)(ii). We anticipate that for the initial years of MIPS the form and manner requirements would be similar to what was used in the PQRS program. However, at a minimum these requirements will be modified to address the four performance categories under MIPS and MIPS data calculation needs. As we gain experience under MIPS, we anticipate that these form and manner requirements may change in future years to ease reporting burden. Historical form and manner requirements under the PQRS program are available at https://www.qualitynet.org/imageserver/pqrs/registry2015/index.htm or https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/QRDA_2016_CMS_IG.pdf. In addition, health IT vendors must comply with our QRDA Implementation Guides, which we anticipate will be similar to the one noted above, if submitting data from CEHRT. We anticipate providing further subregulatory guidance that would identify the CEHRT data formats in which clinicians must submit data. In addition, we proposed at § 414.1325(b)(2) and (c)(2) to allow individual MIPS eligible clinicians and groups to submit data using CEHRT for the quality, improvement activities, or advancing care information performance categories.

    Although section 1848(q)(5)(B)(ii)(I) of the Act specifically requires the Secretary to encourage MIPS eligible clinicians to report on applicable measures using EHR technology with respect to the quality performance category, the statute does not specifically address allowing a third party intermediary—such as a health IT vendor—to submit on a MIPS eligible clinician's behalf for the other performance categories. Although we could limit health IT vendors to submitting data for the quality performance category under MIPS, we believe it would be less burdensome for MIPS eligible clinicians if we expand health IT vendors' capabilities. By allowing health IT vendors to report on the quality, advancing care information, and improvement activities performance categories, we would alleviate the need for individual MIPS eligible clinicians and groups to use a separate mechanism to report data for these performance categories. Our intention is to encourage health IT vendors to design systems that are able to support new types of EHR reporting (for example, improvement activities and advancing care information) from MIPS eligible clinicians and groups—this would be in addition to the quality measure data that we already can accept. Therefore, we proposed at § 414.1400(a)(2) to expand health IT vendors' capabilities by allowing health IT vendors to submit data on measures, activities, or objectives for any of the following MIPS performance categories:

    • Quality;

    • Improvement activities; or

    • Advancing care information.

    As proposed at § 414.1400(a)(1), health IT vendors submitting data on behalf of a MIPS eligible clinician or group would be required to obtain data from the MIPS eligible clinician's CEHRT. We believe this approach would permit a single health IT vendor to report on quality, advancing care information, and improvement activities performance category requirements for MIPS on behalf of multiple eligible clinicians or groups and should mitigate the risks, costs, and burden of MIPS eligible clinicians having to report multiple times to meet the requirements of MIPS.

    Health IT Vendor Data Criteria

    We further proposed that health IT vendors must be able to do the following:

    • For measures, activities, and objectives under the quality, advancing care information, and improvement activities performance categories, as proposed at § 414.1400(a)(4)(i), if the data is derived from CEHRT, the health IT vendor must be able to indicate this data source.

    • Either transmit data from the certified EHR technology or through a data intermediary in the CMS-specified form and manner, or have the ability for the individual MIPS eligible clinician and group to be able to submit data directly from their CEHRT, in the CMS-specified form and manner.

    For MIPS eligible clinicians who choose to electronically submit quality, advancing care information, and improvement activities data extracted from their CEHRT to an intermediary, the intermediary would then submit the measure and activity data to us in a CMS-specified form and manner on the MIPS eligible clinician's behalf for the respective performance period. In addition to meeting the appropriate data submission criteria for the quality, advancing care information, and improvement activities performance categories for the MIPS EHR submission mechanism, MIPS eligible clinicians who choose the EHR submission mechanism would be required to have CEHRT meeting the proposed definition at § 414.1305. We requested comments on these proposals.
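
    The data flow described above (export from CEHRT, optional intermediary, then submission to CMS in the CMS-specified form and manner) can be summarized in the following sketch. The function names, the submit_to_cms call, and the payload fields are hypothetical placeholders; actual submissions must follow the form and manner CMS specifies in subregulatory guidance.

        from typing import Callable, Dict, List

        def ehr_submission_flow(
            export_from_cehrt: Callable[[], List[Dict]],
            package_in_cms_form: Callable[[List[Dict]], Dict],
            submit_to_cms: Callable[[Dict], str],
        ) -> str:
            """Illustrative pipeline: export from CEHRT, package, and submit.

            The intermediary step is optional; a clinician or group could submit
            directly from their own CEHRT-based tooling instead.
            """
            extracted = export_from_cehrt()                 # data exported or extracted from CEHRT
            payload = package_in_cms_form(extracted)        # CMS-specified form and manner (placeholder)
            payload["data_source"] = "CEHRT"                # intermediaries must indicate the data source
            return submit_to_cms(payload)                   # returns a hypothetical submission receipt ID

        # Example wiring with stub functions standing in for real systems.
        receipt = ehr_submission_flow(
            export_from_cehrt=lambda: [{"measure": "EXAMPLE-1", "numerator": 10, "denominator": 25}],
            package_in_cms_form=lambda rows: {"category": "quality", "results": rows},
            submit_to_cms=lambda payload: "RECEIPT-0001",
        )
        print(receipt)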

    The following is a summary of the comments we received regarding health IT vendors and other third parties that obtain data from an eligible clinician's CEHRT, referred to throughout this section as “health IT vendors.”

    Comment: Some commenters disagreed with the following health IT vendor data criterion: Either transmit data from the CEHRT or through a data intermediary in the CMS-specified form and manner, or have the ability for the individual MIPS eligible clinician and group to be able to submit data directly from their CEHRT, in the CMS-specified form and manner. The commenters stated that they believed this requirement would be an intrusion on the privacy of their patients.

    Response: The standards for submission for MIPS do not vary significantly from the existing standards and privacy protections in place for programs such as PQRS and the EHR Incentive Programs. We intend to maintain these important privacy protections for patients in any new system designed for MIPS reporting. However, we would like to explain that the policy outlined here is only a requirement for health IT vendors or other authorized third parties submitting data on behalf of a MIPS eligible clinician or group for participation in MIPS. In order to make such a submission, the health IT vendor or third party must meet the form and manner requirements, which include privacy and security standards. Additional details will be provided on the Quality Payment Program data submission standard at QualityPaymentProgram.cms.gov.

    Comment: A few commenters recommended making a requirement for health IT vendors to build and maintain products that meet federal specifications rather than forcing MIPS eligible clinicians to purchase and constantly upgrade expensive often-bulky systems. Another commenter encouraged standards developers to introduce accelerated cycle times for updating standards, especially new and modified standards required to support automation of quality reporting along with incorporating a degree of flexibility to accommodate the needs of a rapidly changing health IT landscape.

    Response: We thank the commenters for their feedback and note that our intent is to align the adoption of the key health IT standards across CMS quality reporting programs. That is to say, the standards we adopted for the use of certified health IT are industry standards which are first reviewed, analyzed, and adopted by the Secretary and are a part of ONC's Health IT Certification Program. These standards include key language for capturing structured data and document formats which are used by CMS programs and within the wider health care arena. In order to accommodate new standards and modifications as they arise, ONC periodically releases a new Edition of certification criteria which includes important updates as well as new functions to support clinicians leveraging health IT for patient care. CMS maintains a definition of CEHRT which requires clinicians to transition to new Editions over time in a consistent manner across our programs referencing certified health IT. These updates are essential to ensuring clinicians are using standards with high efficacy, accuracy, and the appropriate patient safety and security protocols.

    We established the definition of CEHRT to set the federal specifications for clinicians using certified health IT within our programs. For a certified health IT product or products to meet the CEHRT definition, they must include Health IT Modules that are certified to certification criteria under the ONC Health IT Certification Program and are related to certain CMS program reporting. These include the ability to calculate the advancing care information measures, as well as the ability to accurately capture and export CMS eCQMs. Further, the CEHRT definition includes functions which reference a wide range of file transport standards, including the CCDA and QRDA formats, as well as the API functionality. CMS and ONC then work together to further develop and publish specifications for health IT vendors which meet the form and manner requirements for reporting to our programs. For the Quality Payment Program, CMS and ONC will work to ensure that similar processes and testing of specifications are completed to support accurate and efficient CEHRT-based reporting for program participants. Additional details will be provided on the Quality Payment Program data submission standard at QualityPaymentProgram.cms.gov. For further information on the CEHRT definition adopted for the Quality Payment Program at § 414.1305, we direct readers to section II.E.5.g. of this final rule with comment period.

    Comment: A few commenters believed that, based on the proposal, it is difficult to determine whether health IT vendors and third party data submission vendors are held to the same standards. The commenters requested that health IT vendors be held to the same criteria as QCDRs under MIPS, including providing regular feedback to participants and explaining methodologies. The commenters were concerned that the proposal was too broad and that health IT vendors and other entities could become QCDRs without MIPS eligible clinician-led professional organization participation. Several commenters requested that CMS reduce or eliminate the criteria for third party intermediaries.

    Response: Health IT vendors and QCDRs are distinct submission mechanisms that have differing requirements and capabilities under MIPS. Generally, QCDRs which are engaged in quality measurement activities are held to standards related to these services. Health IT vendors provide services related to the development, implementation, and support of health IT systems. Some health IT vendors offer data submission services to CMS programs as a part of their support of health IT services. Other health IT vendors maintain a range of data transmission, aggregation, and calculation services or functions separate from the EHR immediately installed in the practice location, for example those operating a cloud-based system. Still other health IT vendors offer certified health IT products which allow a health care provider to autonomously manage their EHR system and electronically extract or export and report data to CMS programs directly from their CEHRT using functions which meet CMS form and manner requirements. Still other scenarios and potential options related to other authorized third parties not involved in the direct provision of EHR systems may be available to support MIPS participation. For example, some HIE organizations are exploring the option of supporting provider data submission by establishing partnerships with health IT vendors to submit data on behalf of their customers. The policies noted in this section which apply to health IT vendors apply to other authorized third parties and across each of these circumstances and other potential related scenarios. In this section of the final rule with comment period, we are explaining only that health IT vendors are accountable to ensure that their products and services meet the form and manner required regardless of which scenario or submission method is applicable when submitting on behalf of a MIPS eligible clinician or group.

    We note that form and manner requirements for the submission are related to the requirements defined for the measures and activities in each performance category within this final rule with comment period. Therefore, we note that while there is no specific standard or certification requirement for a health IT vendor or other authorized third party submitting data on behalf of an eligible clinician or group beyond the form and manner specifications for the submission mechanism, the eligible clinician or group must still meet the category or measure specific requirements. For example, within the quality performance category there are different requirements for CQMs which must be met depending on the measures an MIPS eligible clinician or group chooses to report, and the form and manner must be used for the submission mechanism appropriate for reporting those selected measures. This is consistent with prior CMS policy for PQRS and the EHR Incentive Programs, and is reflected in the CEHRT definition for the quality payment programs at § 414.1305 for MIPS eligible clinicians or groups supported by these services. For example, the CQM submission requirement within that CEHRT definition states at paragraph (1)(ii)(B)(3) that a CQM submission meets certain certification criteria and can be electronically accepted by CMS if the data is submitted electronically. We reiterate that there are no certification criteria associated with measurement for the improvement activities performance category. For further information on how the CEHRT definition applies for MIPS eligible clinicians and groups under the quality performance category, we direct readers to the end-to-end electronic reporting bonus in section II.E.6. of this final rule with comment period. For further information on how the use of CEHRT is applicable for MIPS eligible clinicians and groups for the advancing care information performance category, we direct readers to section II.E.5.g. of this final rule with comment period. Finally, for information on how the use of CEHRT is applicable for APM Entities, we direct readers to section II.F.4.b. of this final rule with comment period.

    We appreciate the commenters' concern, however, about health IT vendors becoming QCDRs without the sponsorship or governance of a professional organization, and we refer those commenters to section II.E.9.a.(7) of this final rule with comment period, where we discuss the requirements for allowable partnerships between IT vendors and specialty organizations. We further disagree with setting no requirements for any third party intermediary, as these policies ensure both that the MIPS eligible clinician is provided appropriate supports and protections and that CMS is able to accept and use the data submitted on their behalf.

    Comment: A few commenters stated that the qualification requirements for companies in the general health IT vendor category (in contrast to requirements for PQRS submitters) are unclear. Many commenters requested that CMS clarify what constitutes a submission method that would need to be certified, and requested clarification and additional details regarding what third party submitters must do when submitting data for all performance categories. While some believed that EHR vendors can add improvement activities criteria into their systems fairly easily, other commenters stated there is no current certification for improvement activities data and it is unclear how a MIPS eligible clinician could use CEHRT to submit improvement activities data without criteria for how to record or transmit such data. Some commenters requested an interim rule defining the specific requirements to become certified for MIPS data submission.

    Some commenters agreed with the health IT vendor criteria at § 414.1400(a)(2), but raised three concerns. First, the commenters were concerned about the ability of health IT vendors to incorporate mechanisms for reporting the new advancing care information and improvement activities performance categories into QRDAs under the current reporting deadlines and without new implementation guides. Second, commenters noted that the most recent draft of the HL7 Quality Reporting Document Architecture (QRDA) had not incorporated these new performance categories as of the publication of the MIPS proposed rule and noted that this would be essential for facilitating vendor efforts to make software modifications. Third, once the QRDA is updated to accommodate MIPS, it will be important for CMS to test and validate the reporting standards related to the inclusion of these new performance categories.

    Response: In our proposal, we stated our intent to encourage health IT vendors to design systems that are able to accept new types of EHR data (for example, improvement activities and advancing care information performance categories) from MIPS eligible clinicians and groups; this would be in addition to the quality measure data that we already can accept directly through electronic reporting from CEHRT. Therefore, we proposed at § 414.1400(a)(2) to expand health IT vendors' capabilities by allowing health IT vendors to submit data on measures, activities, or objectives for the quality, improvement activities, or advancing care information performance categories. We also proposed to require that EHR-based data submission (whether transmitted directly from the EHR or from a data intermediary) meet the CEHRT definition before it can be used to submit quality measures, advancing care information, and improvement activities performance category data for MIPS. However, as noted in public comments, no certification criterion currently exists that is specific to the improvement activities performance category of MIPS, and while there are criteria required for the calculation of measures within the advancing care information performance category, there is not a submission format certification requirement. We do not intend to add new burden on developers who are already working toward certification to the 2015 Edition certification criteria, nor do we intend to require MIPS eligible clinicians to obtain new certified Health IT Modules beyond the current CEHRT definition. For these reasons, we are finalizing a modified version of our proposal to require use of CEHRT for EHR-based data submission purposes for the 2017 performance period. We will continue to require the use of CEHRT for those items that the MIPS eligible clinician or group is reporting where that criterion is part of the current CEHRT definition, which covers the 2014 Edition and 2015 Edition certification criteria for CY 2017 and the 2015 Edition only for CY 2018 and subsequent years, as defined for the Quality Payment Program at § 414.1305. For instance, CEHRT may be required when submitting CMS eCQMs for which certification criteria exist, depending on the selected submission mechanism (see section II.E.6. of this final rule with comment period for further details on end-to-end electronic reporting). The CEHRT definition also includes certification criteria for calculating the advancing care information performance category objectives and measures included in the certification criteria (see section II.E.5.g. of this final rule with comment period for further details on the advancing care information performance category objectives and measures). We direct readers to section II.E.5.g. of this final rule with comment period for further discussion of the CEHRT definition adopted for the Quality Payment Program at § 414.1305.

    However, we do agree with the commenters who note that the inclusion of improvement activities performance category reporting should be allowed for health IT vendors and we intend to allow MIPS eligible clinicians and groups the option to submit improvement activities performance category data in a manner similar to current reporting. In this way, we maintain our intent to encourage health IT vendors to design systems to be able to accept and support new types of data reporting within EHR systems. We further note that for MIPS, we are maintaining the requirement that submissions be reported in the form and manner specified by CMS. That form and manner will be specified by CMS for each available submission method through subregulatory guidance consistent with prior CMS quality reporting programs.

    We appreciate the commenters' feedback regarding the use of the QRDA and note that in prior years the CMS Implementation Guide (IG) included updated specifications for the QRDA that are similar to the types of updates that could potentially be included for reporting on the advancing care information and improvement activities performance categories in MIPS. We do, however, understand and acknowledge the commenters' concern about the timing of development of the IG, as well as the need for adequate time for development, testing, and verification of any future updates to the QRDA Implementation Guide. We note that we will provide additional details related to the Quality Payment Program data submission standard at QualityPaymentProgram.cms.gov. We will continue to engage the vendor community as we implement MIPS in order to ensure that developers are aware of applicable criteria pertaining to the advancing care information, quality, and improvement activities performance categories and to obtain feedback and input on potential timing and development requirements to support reporting. We refer readers to section II.E.5.g. of this final rule with comment period for further discussion on CEHRT in the MIPS program.

    Comment: Some commenters recommended that CMS collaborate more with health IT vendors. The commenters acknowledged that groups have a very difficult time finding a vendor that knows the requirements and can assist the groups. Other commenters stated they are concerned that allowing health IT vendors to use intermediaries to submit data to CMS would result in cost, waste, and risk of security breaches.

    Response: We thank the commenters for their suggestions and note that we will continue to engage the vendor and health IT community as we implement MIPS. We appreciate the commenters expressing this concern and recognize that health IT vendors provide varying types of functions. We encourage MIPS eligible clinicians and groups to review the types of functions and services a health IT vendor would be able to provide before selecting that vendor, in order to ensure that the needs of the MIPS eligible clinician or group can be met. For MIPS eligible clinicians, groups, or the supporting health IT vendors that do not have the functionality to submit data to CMS, the use of intermediaries may be necessary and beneficial. However, we note that any entity providing submission services on behalf of a MIPS eligible clinician or group must be authorized by the MIPS eligible clinician or group as a surrogate or proxy to submit data to CMS on their behalf. In addition, when a MIPS eligible clinician (a HIPAA covered entity) or health IT vendor (a HIPAA business associate) shares ePHI with an intermediary (another business associate) to perform a function for the covered entity, all of these entities must comply with the HIPAA Privacy, Security, and Breach Notification Rules and all CMS policies pertaining to privacy and security.

    Comment: Some commenters were concerned that they believed CMS was moving away from the use of ONC-certified health IT products for calculating measures and requested that CMS work with ONC to clarify in the final rule with comment period what the expectations/requirements are for third party calculation and submission. The commenters noted that it was unclear if a third party submitting on behalf of an MIPS eligible clinician or group will be receiving raw data from CEHRT and would be required to calculate numerators and denominators, or if they would be receiving already calculated data from CEHRT where the third party intermediary could pass the already calculated data electronically to CMS.

    Response: First, our goal is to encourage flexibility for the MIPS eligible clinician and the health IT vendors that support the MIPS eligible clinician. We therefore note that either scenario described, whether the third party performs the calculations or the third party submits already calculated data, would be acceptable for MIPS eligible clinicians and groups reporting to MIPS. We note that in either case the third party would not be required to also be separately certified; however, the third party may test the calculations or certify to the calculations if there is an applicable certification criterion defined in the CEHRT definition for the Quality Payment Program at § 414.1305. While testing and certification are optional, we do strongly encourage this action to support accurate measurement where certification is available for the measure. In either case, the data must be appropriately electronically exported or extracted from the MIPS eligible clinician's CEHRT. This means that if the MIPS eligible clinician is performing an export of raw data, the appropriate CEHRT function must be used if applicable for that data transmission, and if the MIPS eligible clinician is calculating and then exporting the data, the appropriate CEHRT function must be used if applicable for that calculation and data transmission. We refer readers to section II.E.5.g. of this final rule with comment period for further information specific to the capture, calculation, and submission of CQMs related to the end-to-end electronic reporting bonus within the quality performance category.
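
    The two acceptable scenarios described above can be illustrated side by side: in one, the third party receives patient-level records and computes the numerator and denominator itself; in the other, it passes through results already calculated by the CEHRT. The following sketch is a hypothetical illustration of those two paths; the record fields and the MeasureResult structure are invented for this example and do not represent a CMS specification.

        from dataclasses import dataclass
        from typing import Dict, List

        @dataclass
        class MeasureResult:
            measure_id: str
            numerator: int
            denominator: int

        def calculate_from_raw(measure_id: str, records: List[Dict]) -> MeasureResult:
            """Scenario 1: the third party computes the result from patient-level data."""
            denominator = sum(1 for r in records if r["in_denominator"])
            numerator = sum(1 for r in records if r["in_denominator"] and r["met_numerator"])
            return MeasureResult(measure_id, numerator, denominator)

        def pass_through(precalculated: Dict) -> MeasureResult:
            """Scenario 2: the CEHRT already calculated the result; the third party relays it."""
            return MeasureResult(
                precalculated["measure_id"],
                precalculated["numerator"],
                precalculated["denominator"],
            )

        # Fictional data showing that both paths yield the same kind of result object.
        raw = [{"in_denominator": True, "met_numerator": True},
               {"in_denominator": True, "met_numerator": False}]
        print(calculate_from_raw("EXAMPLE-2", raw))
        print(pass_through({"measure_id": "EXAMPLE-2", "numerator": 1, "denominator": 2}))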

    We note that it is not our intent to move away from the use of certified health IT for calculating measures, but rather to recognize and accommodate the variability among MIPS eligible clinicians in technology use and adoption. Through these policies, we are seeking to reduce the burden and remove entry barriers to participation where possible for those MIPS eligible clinicians who may be engaging with CEHRT, meaningful use, quality measurement, and improvement activities for the first time as an individual MIPS eligible clinician or group. By allowing for greater flexibility in the first few years of the program, we are establishing a glide path toward expanded adoption, implementation, use, and innovation of certified health IT. In this way, we are allowing adequate time for MIPS eligible clinicians, health IT vendors, and other third party entities like QCDRs to develop, test, implement, and monitor health IT systems designed to support participation in MIPS. However, where relevant standards have been established as part of the certification program, we believe that applying these standards will support more reliable, accurate quality measurement, and we will work with Quality Payment Program participants and the health IT vendor community to continue to expand the availability and applicability of these tools. Finally, we are maintaining our focus on electronic reporting that is standards-based. We also believe this approach encourages increased adoption and use of advanced health IT among MIPS eligible clinicians and APM entities. We intend to publish the specific standards that third party intermediaries will need to follow for data submission through subregulatory guidance, and will work with health IT vendors to develop, test, and verify that guidance for MIPS data submission.

    Comment: A few commenters sought clarification on the statement that EHR-based systems are required to be certified for multiple programs. Other commenters stated that beyond what is already required for CEHRT certification, they did not believe that CMS should force third party intermediaries to implement reporting capabilities that may be outside of their organizational and client priorities.

    Response: First, the CEHRT definition is what MIPS eligible clinicians and groups must use to meet certain requirements of MIPS related to the use of certified health IT. The CEHRT definition is not applicable to a QCDR, registry, or other third party providing health IT support services to an MIPS eligible clinician or group, although these groups may choose to develop or adopt certain elements of certified health IT in order to support reliable standards based measurement where relevant certification criteria exist.

    Second, there are not separate CEHRT requirements for the separate CMS programs that reference CEHRT. The ONC-established certification criteria which are included in the CEHRT definition for the Quality Payment Program at § 414.1305 and required for MIPS are not specific to a single CMS program. Instead, ONC certifies individual Health IT Modules to perform specific functions using specific standards and implementation specifications that are part of the ONC Editions of certification criteria, which are required only for those health IT vendors that are seeking to have their health IT certified. CMS then defines a package or collection of those certified Health IT Modules which a MIPS eligible clinician or group must possess to meet the CMS definition of CEHRT. The definition of CEHRT is currently substantively the same for MIPS, the EHR Incentive Programs, and Advanced Alternative Payment Models. This means that if a MIPS eligible clinician has an EHR system that meets the CEHRT definition, that system can support participation in each program (MIPS, the EHR Incentive Programs, or an Advanced APM) for which that clinician is eligible. The MIPS eligible clinician, or a health IT vendor submitting on their behalf, can report using data from that CEHRT for any such program without any additional certification, as long as the submission meets the form and manner requirements established by CMS.
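
    One way to think about the CEHRT definition described above is as a required set of certified functions that the clinician's collection of Health IT Modules must cover. The sketch below checks a hypothetical product list against a hypothetical set of required certification criteria; the criterion identifiers and product data are illustrative only and do not reproduce the actual CEHRT definition at § 414.1305.

        from typing import Dict, List, Set

        # Hypothetical required criteria standing in for the CEHRT definition's contents.
        REQUIRED_CRITERIA: Set[str] = {"record_and_export_cqm", "import_and_calculate_cqm", "report_cqm"}

        def meets_cehrt_definition(products: List[Dict[str, Set[str]]]) -> bool:
            """Return True if the combined certified modules cover every required criterion."""
            certified: Set[str] = set()
            for product in products:
                certified |= product["certified_criteria"]
            return REQUIRED_CRITERIA <= certified

        # Example: two modules that together cover the hypothetical requirements.
        modules = [
            {"certified_criteria": {"record_and_export_cqm"}},
            {"certified_criteria": {"import_and_calculate_cqm", "report_cqm"}},
        ]
        print(meets_cehrt_definition(modules))  # True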

    Finally, there are multiple data submission methods available to MIPS eligible clinicians and groups. These multiple paths for reporting are designed to allow for flexibility to select the method most relevant for their practice, processes, and available features. For each of these submission methods, any submission on behalf of an MIPS eligible clinician or group must meet the form and manner requirements for data submission to us for that method. We understand a third party may offer a wide range of services to an MIPS eligible clinician or group beyond data submission to us. However, if that third party is offering data submission services for MIPS eligible clinicians, they must meet the form and manner specifications related to the chosen submission method. It is essential to maintain form and manner requirements specific to each of those methods in order to ensure the data can be accurately received, validated, and used to establish the appropriate payment adjustment for the performance period. We believe the flexibility of multiple paths will help to minimize burden on MIPS eligible clinicians, groups, and authorized third party intermediaries submitting on their behalf.

    Comment: Some commenters believed that since the final list of quality measures will not be published until the end of the year, it will be impossible to make any EHR configuration changes that may be necessary for the reporting of the measures. In addition, a few commenters noted that EHR vendors will not have sufficient time to develop dashboards for tracking quality and advancing care information performance in MIPS before January 1, 2017. The commenters believed that EHR vendors will have to develop measurement logic for tracking and reporting group reporting of advancing care information before March 1, 2018, and that several improvement activities that rely on the use of EHRs may also create a need for system changes.

    Response: We recognize and can appreciate the concerns raised by the commenters. We have instituted numerous flexibilities in the transition year of MIPS to account for additional time needed for development and implementation. In addition, CMS and ONC will work together with health IT vendors on development, testing, and verification pathways for measure calculation and group reporting to support MIPS data reporting.

    Comment: Some commenters were concerned about the complexity of how CEHRT interacts with other products and registries and what capabilities should be certified. Other commenters recommended that new ONC EHR certification criteria require meaningful data flow into registries and QCDRs, and that formats be amenable to the CMS Web Interface to reduce data burden. One commenter requested ongoing adoption of data interoperability standards for clinical data registries so they become interoperable with structured EHR clinical data.

    Response: At present, the definition of CEHRT established by CMS for MIPS at § 414.1305 aligns with ONC certification criteria and includes requirements for a range of document formats which could be leveraged to engage with third parties such as registries, HIE organizations, and even with other health care providers who may not yet have access to certified health IT. In this way, technology that meets the definition of CEHRT supports the interoperable electronic exchange of data among varied settings across multiple platforms. CMS and ONC are working together to continue to advance the health IT infrastructure and support interoperability. We recognize that in the present environment, not all data received and used by qualified registries is derived from EHRs and readiness for certified health IT adoption among QCDRs varies greatly within the industry. While we agree that electronic transmission of the data elements needed to calculate quality measures would reduce burden on MIPS eligible clinicians and be potentially beneficial, we do not think it is appropriate for qualified registries to be required at this time to adopt a potentially costly change to their data collection model without adequate time to plan, test, implement, and ensure the efficacy of any such transition. We will continue to review and analyze readiness and engage with stakeholders to consider future development and needs.

    After consideration of the comments regarding health IT vendors that obtain data from MIPS eligible clinician's CEHRT, we are finalizing that the MIPS eligible clinicians who choose the EHR submission mechanism would be required to have certified EHR technology meeting the CEHRT definition for the quality payment program at § 414.1305 as proposed. In addition, we are finalizing the proposed policies for submission with modifications as follows.

    We are not finalizing a requirement for any certification criteria related to the submission of data beyond those which are currently defined within the CEHRT definition for the quality payment program at § 414.1305.

    In addition, we are further noting that the requirements within the CEHRT definition apply to the MIPS eligible clinician or group, not to the health IT vendor or other third party intermediary supporting that MIPS eligible clinician with data submission. We are finalizing at § 414.1325(b)(2) and (c)(2) to allow MIPS eligible clinicians and groups to submit data for the improvement activities performance category, and data exported or extracted from CEHRT for the quality and advancing care information performance categories, either directly to CMS or with the support of a third party intermediary such as a health IT vendor or other authorized third party.

    Additionally, we are finalizing modifications to our proposal at § 414.1400(a)(1) and (a)(2) that a health IT vendor or other authorized third party intermediary may submit data for the improvement activities performance category, and data exported or extracted from CEHRT for the quality and advancing care information performance categories, on behalf of an MIPS eligible clinician or group.

    We are finalizing at § 414.1400(a)(4) that health IT vendors and other authorized third party intermediaries that obtain data exported or extracted from a MIPS eligible clinician's CEHRT would have to meet all criteria designated by us as a condition of their qualification or approval to participate in MIPS as a third party intermediary. As noted, this would include authorization by the MIPS eligible clinician or group to submit on their behalf and also includes submitting data in the form and manner specified at § 414.1400(a)(4)(ii).

    Finally, we are finalizing a modification to the proposed policy regarding the requirements for health IT vendors to state that health IT vendors or other authorized third party intermediaries that are submitting on behalf of an MIPS eligible clinician or group must be able to do the following to submit MIPS data to us:

    • For measures, activities, and objectives under the quality, advancing care information, and improvement activities performance categories, as proposed at § 414.1400(a)(4)(i), if the data is exported or extracted from certified EHR technology, the health IT vendor or third party must be able to indicate this data source; and

    • Transmit the data electronically exported or extracted from the CEHRT to us directly or through a data intermediary in the CMS-specified form and manner.

    c. Qualified Registries

    We proposed to define a qualified registry at § 414.1305 as a medical registry, a maintenance of certification program operated by a specialty body of the American Board of Medical Specialties or other data intermediary that, with respect to a particular performance period, has self-nominated and successfully completed a vetting process (as specified by CMS) to demonstrate its compliance with the MIPS qualification criteria specified by CMS for that performance period. The registry must have the requisite legal authority to submit MIPS data (as specified by CMS) on behalf of a MIPS eligible clinician or group to CMS. In addition, we proposed at § 414.1400(a)(2) to expand a qualified registry's capabilities by allowing qualified registries to submit data on measures, activities, or objectives for any of the following MIPS performance categories:

    • Quality;

    • Improvement Activities; or

    • Advancing care information, if the MIPS eligible clinician or group is using certified EHR technology.

    The following is a summary of the comments we received regarding the proposed qualified registry definition and expanded capabilities proposal.

    Comment: Some commenters agreed with the proposed definition of a qualified registry.

    Response: We appreciate the commenters' support.

    Comment: Several commenters agreed with the proposal to allow third party intermediaries, such as qualified registries, to submit data for the performance categories of quality, advancing care information, and improvement activities. The commenters believed allowing MIPS eligible clinicians to use a single, third party submission method reduces the administrative burden on MIPS eligible clinicians, facilitates consolidation and standardization of data from disparate EHRs and other systems, and enables the third parties to provide timely, actionable feedback to MIPS eligible clinicians on opportunities for improvement in quality and value.

    Response: We thank the commenters for their support.

    Comment: A few commenters did not support the criteria that qualified registries must have the capability to submit for all performance categories. The commenters believed that while this could reduce burden for MIPS eligible clinicians, choosing to support one or more performance categories is a business decision and should not be regulated. In addition, the commenters stated this would limit the MIPS eligible clinician's choice in the early years of MIPS, as not all third party entities would necessarily be able to meet the criteria for submittal for all three performance categories.

    Response: While we do encourage qualified registries to be able to support all performance categories, we do not require that all MIPS performance categories be reported by a qualified registry. Rather, we require that a qualified registry be able to report the quality performance category and note that it is the registry's choice whether to be qualified for the advancing care information and improvement activities performance categories.

    After consideration of the comments on the qualified registry policies above we are finalizing the policies at §§ 414.1305 and 414.1400(a)(2) as proposed.

    (1) Establishment of an Entity Seeking To Qualify as a Registry

    We proposed at § 414.1400(h) that in order for an entity to become qualified for a given performance period as a qualified registry, the entity must be in existence as of January 1 of the performance period for which the entity seeks to become a qualified registry (for example, January 1, 2017, to be eligible to participate for purposes of performance periods beginning in 2017). The qualified registry must have at least 25 participants by January 1 of the performance period. These participants do not necessarily need to be using the qualified registry to report MIPS data to us; rather, they need to be submitting data to the qualified registry for quality improvement. We also proposed a qualified registry must provide attestation statements from the qualified registry/MIPS eligible clinicians during the data submission period that all of the data (quality measures, improvement activities, and advancing care information measures and objectives, if applicable) and results are accurate and complete.
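
    The following is a minimal illustrative sketch, in Python, of the establishment criteria described above (entity in existence as of January 1 of the performance period and at least 25 participants by that date). It is not part of the regulatory text, and the function and field names are hypothetical.

        # Minimal sketch (not CMS code); names are hypothetical.
        from datetime import date

        def meets_establishment_criteria(established_on: date,
                                         participants_on_jan_1: int,
                                         performance_year: int) -> bool:
            """Check the proposed establishment criteria for a given performance period."""
            jan_1 = date(performance_year, 1, 1)
            in_existence = established_on <= jan_1              # in existence as of January 1
            enough_participants = participants_on_jan_1 >= 25   # 25-participant minimum
            return in_existence and enough_participants

        # Example: an entity established June 1, 2016 with 40 participants on
        # January 1, 2017 meets both criteria for the 2017 performance period.
        print(meets_establishment_criteria(date(2016, 6, 1), 40, 2017))  # True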

    The following is a summary of the comments we received regarding our proposal for the establishment of an entity seeking to qualify as a registry.

    Comment: A few commenters expressed concern with the proposed 25-participant minimum for an entity to become qualified as a qualified registry for a given performance period, as the commenters believed the criterion was arbitrary. Further, the commenters questioned whether participants should be required to be in place on January 1, stating that registries would have the potential to pull historical data from the performance period.

    Response: As the MIPS program relies on the ability of CMS to receive accurate data for MIPS eligible clinicians, we believe it is important to approve established entities that have demonstrated their ability to collect and calculate data. We require a 25-participant minimum for entities to self-nominate as a qualified registry because we have found in past programs that 25 participants is an adequate threshold to prevent small clinical practices from attempting to serve as their own registry. We are concerned that smaller practices may not have the IT expertise to report their data and that, in such cases, there would be no intermediary to validate the submitted data. Additionally, having existing registry members will help to ensure that the entity has at least some experience collecting and calculating quality measure data.

    After consideration of the comments received regarding our proposal for the establishment of an entity seeking to qualify as a registry, we are finalizing the policies at § 414.1400(h) as proposed.

    (2) Self-Nomination Period

    For the 2017 performance period, we proposed at § 414.1400(g) a self-nomination period from November 15, 2016 until January 15, 2017. For future years of the program, starting with the 2018 performance period, we proposed to establish the self-nomination period from September 1 until November 1 of the year prior to the performance period for which the qualified registry seeks to be qualified. Entities that desire to qualify as a qualified registry for purposes of MIPS for a given performance period would need to provide all requested information to us at the time of self-nomination and would need to self-nominate for that performance period. Having qualified as a qualified registry does not automatically qualify the entity to participate in subsequent MIPS performance periods. For example, a qualified registry may choose not to continue participation in the program in future years, or the qualified registry may be precluded from participation in a future year due to multiple data or submission errors as noted below. As such, we believe an annual self-nomination process is the best process to ensure accurate information is conveyed to MIPS eligible clinicians and accurate data is submitted to MIPS.

    We proposed to require further information of qualified registries at the time of self-nomination. All self-nomination information must be submitted to [email protected]. If technically feasible, we will accept self-nomination information via a web-based tool; we will provide any further information on the web-based tool at QualityPaymentProgram.cms.gov. If an entity becomes qualified as a qualified registry, it would need to sign a statement confirming this information is correct prior to us listing its qualifications on our Web site. Once we post the qualified registry on our Web site, including the services offered by the qualified registry, we would require the qualified registry to support these services/measures for its clients as a condition of the entity's qualification as a qualified registry for purposes of MIPS. Failure to do so will preclude the qualified registry from participation in MIPS in the subsequent performance year.

    We did not receive any comments regarding our proposals for the qualified registry self-nomination period. Therefore, we are finalizing the policies at § 414.1400(g) as proposed.

    (3) Information Required at the Time of Self-Nomination

    We proposed that a qualified registry must provide the following information to us at the time of self-nomination:

    • Organization Name (Specify Sponsoring Organization name and software vendor name if the two are different. For example, a specialty society in collaboration with a software vendor).

    • MIPS performance categories (that is, categories for which the entity is self-nominating to report. For example, quality measures, advancing care information, or improvement activities).

    • Performance Period.

    • Vendor Type (for example, qualified registry).

    • Provide the method(s) by which the entity obtains data from its customers for each performance category for which it is approved: Claims; web-based tool; practice management system; CEHRT; other (please explain). If a combination of methods (Claims, web-based tool, Practice Management System, CEHRT, or other) is utilized, please state which method(s) the entity utilizes to collect data (performance numerator and denominator).

    • Indicate the method the entity will use to verify the accuracy of each TIN/NPI and/or TIN it is intending to submit (for example, National Plan and Provider Enumeration System (NPPES), CMS claims, tax documentation).

    • Describe the method the entity will use to accurately calculate performance rates for quality measures based on the appropriate measure type and specification. For composite measures or measures with multiple performance rates, the entity must provide us with the methodology the entity uses to calculate these composite measures and measures with multiple performance rates. The entity should be able to report to us a calculated composite measure rate, if applicable.

    • Describe the method that the entity will use to accurately calculate performance data for improvement activities and advancing care information performance categories based on the appropriate parameters or activities.

    • Describe the process that the entity will use for completion of a randomized audit of a subset of data prior to the submission to us (for all performance categories the qualified registry is submitting data on; that is, quality, improvement activities, and advancing care information, as applicable). Periodic examinations may be completed to compare patient record data with submitted data or ensure MIPS quality measures or other performance category (improvement activities and advancing care information) activities, measures, or objectives were accurately reported and performance calculated based on the appropriate measure specifications (that is, accuracy of numerator, denominator, and exclusion criteria) or performance category criteria.

    • Provide information on the entity's process for data validation for both individual MIPS eligible clinicians and groups within a data validation plan. For example, for individuals, it is encouraged that 3 percent of the MIPS eligible clinicians submitted to CMS by the qualified registry be sampled, with a minimum sample of 10 TIN/NPIs or a maximum sample of 50 MIPS eligible clinicians. For each MIPS eligible clinician sampled, it is encouraged that 25 percent of the MIPS eligible clinician's patients (with a minimum sample of five patients or a maximum sample of 50 patients) be reviewed for all measures applicable to the patient (an illustrative sketch of these sampling parameters follows this list).

    • Provide the results of the executed data validation plan by May 31st of the year following the performance period. If the results of the qualified registry's validation reveal inaccuracy or low compliance, provide to us an improvement plan. Failure to implement improvements may result in the qualified registry being placed in a probationary status or disqualification from future participation.
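
    For illustration only, the following Python sketch applies the encouraged data validation sampling parameters described in the list above (3 percent of MIPS eligible clinicians with a minimum of 10 and a maximum of 50, and 25 percent of each sampled clinician's patients with a minimum of five and a maximum of 50). How an entity rounds and bounds these samples in practice is an assumption of this sketch, not a CMS specification.

        # Illustrative only; rounding and clamping choices are assumptions.
        def clamp(value, lower, upper):
            return max(lower, min(value, upper))

        def clinician_sample_size(total_clinicians_submitted):
            # Encouraged: 3 percent of clinicians, minimum 10, maximum 50,
            # never more than the number actually submitted.
            return min(total_clinicians_submitted,
                       clamp(round(0.03 * total_clinicians_submitted), 10, 50))

        def patient_sample_size(total_patients_for_clinician):
            # Encouraged: 25 percent of the clinician's patients, minimum 5, maximum 50.
            return min(total_patients_for_clinician,
                       clamp(round(0.25 * total_patients_for_clinician), 5, 50))

        # Example: 400 clinicians submitted -> sample 12 clinicians; a sampled
        # clinician with 30 patients -> review 8 of that clinician's patients.
        print(clinician_sample_size(400), patient_sample_size(30))  # 12 8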

    We did not receive any comments on the proposal regarding information required at the time of self-nomination for a qualified registry. Therefore, we are finalizing the above policies as proposed.

    (4) Qualified Registry Criteria for Data Submission

    Further, we proposed that a qualified registry must perform the following functions:

    • For measures, activities, and objectives under the quality, advancing care information, and improvement activities performance categories, as proposed at § 414.1400(a)(4)(i), if the data is derived from CEHRT, the qualified registry must be able to indicate this data source.

    • A qualified registry submitting MIPS quality measures that are risk-adjusted (and have the risk-adjusted variables and methodology listed in the measure specifications) must submit the risk-adjusted measure results to CMS when submitting the data for these measures.

    • Submit to us quality measures and activities data on all patients, not just Medicare patients.

    • Submit quality measures, advancing care information, or improvement activities performance categories data and results to us in the applicable MIPS performance categories for which the qualified registry is providing data.

    • Provide timely feedback, at least four times a year, on all of the MIPS performance categories that the qualified registry will report to us. That is, if the qualified registry will be reporting on data for the improvement activities, advancing care information, or quality performance category, all results as of the performance feedback date should be included in the information sent to the MIPS eligible clinician. The feedback should be given to the individual MIPS eligible clinician or group (if participating as a group) at the individual participant level or group level, as applicable, for which the qualified registry reports. The qualified registry is only required to provide feedback based on the MIPS eligible clinician's data that is available at the time the performance feedback is generated.

    • A qualified registry must comply with any request by us to review the data submitted by the qualified registry for purposes of MIPS in accordance with applicable law. Specifically, data requested would be limited to the minimum necessary for us to carry out, for example, health care operations or health oversight activities.

    • Mandatory participation in ongoing support conference calls hosted by us (approximately one call per month), including an in-person qualified registry kick-off meeting (if held) at our headquarters in Baltimore, MD. More than one unexcused absence could result in the qualified registry being precluded from participation in the program for that year. If a qualified registry is precluded from participation in MIPS, the individual MIPS eligible clinician or group would need to find another entity to submit their MIPS data.

    • Agree that data inaccuracies including (but not limited to) TIN/NPI mismatches, formatting issues, calculation errors, data audit discrepancies affecting in excess of 3 percent of the total number of MIPS eligible clinicians submitted by the qualified registry may result in notations on our qualified registry posting of low data quality and would place the qualified registry on probation (if they decide to self-nominate for the next program year). If the qualified registry does not reduce their data error rate below 3 percent in the subsequent year, they would continue to be on probation and have their listing on the CMS Web site continue to note the poor quality of the data they are submitting for MIPS. Data errors affecting in excess of 5 percent of the MIPS eligible clinicians submitted by the qualified registry may lead to the disqualification of the qualified registry from participation in the following year's program. As we gain additional experience with qualified registries, we intend to revisit and enhance these thresholds in future years.

    • Be able to report at least six quality measures including one cross-cutting measure and one outcome measure. If an outcome measure is not available, be able to report another high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures).

    • Enter into and maintain with its participating MIPS eligible clinicians an appropriate Business Associate agreement that provides for the qualified registry's receipt of patient-specific data from an individual MIPS eligible clinician or group, as well as the qualified registry's disclosure of quality measure results and numerator and denominator data and/or patient specific data on Medicare and non-Medicare beneficiaries on behalf of MIPS eligible clinicians or group.

    • Obtain and keep on file signed documentation that each holder of an NPI whose data are submitted to the qualified registry has authorized the qualified registry to submit quality measure results, improvement activities measure and activity results, advancing care information objective results and numerator and denominator data or patient-specific data on Medicare and non-Medicare beneficiaries to us for the purpose of MIPS participation. This documentation should be obtained at the time the MIPS eligible clinician or group signs up with the qualified registry to submit MIPS data to the qualified registry and must meet any applicable laws, regulations, and contractual business associate agreements. Groups participating in MIPS via a qualified registry may have their group's duly authorized representative grant permission to the qualified registry to submit their data to us. If submitting as a group, each individual MIPS eligible clinician does not need to grant their individual permission to the qualified registry to submit their data to us.

    • Not be owned and managed by an individual locally-owned single specialty group (for example, single specialty practices with only one practice location or solo practitioner practices are prohibited from self-nominating to become a MIPS qualified registry).

    • Be able to separate out and report on all payers, including Medicare Part B FFS patients and non-Medicare patients.

    • Provide the measure numbers for the MIPS quality measures on which the qualified registry is reporting.

    • Provide the measure title (and specialty-specific measure set title, if applicable) for the MIPS quality measures and improvement activities (if applicable) on which the qualified registry is reporting.

    • Indicate if the qualified registry will be reporting the advancing care information component measures and objectives.

    • Report the number of eligible instances (reporting denominator).

    • Report the number of instances a quality service is performed (performance numerator).

    • Report the number of performance exclusions, meaning the quality action was not performed for a valid reason as defined by the measure specification.

    • Comply with a CMS-specified secure method for data submission, such as submitting the qualified registry's data in an XML file.

    • Sign a document verifying the qualified registry's name, contact information, cost for MIPS eligible clinicians or groups to use the qualified registry, services provided, and the specialty-specific measure sets the qualified registry intends to report. Once this information is posted on the CMS Web site, the qualified registry will need to support the measures or measure sets confirmed by the qualified registry. Failure to do so may preclude the qualified registry from participation in MIPS in the subsequent year.

    • Must provide attestation statements during the data submission period that all of the data (quality measures, improvement activities, and advancing care information measures and objectives, if applicable) and results are accurate and complete.

    • For purposes of distributing performance feedback to MIPS eligible clinicians, collect a MIPS eligible clinician's email address(es) and have documentation from the MIPS eligible clinician authorizing the release of his or her email address.

    • Be able to calculate and submit measure-level reporting rates or, upon request, the data elements needed to calculate the reporting and performance rates by TIN/NPI and/or TIN.

    • Be able to calculate and submit, by TIN/NPI or TIN, a performance rate (that is, the percentage of a defined population that receives a particular process of care or achieves a particular outcome based on a calculation of the measure's numerator and denominator specifications) for each measure on which the TIN/NPI and/or TIN reports or, upon request, the Medicare and non-Medicare level data elements needed to calculate the performance rates (see the illustrative sketch following this list).

    • Provide the performance period start date the qualified registry will cover.

    • Provide the performance period end date the qualified registry will cover.

    • Report the number of instances in which the applicable submission criteria were not met, for example, the quality measure was not reported and a performance exclusion did not apply.

    • For data validation purposes, provide information on the entity's sampling methodology. For example, it is encouraged that 3 percent of the MIPS eligible clinicians be sampled with a minimum sample of 10 MIPS eligible clinicians or a maximum sample of 50 MIPS eligible clinicians. For each MIPS eligible clinician sampled, it is encouraged that 25 percent of the MIPS eligible clinicians' patients (with a minimum sample of five patients or a maximum sample of 50 patients) be reviewed for all measures applicable to the patient.
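
    For illustration only, the following Python sketch shows how the reported elements named in the list above (eligible instances, performance numerator, performance exclusions, and instances in which the submission criteria were not met) can be combined into a reporting rate and a performance rate. The exact formulas used here are common conventions assumed for this sketch, not the CMS measure specifications, which govern in all cases.

        # Illustrative arithmetic only; measure specifications govern the
        # actual numerator, denominator, and exclusion definitions.
        def reporting_rate(eligible_instances, performance_met, exclusions,
                           performance_not_met):
            # Share of eligible instances for which the measure was reported at all.
            reported = performance_met + exclusions + performance_not_met
            return reported / eligible_instances if eligible_instances else 0.0

        def performance_rate(performance_met, performance_not_met):
            # Exclusions are intentionally left out of the performance-rate
            # denominator; only instances where the quality action was or was
            # not performed are counted.
            denominator = performance_met + performance_not_met
            return performance_met / denominator if denominator else 0.0

        # Example: 200 eligible instances, 150 performance met, 10 exclusions,
        # 30 performance not met, 10 instances not reported at all.
        print(round(reporting_rate(200, 150, 10, 30), 3))  # 0.95
        print(round(performance_rate(150, 30), 3))         # 0.833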

    The following is a summary of the comments we received regarding our proposal for the qualified registry criteria for data submission.

    Comment: One commenter requested assurance that Immunization Registries and Immunization Information Systems (IIS) did not fall into the category of a qualified registry.

    Response: We would like to explain that any organization that would like to become a qualified registry for the MIPS must self-nominate and meet the requirements of qualified registries described within this final rule with comment period.

    Comment: One commenter recommended the Quality Markers program (qualified vendor and a qualified registry under PQRS) as a reporting tool.

    Response: We would like to explain that any organization that would like to become a qualified registry for the MIPS must self-nominate and meet the requirements of qualified registries described within this rule. MIPS eligible clinicians and groups have the option to choose whatever data submission method best suits their practice.

    Comment: A few commenters agreed that a qualified registry must provide attestation statements from the qualified registry or MIPS eligible clinicians during the data submission period that all the data and results are accurate and complete.

    Response: We appreciate the support and are working to streamline this process for registries by allowing the attestation at the time of actual data submission.

    Comment: One commenter was concerned about the cost of the qualified registries and questioned if CMS could provide a qualified registry or EHR at low cost.

    Response: We will take the commenter's suggestion of creating a CMS registry or EHR under consideration for future rulemaking.

    After consideration of the comments regarding qualified registry criteria for data submission we are finalizing the above policies as proposed with one modification. Based on our policies finalized in section II.E.5.b.(3) of this final rule with comment period, we are not requiring MIPS eligible clinicians to submit data on cross-cutting measures. Therefore, we are finalizing at § 414.1335(a)(1)(i) the requirement for registries as follows: Be able to submit results for at least six quality measures including one outcome measure. If an outcome measure is not available, be able to submit results for at least one other high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures). If no outcome measure is available, then the registry must provide a justification for not including an outcome measure.
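
    For illustration only, the following Python sketch expresses the finalized registry requirement described above as a simple check (at least six quality measures including one outcome measure; otherwise at least one other high priority measure, with a justification when no outcome measure is available). The data structure and function name are hypothetical and are not CMS validation logic.

        # Hypothetical check of the finalized measure-set requirement.
        HIGH_PRIORITY_TYPES = {"appropriate use", "patient safety", "efficiency",
                               "patient experience", "care coordination"}

        def measure_set_ok(measures, outcome_measure_available, justification_provided):
            """measures: list of dicts such as {"is_outcome": bool, "type": str}."""
            if len(measures) < 6:
                return False
            if any(m["is_outcome"] for m in measures):
                return True
            if outcome_measure_available:
                return False  # an available outcome measure was not submitted
            has_other_high_priority = any(m["type"] in HIGH_PRIORITY_TYPES for m in measures)
            # With no outcome measure available, another high priority measure
            # plus a justification for the missing outcome measure is needed.
            return has_other_high_priority and justification_provided

        # Example: six measures, none an outcome measure, one patient safety
        # measure, no outcome measure available, justification provided -> True.
        example = [{"is_outcome": False, "type": "patient safety"}] + \
                  [{"is_outcome": False, "type": "process"} for _ in range(5)]
        print(measure_set_ok(example, outcome_measure_available=False,
                             justification_provided=True))  # True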

    d. CMS-Approved Survey Vendors

    As discussed in the proposed rule (81 FR 28188), we proposed to allow groups to report CAHPS for MIPS survey measures. We proposed the data collected on the CAHPS for MIPS survey measures would be transmitted to us via a CMS-approved survey vendor.

    For purposes of MIPS, we proposed to define a CMS-approved survey vendor at § 414.1305 as a survey vendor that is approved by us for a particular performance period to administer the CAHPS for MIPS survey and transmit survey measures data to us. We proposed at § 414.1400(i) that vendors are required to undergo the CMS approval process for each year in which the survey vendor seeks to transmit survey measures data to us. We anticipate retaining the same policies and procedures we currently follow for a CMS-approved survey vendor for PQRS and applying them to a MIPS CMS-approved survey vendor. We proposed the following criteria for a CMS-approved survey vendor for the CAHPS for MIPS survey. A CMS-approved survey vendor for CAHPS for MIPS must:

    (1) Comply with and complete the Vendor Participation Form—We anticipate retaining the same application process and Vendor Participation Form that was required for the CAHPS for PQRS survey. Please refer to http://www.pqrscahps.org/en/participation-form/ for further details. Therefore, we proposed at § 414.1400(i) that all CMS-approved survey vendor applications and materials will be due April 30 of the performance period. However, we sought comments on whether the deadline for CMS-approved survey vendor applications and materials should be earlier, such as prior to the beginning of the performance period. In addition, we proposed the following items will be required for your organization to be a CMS-approved survey vendor of the CAHPS for MIPS survey:

    • Meet all of the Minimum Survey Vendor Business Requirements at the time of the submission of the Vendor Participation Form; and

    • Complete the Vendor Participation Form.

    (2) Comply with the Minimum Survey Vendor Business Requirements—We anticipate retaining the same minimum survey business requirements that were required for the CAHPS for PQRS survey. Please refer to http://www.pqrscahps.org/en/business-requirements/ for further details. We proposed that applicant organizations (survey vendors and subcontractors) must possess all required facilities and systems to implement the CAHPS for MIPS survey. Subcontractors will be subject to the same requirements as the applicant vendor. Organizations that are approved to administer the CAHPS for MIPS survey must conduct all their CAHPS for MIPS business operations within the United States. This requirement applies to all staff and subcontractors. In addition, we proposed to request information regarding:

    • Relevant organization and survey experience.

    • Survey capability and capacity.

    • Adherence to quality assurance guidelines and participation in quality assurance activities.

    • Documentation requirements.

    • Adherence to all protocols and specifications, and agreement to participate in training sessions.

    Specifically, to obtain our approval, we proposed that survey vendors would be required to undergo training, meet our standards on how to administer the survey, and submit a quality assurance plan. We would provide the identified survey vendor with an appropriate sample frame of beneficiaries from each group that has contracted with the survey vendor and elected to participate in the CAHPS for MIPS survey. The survey vendor would also be required to administer the survey according to established protocols to ensure valid and reliable results. More information on quality assurance and protocols can be reviewed at http://www.pqrscahps.org/en/quality-assurance-guidelines/. CMS-approved survey vendors would be supplied with mail and telephone versions of the survey in electronic form, and text for beneficiary pre-notification and cover letters. CAHPS for MIPS surveys can be administered in English, Spanish, Cantonese, Mandarin, Korean, Russian and/or Vietnamese. Survey vendors would be required to use appropriate quality control and security (to include encryption and backup) procedures to maintain survey response data. The data would then be securely sent back to us for scoring and/or validation in accordance with applicable law. To ensure that a survey vendor possesses the ability to transmit survey measures data for a particular performance period, we propose to require survey vendors to undergo this approval process for each year in which the survey vendor seeks to transmit survey measures data to us. We requested comments on this proposal.

    The following is a summary of the comments we received regarding the CMS-approved survey vendors.

    Comment: One commenter recommended that CMS‐approved survey vendors should have 2 years of prior experience selecting random samples based on specific eligibility criteria, work with their contracted client medical group(s) or MIPS eligible clinician(s) to obtain patient data for sampling via HIPAA compliant electronic data transfer processes, and adequately document the sampling process.

    Response: We will take these comments into consideration in future rulemaking.

    After consideration of the comments regarding CMS-approved survey vendors we are finalizing §§ 414.1305 and 414.1400(i) and the above policies as proposed.

    e. Probation and Disqualification of a Third Party Intermediary

    We proposed at § 414.1400(k) a process for placing third party intermediaries on probation and for disqualifying such entities for failure to meet certain standards established by CMS. Specifically, we proposed that if at any time we determine that a third party intermediary (that is, a QCDR, health IT vendor, qualified registry, or CMS-approved survey vendor) has not met all of the applicable criteria for qualification, we may place the third party intermediary on probation for the current performance period and/or the following performance period, as applicable.

    In addition, we proposed that we require a corrective action plan from the third party intermediary to address any deficiencies or issues and prevent them from recurring. We proposed the corrective action plan must be received and accepted by us within 14 days of the CMS notification to the third party intermediary of the deficiencies or probation. Failure to comply with this would lead to disqualification from MIPS for the subsequent performance period.

    We proposed probation to mean that, for the applicable performance period, the third party intermediary would not be allowed to miss any meetings or deadlines and would need to submit a corrective action plan for remediation or correction of deficiencies identified that resulted in the probation.

    In addition, we proposed that if the third party intermediary has data inaccuracies, including (but not limited to) TIN/NPI mismatches, formatting issues, calculation errors, or data audit discrepancies, affecting in excess of 3 percent (but less than 5 percent) of the total number of MIPS eligible clinicians or groups submitted by the third party intermediary, we would annotate on the CMS qualified posting that the third party intermediary furnished data of poor quality and would place the entity on probation for the subsequent MIPS performance period, with the opportunity to correct its deficiencies during that year of probation.

    Further, we proposed that if the third party intermediary does not reduce their data error rate below 3 percent for the subsequent performance period, the third party intermediary would continue to be on probation and have their listing on the CMS Web site continue to note the poor quality of the data they are submitting for MIPS for one additional performance year. After 2 years on probation, the third party intermediary would be disqualified for the subsequent performance year. Data errors affecting in excess of 5 percent of the MIPS eligible clinicians or groups submitted by the third party intermediary may lead to the disqualification of the third party intermediary from participation for the following performance period. In placing the third party intermediary on probation, we would notify the third party intermediary of the identified issues at the time of discovery of such issues.

    Finally, we proposed if the third party intermediary does not submit an acceptable corrective action plan within 14 days of notification of the deficiencies and correct the deficiencies within 30 days or before the submission deadline—whichever is sooner, we may disqualify the third party intermediary from participating in MIPS for the current performance period and/or the following performance period, as applicable. We requested comments on these proposals.
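
    For illustration only, the following Python sketch restates the proposed error-rate thresholds and corrective action plan timeline described above. The status labels, the treatment of an error rate of exactly 5 percent, and the date handling are assumptions of this sketch rather than CMS terminology or operational rules.

        # Illustrative only; boundary handling at exactly 5 percent is an assumption.
        from datetime import date, timedelta

        def error_rate_status(clinicians_with_errors, clinicians_submitted):
            rate = clinicians_with_errors / clinicians_submitted
            if rate > 0.05:
                return "may be disqualified for the following performance period"
            if rate > 0.03:
                return "probation for the subsequent performance period (poor data quality noted)"
            return "no action under the proposed thresholds"

        def corrective_action_deadlines(notification, submission_deadline):
            plan_due = notification + timedelta(days=14)      # acceptable plan within 14 days
            fix_due = min(notification + timedelta(days=30),  # corrections within 30 days
                          submission_deadline)                # or before the submission deadline,
            return plan_due, fix_due                          # whichever is sooner

        # Example: errors affecting 4 percent of submitted clinicians -> probation;
        # notification on January 10 -> plan due January 24, corrections due February 9
        # (assuming the submission deadline falls later).
        print(error_rate_status(4, 100))
        print(corrective_action_deadlines(date(2018, 1, 10), date(2018, 3, 31)))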

    The following is a summary of the comments we received regarding probation and disqualification of a third party intermediary.

    Comment: A few commenters agreed that CMS should implement a process for placing third party intermediaries on probation and for disqualifying such entities for failure to meet certain standards.

    Response: We appreciate the commenters' support.

    Comment: Some commenters expressed concern about the potential for qualified QCDRs and registries to subsequently fail to fulfill their reporting criteria and advised CMS to finalize language holding MIPS eligible clinicians harmless in the event of a vendor data failure.

    Response: CMS cannot ensure that third party intermediaries will meet the applicable submission criteria in all instances. We can, however, monitor the success of these entities and preclude their participation in the program in future years. Further, we note that MIPS eligible clinicians are ultimately responsible for the data that is submitted by their third party intermediaries and expect that MIPS eligible clinicians are ultimately holding their third party intermediaries accountable for accurate reporting. We refer readers to section II.E.8.c. of this final rule with comment period for more information on the targeted review process.

    Comment: A few commenters supported CMS' proposal to provide an initial probationary period during which a third party intermediary can correct identified issues, and recommended that if a QCDR is found unable to submit accurate data, CMS should assess MIPS eligible clinicians who used that QCDR as “average” for the MIPS quality performance category. The commenters recommended that CMS change the corrective action plan deadline to 21 days, or to any timeline agreed upon by both CMS and the submitter, as depending on the issue and the entity, 14 days may not be reasonable. Another commenter believed that 14 days is too short to properly diagnose a problem and 30 days is too short to solve it. The commenter requested 30 days for diagnosis and 45 days for the implementation of the solution, to prevent hasty coding that may cause future errors.

    Some commenters proposed that at least 30 days be allowed for the corrective action plan and an additional 45 days to deploy the solution; the imminence of a reporting deadline should not limit the time available to deploy a solution; such haste could create additional problems for clinicians and CMS. The commenters also recommended that CMS have provisions in place to use updated data submitted after the reporting deadline. The commenters stated that 14 days could be much too little time to properly diagnose a problem and propose and test a solution. Similarly, 30 days could be much too little time to deploy a solution that could require patching software and changes in clinician workflows.

    Other commenters stated that the timeframes in this section are unreasonably short and recommended they be extended. They believed that it is unreasonable for CMS to expect that a health IT vendor would be able to verify that a problem exists, identify and troubleshoot the source of the problem, and present a precise solution for correcting the problem within 14 days. The commenters requested that CMS extend this to 30 days, at a minimum. For correcting deficiencies, the commenters believed 30 days may be unreasonably short; depending on the nature of the problem, they believed it can take anywhere from a week to several months to program a software patch, and even longer to correct the problem through a software upgrade. The commenters requested that CMS extend this to 90 days, at a minimum.

    Response: We acknowledge the challenges of a 14-day time period for the correction of errors by qualified registries and QCDRs; however, we believe that the data should be submitted early in the submission window, which would allow for a longer correction timeframe. Additionally, we encourage the qualified entity to run their results through a quality assurance check before submission. The requested time extension would affect CMS' ability to calculate and report final scores to MIPS eligible clinicians and their ability to question the results before any MIPS payment adjustments are made the following year.

    Comment: A few commenters did not believe QCDRs should be placed on probation if they submit data with inaccuracies. The commenters believed this should be consistent across document and measurement criteria.

    Response: We want the MIPS eligible clinicians using QCDRs and qualified registries to be able to have confidence that their data is collected, analyzed, and reported accurately. We provide QCDRs and qualified registries a report of the data issues discovered from each previous participation year so that the entities have an opportunity to correct any identified problems. Accordingly, we believe the best way to ensure we receive accurate data from QCDRs and qualified registries and to protect participating MIPS eligible clinicians is to place entities with high data issue rates on probation or disqualify them from participating in future program years. At the same time, we note that TINs are ultimately responsible for the data that are submitted by their third party intermediaries and expect that TINs are ultimately holding their third party intermediaries accountable for accurate reporting.

    Comment: A few commenters expressed concern with the disqualification process and the resulting financial impact to MIPS eligible clinicians and groups. The commenters stated that the third party intermediaries have limited financial risk and burden if they are disqualified, and that financial burden rests on the MIPS eligible clinicians and groups.

    Response: We note that MIPS eligible clinicians and groups are ultimately responsible for the data that are submitted by their third party intermediaries and expect that MIPS eligible clinicians and groups should ultimately hold their third party intermediaries accountable for accurate reporting. We believe that operational and policy protections that we are putting in place through this final rule with comment period will significantly limit the number of third party intermediaries from being disqualified during the performance period.

    Comment: Some commenters stated that they support CMS' proposal for probation and disqualification of third party intermediaries.

    Response: We appreciate the commenters' support.

    Comment: Other commenters stated that if a QCDR, qualified registry, or EHR vendor is not submitting correct and valid data (after testing, validation and the opportunity to correct), then the QCDR should be placed on a corrective action plan. The commenters added that if after the probationary period the QCDR is still not adequately submitting data, the QCDR should be excluded from future performance periods until such time that it could show through testing that it is able to submit valid data.

    Response: We appreciate the comment. If the QCDR or qualified registry has a large percentage of its participants whose final data is inaccurate and not usable, then the entity may be excluded from future program years.

    Comment: Some commenters suggested that to help resolve potential and on-going issues, CMS should develop a root-cause analysis toolkit that vendors could use to help self-identify issues.

    Response: We agree with this suggestion and will look at the feasibility of doing this for future program years.

    Comment: A few commenters stated that if a vendor is incapable of submitting accurate data, then the MIPS eligible clinicians who used that vendor should be held harmless from any penalties. Another commenter noted the absence of “hold harmless” provisions to ensure MIPS eligible clinicians would not be subject to penalties under MIPS if a third party intermediary were to have any error rate, and particularly if the intermediary were disqualified, or if they pull out of the market at any point during the reporting period. Similar provisions are included as part of CMS' EHR Incentive Program in the form of hardship exceptions. Specifically, CMS grants hardship exceptions when clinicians faced extreme and uncontrollable circumstances in the form of issues with the certification of the EHR product or products such as delays or decertification. The commenters stated CMS must include such provisions in the final rule with comment period.

    Response: We note that MIPS eligible clinicians are ultimately responsible for the data that are submitted by their third party intermediaries and expect that MIPS eligible clinicians and groups should ultimately hold their third party intermediaries accountable for accurate reporting. We will consider cases of vendors leaving the marketplace during the performance period on a case by case basis. We would, however, need proof that the MIPS eligible clinician had an agreement in place with the vendor at the time of their withdrawal from the marketplace.

    Comment: One commenter requested that CMS revisit thresholds in regards to data errors as they believed the strict thresholds for corrective action may be counterproductive.

    Response: We believe it is necessary to give QCDRs and qualified registries fair notice of the expectation for their performance as QCDRs and qualified registries. Data errors affecting in excess of 5 percent of the MIPS eligible clinicians or groups submitted by the third party intermediary may lead to the disqualification of the third party intermediary from participation for the following performance period. We chose a 5 percent data error rate because, from past experience under the PQRS program, we have found that a 5 percent error rate increases the confidence interval for third party intermediary scoring under MIPS. If the third party intermediary's data is incomplete or inaccurate, this can adversely affect the program as a whole, and all MIPS eligible clinicians may suffer from inaccurate or missing data. The QCDR or qualified registry is responsible for ensuring accurate data calculation and submission.

    Comment: Some commenters expressed appreciation of the critical importance of accuracy of submitted data. The commenters believed, however, that the proposed error thresholds are too stringent (for example, data audit discrepancies affecting in excess of 3 percent but less than 5 percent of the MIPS eligible clinicians or groups submitted); and that these thresholds as proposed do not take into account the materiality of the errors, or whether they are concentrated in specific clinicians, which could occur due to interactions between workflows and measure logic. The commenters stated there would also need to be exclusions for data calculation errors that could be attributed to poorly or inadequately specified measures. In addition, the commenters stated that until we have mature, well-vetted and error-free measures, this potential will continue to exist and should not result in probation or suspension.

    Another commenter believed it would be virtually impossible for most QCDRs to meet the 3 percent error rate criteria to avoid the low data quality notation and threatened probation. The commenter recommended that CMS review the proposal for a 3 percent error rate and adopt an error rate that is more feasible for QCDRs to achieve at this early stage in their development. A few commenters stated it will be important not to penalize submitters with errors in calculations in excess of the three to five percent in the proposed rule if the calculation error is due to a different interpretation of an imprecisely-specified measure.

    Response: We established the threshold that data errors affecting in excess of 5 percent of the MIPS eligible clinicians or groups submitted by the third party intermediary may lead to the disqualification of the third party intermediary from participation for the following performance period. We chose a 5 percent data error rate based on past experience under the PQRS program, where we found that a 5 percent error rate increases the confidence interval for third party intermediary scoring under MIPS. In addition, third party intermediaries are considered to have experience with handling and calculating data and are experts in quality reporting. The data they submit not only affects the MIPS eligible clinicians and groups for whom they report but can also affect other MIPS eligible clinicians, as the overall program (MIPS payment incentives vs. MIPS payment adjustments) is budget neutral. Accurate data is therefore imperative for the program as a whole.

    Comment: One commenter recommended that CMS establish a process for notifying MIPS eligible clinicians ahead of terminating or placing an entity on probation. This would provide the MIPS eligible clinician time to research an alternative submission mechanism or vendor.

    Response: We agree with the commenter. We intend to notify MIPS eligible clinicians when a third party intermediary is terminated or placed on probation via the qualified posting.

    Comment: Another commenter requested a 2-year grace period for implementing CMS' proposal for probation and disqualification of third party intermediaries, as QCDRs will need to gain experience with these new performance categories.

    Response: We would like to note that registry reporting has occurred since 2008, and QCDRs have been in use since 2014. We believe this is an adequate time for qualified registries and QCDRs to be able to report data with few or no errors.

    After consideration of the comments regarding probation and disqualification of a third party intermediary, we are finalizing the proposal at § 414.1400(k) to include that if at any time we determine that a third party intermediary (that is, a QCDR, health IT vendor, qualified registry, or CMS-approved survey vendor) has not met all of the applicable criteria for qualification, we may place the third party intermediary on probation for the current performance period and/or the following performance period, as applicable. In addition, we are finalizing that we require a corrective action plan from the third party intermediary to address any deficiencies or issues and prevent them from recurring. We are finalizing that the corrective action plan must be received and accepted by us within 14 days of the CMS notification to the third party intermediary of the deficiencies or probation. Failure to comply with this would lead to disqualification from MIPS for the subsequent performance period. In addition, we are finalizing that probation means, for the applicable performance period, the third party intermediary would not be allowed to miss any meetings or deadlines and would need to submit a corrective action plan for remediation or correction of deficiencies identified that resulted in the probation. Further, we are finalizing that if the third party intermediary has data inaccuracies, including (but not limited to) TIN/NPI mismatches, formatting issues, calculation errors, or data audit discrepancies, affecting in excess of 3 percent (but less than 5 percent) of the total number of MIPS eligible clinicians or groups submitted by the third party intermediary, we would annotate on the CMS qualified posting that the third party intermediary furnished data of poor quality and would place the entity on probation for the subsequent MIPS performance period, with the opportunity to correct its deficiencies during that year of probation. In addition, we are finalizing that if the third party intermediary does not reduce their data error rate below 3 percent for the subsequent performance period, the third party intermediary would continue to be on probation and have their listing on the CMS Web site continue to note the poor quality of the data they are submitting for MIPS for one additional performance year. After 2 years on probation, the third party intermediary would be disqualified for the subsequent performance year. Data errors affecting in excess of 5 percent of the MIPS eligible clinicians or groups submitted by the third party intermediary may lead to the disqualification of the third party intermediary from participation for the following performance period. In placing the third party intermediary on probation, we would notify the third party intermediary of the identified issues at the time of discovery of such issues. Further, we are finalizing that if the third party intermediary does not submit an acceptable corrective action plan within 14 days of notification of the deficiencies and correct the deficiencies within 30 days or before the submission deadline—whichever is sooner, we may disqualify the third party intermediary from participating in MIPS for the current performance period and/or the following performance period, as applicable.

    (f) Auditing of Third Party Intermediaries Submitting MIPS Data

    We proposed at § 414.1400(j) that any third party intermediary (that is, a QCDR, health IT vendor, qualified registry, or CMS-approved survey vendor) must comply with certain auditing criteria as a condition of their qualification or approval to participate in MIPS as a third party intermediary. Specifically, we proposed the entity must make available to us the contact information of each MIPS eligible clinician or group on behalf of whom it submits data. The contact information would include, at a minimum, the MIPS eligible clinician or group's practice phone number, address, and, if available, email. Further, we proposed the entity must retain all data submitted to us for MIPS for a minimum of 10 years. We requested comments on this proposal.

    The following is a summary of the comments we received regarding auditing of third party intermediaries submitting MIPS data.

    Comment: A few commenters noted that CMS proposed that an entity must retain all data submitted to CMS for MIPS for a minimum of 10 years. The commenters stated that they believe this amount of time is excessive and is an invasion of privacy. Another commenter recommended using a lesser time period similar to other health record criteria. Other commenters requested that CMS maintain the current criteria to obtain and keep on file signed documentation for 7 years as is currently required under PQRS.

    Another commenter stated that CMS should only require a QCDR to obtain and keep on file, for 3 years beyond each reporting year for which a user participates via the QCDR, signed documentation that each holder of an NPI whose data is submitted to the QCDR has authorized the QCDR to submit quality measure results, improvement activities (if applicable), advancing care information objective results and numerator and denominator data (if applicable), and/or patient-specific data on Medicare and non-Medicare beneficiaries to CMS for the purpose of MIPS participation.

    Response: We believe that a 10-year record retention period, as proposed, for third party intermediaries is appropriate. We are creating a policy that is intended to align across the various components of the Quality Payment Program and is consistent with the record retention requirement for APMs. This consistency will provide a streamlined transition between the MIPS program and the APM program. We are requiring third party intermediaries to retain copies of the contact information and permission to submit data on behalf of a MIPS eligible clinician or group, and the aggregated data submitted by the third party intermediary, for up to 10 years after the performance year to prepare for verification in the event they are selected for an audit. For the purposes of auditing, we reserve the right to look back 6 years and 3 months. Refer to section II.E.8.e. of this final rule with comment period for information on record retention requirements for MIPS eligible clinicians and groups.
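
    For illustration only, the following Python sketch computes the retention and audit lookback windows described in the response above (retention of records for up to 10 years after the performance year, and an audit lookback of 6 years and 3 months). The anchor dates and the simplified month arithmetic are assumptions of this sketch.

        # Illustrative date arithmetic only; anchor dates are assumptions.
        from datetime import date

        def retention_end(performance_year):
            # Retain contact information, permissions, and submitted data for
            # up to 10 years after the performance year.
            return date(performance_year + 10, 12, 31)

        def audit_lookback_start(audit_request):
            # Lookback of 6 years and 3 months from the audit request date;
            # the day is capped at 28 to keep the date valid in every month.
            year, month = audit_request.year - 6, audit_request.month - 3
            if month < 1:
                year, month = year - 1, month + 12
            return date(year, month, min(audit_request.day, 28))

        print(retention_end(2017))                      # 2027-12-31
        print(audit_lookback_start(date(2024, 2, 15)))  # 2017-11-15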

    Comment: Some commenters requested that CMS provide important clarification that the audits called for in this section are focused on the accuracy of the health IT vendor and their products and not on the MIPS eligible clinician or group. The commenters further requested that any findings related to the audit of the third party intermediary would be focused on the third party intermediary only and would not lead to actions affecting the MIPS eligible clinician.

    Response: We would like to explain that, as a condition of their qualification or approval to participate in MIPS, third party intermediaries are required to comply with certain auditing criteria, which include responding to a request for an audit from us or the federal government. Specifically, an applicant or current third party intermediary must consent to and agree to comply with an audit by us or the federal government of all related documentation and data they have stored or submitted on behalf of any MIPS eligible clinicians or groups. Those who fail to comply with audit requests will be considered for non-qualified status. This clarification is consistent with the same approach in the auditing provision for addressing MIPS eligible clinicians found in section II.E.8.e. of this final rule with comment period. Data inaccuracies on the part of the third party vendor will be considered when the third party intermediary requests to continue participation in the Quality Payment Program in subsequent years (self-nomination). Data inaccuracies discovered during an audit of a third party intermediary that occur due to inaccurate data submitted by the MIPS eligible clinician or group could result in the MIPS eligible clinician or group's data being reviewed as well.

    Comment: Other commenters stated that the third party intermediary should not be held responsible for the accuracy of data provided or stored by MIPS eligible clinicians or groups when the third party intermediary would not be in a position to assess the validity of the data.

    Response: We appreciate the concern regarding third party intermediaries' responsibility for data accuracy and validity. We would like to explain that the primary purpose of auditing third party intermediaries is to ensure that accurate data is submitted and to maintain the integrity of MIPS payment adjustments made in accordance with program determinations and scoring that are based on data submitted by third party intermediaries. Thus, as part of the qualification and approval requirement to comply with auditing criteria, third party intermediaries must ensure that the data they submit to us on behalf of MIPS eligible clinicians and groups is accurate. To meet this requirement, third party intermediaries must have a data validation plan in place, they must execute this plan after they submit data to us, and they must send us the results of their data validation execution report. Please note that we also expect third party intermediaries to notify us if their data validation results include a finding that data submitted by a MIPS eligible clinician or group is invalid. Those third party intermediaries who fail to comply with these data validation requirements, as part of their auditing compliance, will be considered non-qualified or non-approved for future MIPS program years.

    Comment: Some commenters stated that any negative findings from an audit under this section should not impact the MIPS eligible clinician or group and that they should be “held harmless” from any negative MIPS adjustments or other civil monetary penalties (CMPs) under the False Claims Act.

    Response: We understand the concerns regarding the impact audits under this section have on MIPS eligible clinicians and groups. As a general matter, the contractual agreement or other arrangement between a MIPS eligible clinician or group and a third party intermediary is not within our authority to control, and we are not a party to such agreements or arrangements. However, we note that MIPS eligible clinicians and groups may be able to seek recourse against their third party intermediary if significant issues or problems arise. Notwithstanding, MIPS eligible clinicians and groups are ultimately responsible for the data submitted by their third party intermediary on their behalf, and we expect MIPS eligible clinicians and groups to hold their third party intermediary accountable for accurate data submissions. Moreover, we suggest that MIPS eligible clinicians and groups work with their third party intermediary to ensure data is submitted timely and accurately.

    Comment: A few commenters requested that CMS provide data validation of calculated reporting and performance rates when data is submitted by third party intermediaries, including flagging any errors in both format and values.

    Response: We are working on increasing the data checks beyond formatting issues in the submission engine validation tool, which can be used for testing prior to data submission. Additionally, we are looking at incorporating additional data checks in the portal to be used at the time a file is submitted. This is an ongoing process.
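
    For illustration only, the following sketch shows the kinds of format and value checks such a validation step might apply before data are accepted; the field names and checks are hypothetical and do not describe the actual submission engine validation tool.

        # Illustrative sketch only: hypothetical format and value checks for one
        # submitted measure record; field names are assumptions, not the CMS schema.
        import re

        def validate_record(record):
            """Return a list of error messages for one submitted measure record."""
            errors = []
            # Format checks: an NPI is 10 digits and a TIN is 9 digits.
            if not re.fullmatch(r"\d{10}", str(record.get("npi", ""))):
                errors.append("NPI must be 10 digits")
            if not re.fullmatch(r"\d{9}", str(record.get("tin", ""))):
                errors.append("TIN must be 9 digits")
            # Value checks: the numerator cannot exceed the denominator, and the
            # performance rate must fall between 0 and 100 percent.
            numerator = record.get("numerator", 0)
            denominator = record.get("denominator", 0)
            if denominator <= 0 or numerator > denominator:
                errors.append("numerator/denominator out of range")
            rate = record.get("performance_rate")
            if rate is not None and not 0 <= rate <= 100:
                errors.append("performance rate must be between 0 and 100")
            return errors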

    After consideration of the comments regarding auditing of third party intermediaries submitting MIPS data, we are modifying the proposal at § 414.1400(j) to include the proposed policies that any third party intermediary (that is, a QCDR, health IT vendor, qualified registry, or CMS-approved survey vendor) must comply with the following procedures as a condition of their qualification and approval to participate in MIPS as a third party intermediary: (1) The entity must make available to CMS the contact information of each MIPS eligible clinician or group on behalf of whom it submits data. The contact information will include, at a minimum, the MIPS eligible clinician or group's practice phone number, address, and, if available, email; and (2) the entity must retain all data submitted to CMS for MIPS for a minimum of 10 years. In addition, we are adding that for the purposes of auditing, CMS may request any records or data retained for the purposes of MIPS for up to 6 years and 3 months.

    10. Public Reporting on Physician Compare

    This section contains the approach for public reporting on Physician Compare for the MIPS, APM, and other information as required by the MACRA.

    Physician Compare draws its operating authority from section 10331(a)(1) of the Affordable Care Act. As required, by January 1, 2011, we developed a Physician Compare Internet Web site with information on physicians enrolled in the Medicare program under section 1866(j) of the Act, as well as information on other EPs who participate in the PQRS under section 1848 of the Act. More information about Physician Compare can be accessed on the Physician Compare Initiative Web site at https://www.cms.gov/medicare/quality-initiatives-patient-assessment-instruments/physician-compare-initiative/.

    The first phase of Physician Compare was launched on December 30, 2010 (http://www.medicare.gov/physiciancompare). Since the initial launch, Physician Compare has been continually improved and more information has been added. Currently, Web site users can view information about approved Medicare professionals, such as name, Medicare primary and secondary specialties, practice locations, group affiliations, hospital affiliations that link to the hospital's profile on Hospital Compare as available, Medicare Assignment status, education, residency, and American Board of Medical Specialties (ABMS), American Osteopathic Association (AOA), and American Board of Optometry (ABO) board certification information. For group practices, users can view group practice names, specialties, practice locations, Medicare assignment status, and affiliated professionals. In addition, Medicare professionals and group practices that satisfactorily or successfully participated in a CMS quality program have a green check mark on their profile page to indicate their commitment to quality.

    Consistent with section 10331(a)(2) of the Affordable Care Act, Physician Compare also phased in public reporting of information on physician performance that provides comparable information on quality and patient experience measures for reporting periods beginning January 1, 2012. To the extent that scientifically sound measures are developed and are available, Physician Compare is required to include, to the extent practicable, the following types of measures for public reporting, for example: Measures collected under PQRS and an assessment of efficiency, patient health outcomes, and patient experience, as specified. The first set of quality measures was publicly reported on Physician Compare in February 2014. Currently, Physician Compare publicly reports 14 group practice level measures collected through the Web Interface for groups of 25 or more EPs participating in 2014 under the PQRS and for ACOs participating in the Shared Savings Program or Pioneer ACO program, and six individual level measures collected through claims for individual EPs participating in 2014 under the PQRS. A complete history of public reporting on Physician Compare is detailed in the CY 2016 PFS final rule (80 FR 71117 through 71122).

    As finalized in the CY 2015 and CY 2016 PFS final rules (79 FR 67547 and 80 FR 70885) Physician Compare will expand public reporting over the next several years. This expansion includes publicly reporting both individual EP (now referred to as clinician) and group practice level QCDR measures starting with 2015 individual clinician measures on Physician Compare in late 2016, and expanding public reporting of group practice QCDR measures in late 2017 (80 FR 71125).

    Section 1848(q)(9)(A) and (D) of the Act facilitates the continuation of the phased approach to public reporting by requiring the Secretary to make available on the Physician Compare Web site, in an easily understandable format, individual MIPS eligible clinician and group performance information, including:

    • The MIPS eligible clinician's final score;

    • The MIPS eligible clinician's performance under each MIPS performance category (quality, cost, improvement activities, and advancing care information);

    • Names of eligible clinicians in Advanced APMs and, to the extent feasible, the names of such Advanced APMs and the performance of such models; and

    • Aggregate information on the MIPS, posted periodically, including the range of final scores for all MIPS eligible clinicians and the range of the performance of all MIPS eligible clinicians for each performance category.

    The proposals related to each of these requirements are addressed below.

    Section 1848(q)(9)(B) of the Act also requires that this information indicate, where appropriate, that publicized information may not be representative of the eligible clinician's entire patient population, the variety of services furnished by the eligible clinician, or the health conditions of individuals treated. The information mandated for Physician Compare under section 1848(q)(9) of the Act will generally be publicly reported consistent with section 10331(a)(2) and 10331(b) of the Affordable Care Act, and like all measure data included on Physician Compare, will be comparable. In addition, section 10331(b) of the Affordable Care Act requires that we include, to the extent practicable, processes to ensure that data made public are statistically valid, reliable, and accurate, including risk adjustment mechanisms used by the Secretary. In addition to the public reporting standards identified in the Affordable Care Act—statistically valid and reliable data that are accurate and comparable—we have established a policy that, as determined through consumer testing, the data we disclose generally should resonate with and be accurately interpreted by consumers to be included on Physician Compare profile pages. Together, we refer to these conditions as the Physician Compare public reporting standards (80 FR 71118 through 71120). Section 10331(d) of the Affordable Care Act also requires us to consider input from multi-stakeholder groups, consistent with sections 1890(b)(7) and 1890A of the Act. We also continue to receive general input from stakeholders on Physician Compare through a variety of means, including rulemaking and different forms of stakeholder outreach (for example, Town Hall meetings, Open Door Forums, webinars, education and outreach, Technical Expert Panels, etc.).

    In addition, section 1848(q)(9)(C) of the Act requires the Secretary to provide an opportunity for MIPS eligible clinicians to review the information that will be publicly reported prior to such information being made public. This is generally consistent with section 10331(a)(2) of the Affordable Care Act, under which we have established a 30-day preview period for all measurement performance data that allows physicians and other eligible clinicians to view their data as it will appear on the Web site in advance of publication on Physician Compare (80 FR 71120). Section 1848(q)(9)(C) of the Act also requires that MIPS eligible clinicians be able to submit corrections for the information to be made public. We proposed that this extension of the current Physician Compare 30-day preview period will be implemented starting with data from the 2017 MIPS performance period. We proposed a 30-day preview period in advance of the publication of data on Physician Compare (81 FR 28290). We proposed to coordinate efforts between Physician Compare and the four performance categories of MIPS in terms of data review and any relevant data resubmission or correction. All data available for public reporting—measure rates, scores, and attestations—would be available for review and correction during the targeted review process (81 FR 28278). The process would begin at least 30 days in advance of the publication of new data. Data under review will not be publicly reported until the review is complete. All corrected measure rates, scores, and attestations submitted would be available for public reporting. The technical details of the process would be communicated directly to affected MIPS eligible clinicians and groups and detailed outside of rulemaking.

    As with the current process, the details would be made public on the Physician Compare Initiative page on cms.gov and communicated through Physician Compare and other CMS listservs.

    The following is a summary of the comments we received regarding our proposal to implement a 30-day preview period in advance of the publication of data on Physician Compare.

    Comment: Some commenters requested that CMS extend the preview period from 30 days to 45, 60, or 90 days. Some commenters noted 30 days was too short, and others more specifically indicated more time was needed to fully review their data.

    Response: Finalizing a 30-day preview period for MIPS eligible clinicians is consistent with the preview period we have adopted for Physician Compare for other types of data (80 FR 71120), which has proven sufficient for full review of the data currently publicly reported. We will revisit the preview period duration to assess whether it is providing adequate time for review and data resubmission, when necessary, once the Quality Payment Program begins to receive a higher volume of data on a more frequent basis; any change to the preview period would be made through separate notice-and-comment rulemaking.

    Comment: Commenters requested that data being contested not be published on Physician Compare.

    Response: We will coordinate efforts between Physician Compare and the four performance categories of MIPS in terms of targeted review and any relevant data resubmission or correction. All data available for public reporting—measure rates, scores, and attestations—will be available for review and correction during the targeted review process (see section II.E.8.c. of this final rule with comment period). The process will begin at least 30 days in advance of the publication of new data. Data under review will not be publicly reported until the review is complete. As proposed, all corrected measure rates, scores, and attestations submitted will be available for public reporting. The technical details of the process will be communicated directly to affected MIPS eligible clinicians and groups and detailed outside of rulemaking.

    After consideration of the comments, we are finalizing our policy as proposed. Consistent with current practice (80 FR 71120), we are adopting a 30-day preview period in advance of the publication of data on Physician Compare.

    In addition, section 1848(q)(9)(D) of the Act requires that aggregate information on the MIPS be periodically posted on the Physician Compare Web site, including the range of final scores for all MIPS eligible clinicians and the range of performance for all MIPS eligible clinicians for each performance category.

    Lastly, section 104(e) of the MACRA requires the Secretary to make publicly available, on an annual basis (beginning with 2015), in an easily understandable format, information for physicians and other eligible clinicians on items and services furnished to Medicare beneficiaries, and to include, at a minimum:

    • Information on the number of services furnished under Part B, which may include information on the most frequent services furnished or groupings of services;

    • Information on submitted charges and payments for Part B services; and

    • A unique identifier for the physician or other eligible clinician that is available to the public, such as an NPI.

    The information would further be required to be made searchable by at least specialty or type of physician or other eligible clinician; characteristics of the services furnished (such as, volume or groupings of services); and the location of the physician or other eligible clinician.

    Therefore, at § 414.1395(a) we proposed public reporting of an eligible clinician's MIPS data; that is, for each program year, we would post on a public Web site, in an easily understandable format, information regarding the performance of MIPS eligible clinicians or groups under the MIPS. This proposal and related public comments are addressed in detail below.

    Furthermore, in accordance with section 104(e) of the MACRA, we finalized a policy in the CY 2016 PFS final rule (80 FR 71130) to add utilization data to the Physician Compare downloadable database. Utilization data is currently available at http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/Physician-and-Other-Supplier.html. This information will be integrated on the Physician Compare Web site via the downloadable database using the most current data starting with the 2016 data, targeted for initial release in late 2017 (80 FR 71130). Not all available data will be included. The specific HCPCS codes included will be determined based on analysis of the available data, focusing on the most used codes. Additional details about the specific HCPCS codes that will be included in the downloadable database will be provided to stakeholders in advance of data publication. And, all data available for public reporting—on the consumer-facing Web site pages or in the downloadable database—will be available for review during the 30-day preview period.

    We believe section 10331 of the Affordable Care Act supports our overarching goals of the MACRA by providing consumers with quality information that will help them make informed decisions about their health care, while encouraging clinicians to improve the quality of care they provide to their patients. In accordance with section 10331 of the Affordable Care Act, section 1848(q)(9) of the Act, and section 104(e) of the MACRA, we plan to continue to publicly report performance information on Physician Compare. As a result, we proposed inclusion of the following information on Physician Compare (81 FR 28291 through 28293).

    a. Final Score, Performance Categories, and Aggregate Information

    As noted, section 1848(q)(9)(A) and (D) of the Act requires that we publicly report on Physician Compare the final score for each MIPS eligible clinician, performance of each MIPS eligible clinician for each performance category, and periodically post aggregate information on the MIPS, including the range of final scores for all MIPS eligible clinicians and the range of performance of all the MIPS eligible clinicians for each performance category. We proposed that these data would be added to Physician Compare for each MIPS eligible clinician or group, either on the profile pages or in the downloadable database, as technically feasible. Statistical testing and consumer testing, as well as consultation of the Physician Compare Technical Expert Panel (TEP), would determine how and where these data are reported on Physician Compare. We requested comments on these proposals.

    The following is a summary of the comments we received regarding our proposal to publicly report on Physician Compare the final score for each MIPS eligible clinician, performance of each MIPS eligible clinician for each performance category, and periodically post aggregate information on the MIPS, including the range of final scores for all MIPS eligible clinicians and the range of performance of all the MIPS eligible clinicians for each performance category.

    Comment: Commenters suggested that CMS limit initial public reporting on MIPS clinicians to their final score and performance category participation and not publicly report any of the specific measures within any of the performance categories at this time. Some commenters, however, expressed concern with public posting of the final score for clinicians because they believe it does not fully represent quality and may be misleading regarding the quality of care provided. There was also concern it may lead to comparisons across different specialties. Some of these and other commenters encouraged CMS to report the specific measures within performance categories instead of the final score. Another commenter opposed publishing the final score for groups with fewer than ten eligible clinicians. Other commenters recommended that CMS delay publishing final scores and performance by category until it has been further tested to ensure it is fully understood by consumers and truly represents quality care, and to ensure clinicians have time to learn from and improve on their early performance.

    Some commenters believe all category scores should be visible on Physician Compare rather than just the final score, noting that the final score oversimplifies performance without taking into consideration things like higher costs. Another commenter recommended that, in the first few years of MIPS, data be shared only with clinicians and that, after this period, CMS consider all MIPS data for public reporting.

    Other commenters suggested that CMS implement precautions before releasing certain information (for example, quality, cost, and utilization data) on Physician Compare as individualized data without explanation could be misleading, and instead encouraged CMS to release this information only to professional societies. Another commenter encouraged CMS to include contextual information to clarify which eligible clinicians could and could not submit data in the first 2 years of MIPS so lack of reporting is not misinterpreted by consumers.

    Additional commenters encouraged CMS to obtain ample feedback from patients and clinicians prior to posting information to Physician Compare to ensure that public reporting standards are upheld, including the requirement that all data resonate with and be accurately interpreted by consumers. Another commenter stated it is important for CMS to determine the accuracy of the data posted on Physician Compare.

    Response: Data for the final score and performance categories will be added to Physician Compare for each MIPS eligible clinician or group, either on the profile pages or in the downloadable database. Statistical testing and consumer testing, as well as consultation of the Physician Compare TEP, will determine how, where, and when these data are best reported on Physician Compare. Publicly reporting MIPS data continues the ongoing phased approach to public reporting we have been engaging in since the release of the 2012 PQRS data, allowing us to continue this public reporting process and therefore continue to provide helpful information valued by consumers in their health care decision-making process.

    The statistical and consumer testing we conduct ensures the data are accurate, represent quality of care, and are well understood and correctly interpreted by consumers. The nature of how clinicians and groups are searched on Physician Compare facilitates comparison within specialty, not across. And, language is currently available on the site to explain that lack of data does not mean lack of quality care, and this concept has been well understood in previous consumer testing. Previous testing has also shown that consumers not only accurately interpret aggregate scoring, such as composite scores and star ratings, but need it to best understand what are often complex data. These aggregations are not oversimplifications, but beneficial tools for the average consumer to use to best interpret the data. And, although we appreciate the request to have more time to learn from and improve on the data collected, as a continuation of the existing public reporting plan, we believe clinicians have had the opportunity to benefit from previous years of data submission as public reporting was slowly phased in under the PQRS and the data under MIPS are well timed for public reporting.

    Comment: Commenters believe CMS should include MIPS information in the downloadable database as they supported making public all statistically valid and reliable data, but appreciated the importance of not overwhelming consumers with too much information on profile pages. Some commenters stated that they would like all information added to the profile pages, including basic demographic and descriptive information, to be proposed for public comment along with results of statistical and consumer testing for measure data.

    Response: Again, as noted, typically data considered for public reporting on public profile pages must meet all public reporting criteria. Summary reports of TEP meetings are shared publicly on the Physician Compare Initiative Web site on CMS.gov. This documentation provides an overview of the statistical and consumer testing conducted as part of the measure review process for Physician Compare. To fulfill the purpose of the Web site and ensure consumers have the information they need to make informed health care decisions it is important to continue to include quality information on the profile pages in addition to making data available in the downloadable database, as appropriate.

    Comment: One commenter objected to the public reporting of zeroes on the Physician Compare Web site, indicating this could misrepresent physicians who choose not to share data.

    Response: We will take this into consideration as we analyze data for public reporting on Physician Compare. However, it is important to note that if a measure is not submitted, there is no performance rate publicly reported. If a measure is reported and the performance rate is zero, this is available for public reporting.

    Comment: One commenter expressed concern about public reporting generally, noting that without virtual groups by specialty, the consequences for many specialties will be inaccurate scoring because they will be unable to report correct measures.

    Response: Only those groups and eligible clinicians with measure data will be scored and have measure data included on Physician Compare. The data reported will be at the clinician and group level respectively. The absence of Virtual Groups will not impact the data consumers see for clinicians and groups. As Virtual Groups are implemented we will take this feedback into consideration for public reporting on Physician Compare.

    Comment: One commenter recommended that CMS thoroughly explain Physician Compare data to the consumers. The commenter agreed that some of the performance categories were difficult to understand for both consumers and clinicians.

    Response: As noted, all data included on the Physician Compare profile pages is tested with consumers to ensure that the information is accurately interpreted and meaningful to consumers. In addition to consumer testing, we are also engaging in increasing consumer outreach around Physician Compare and MACRA data, specifically, to ensure this information is clear and useful to consumers. This process will be ongoing.

    After consideration of the comments and for the reasons we explained previously, we are finalizing our proposal to report on Physician Compare the final score for each MIPS eligible clinician, performance of each MIPS eligible clinician for each performance category, and to periodically post aggregate information of such data. Accordingly, we are finalizing § 414.1395(a), which provides for public reporting of an eligible clinician's MIPS data; for each program year, we will post on a public Web site, in an easily understandable format, information regarding the performance of MIPS eligible clinicians or groups under the MIPS. As we discussed in this final rule with comment period, such data will be posted on Physician Compare, as required by the MACRA; however, we will use statistical and consumer testing for purposes of determining how and where such data will be reported on Physician Compare. A detailed discussion of comments for each performance category of MIPS data is included below.

    In addition, we solicited comment on the advisability and technical feasibility of including data voluntarily reported by eligible clinicians and groups that are not subject to MIPS payment adjustments, such as those practicing through RHCs, FQHCs, etc., on Physician Compare, which would be addressed through separate notice-and-comment rulemaking.

    The following is a summary of the comments we received regarding our solicitation of comments for including data voluntarily reported by eligible clinicians and groups that are not subject to MIPS payment adjustments.

    Comment: One commenter supported giving FQHCs who voluntarily submit data under MIPS, appropriately adjusted for patients' social determinants of health, the option to have the data published on Physician Compare. Another commenter also generally supported allowing any eligible clinician or group that voluntarily reported data to have the data publicly reported on Physician Compare. One commenter did caution against publicly reporting RHC data noting concern around the assumptions that could be drawn from the data.

    Response: We may consider these suggestions in future notice-and-comment rulemaking.

    b. Quality

    As detailed in the proposed rule, consistent with the current policy that makes all current PQRS measures available for public reporting, we proposed to make all measures under the MIPS quality performance category (81 FR 28184) available for public reporting on Physician Compare (81 FR 28291). This would include all available measures reported via all available submission methods, and applies to both MIPS eligible clinicians and groups. Also consistent with current policy, although all measures will be available for public reporting, not all measures will be made available on the consumer-facing Web site profile pages. As explained in the proposed rule (81 FR 28291), providing too much information can overwhelm consumers and lead to poor decision making. Therefore, consistent with section 1848(q)(9)(A)(i)(II) of the Act, we proposed that all measures in the quality performance category that meet the statistical public reporting standards would be included in the downloadable database, as technically feasible. We also proposed that a subset of these measures would be publicly reported on the Web site's profile pages, as technically feasible, based on consumer testing. Statistical testing and consumer testing would determine how and where measures are reported on Physician Compare. In addition, we proposed to apply our existing policy of not publicly reporting first year measures, meaning new measures that have been in use for less than 1 year, regardless of submission methods. After a measure's first year in use, we would evaluate the measure to see if and when the measure is suitable for public reporting (81 FR 28291).

    Currently, there is a minimum sample size requirement of 20 patients for performance data to be included on the Web site. As part of the MIPS and APMs RFI, we asked for comment on moving away from this requirement and moving to a reliability threshold for public reporting. In general, commenters supported a minimum reliability threshold. As a result, we proposed to institute a minimum reliability threshold for publicly reporting these data on Physician Compare (81 FR 28291).

    The reliability of a measure refers to the extent to which the variation in the measure is due to variation in quality of care as opposed to random variation due to sampling. Statistically, reliability depends on the variation in performance for a measure across entities, the random variation in performance for a measure within an entity's panel of attributed beneficiaries, and the number of beneficiaries attributed to the entity. High reliability for a measure suggests that comparisons of relative performance across entities, in this case groups or eligible clinicians, are likely to be stable and consistent, and that the performance of one entity on the quality measure can confidently be distinguished from another. Conducting analysis to determine the reliability of the data collected will allow us to calculate the minimum reliability threshold for those data. Once an appropriate minimum reliability threshold is determined, the public reporting of performance rates for a given measure can be restricted to only those reporters meeting the minimum reliability threshold.
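
    For illustration only, the following sketch shows one common signal-to-noise formulation of measure reliability consistent with the description above; it is an assumption for explanatory purposes and is not necessarily the specific method we will apply.

        # A minimal sketch of one common signal-to-noise formulation of measure
        # reliability; it is not necessarily the specific method CMS applies.
        def measure_reliability(between_entity_variance, within_entity_variance,
                                attributed_beneficiaries):
            """Share of observed variation attributable to true differences
            across entities (groups or eligible clinicians)."""
            # Sampling noise shrinks as the number of attributed beneficiaries
            # grows, so larger panels yield higher reliability for the same
            # underlying variances.
            sampling_noise = within_entity_variance / attributed_beneficiaries
            return between_entity_variance / (between_entity_variance + sampling_noise)

        # For example, a reporter's rate for a given measure might be publicly
        # reported only if measure_reliability(...) meets a chosen minimum threshold.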

    We proposed to also include the total number of patients reported on per measure in the downloadable database to facilitate transparency and more accurate understanding and use of the data. We requested comments on these proposals (81 FR 28291).

    We also solicited comment on the types of data that should be reported on Physician Compare as the MIPS program evolves, specifically in regard to the quality performance category. Any regulatory changes would be made in separate notice-and-comment rulemaking.

    The following is a summary of the comments we received regarding our proposal related to public reporting data from the MIPS quality performance category (81 FR 28291).

    Comment: Several commenters agreed with the proposal that all measures in the quality performance category that meet the public reporting standards should be included in the downloadable database, as technically feasible. Some noted it was beneficial because then QCDRs and qualified registries can use this data to report back to eligible clinicians and groups on how they compare to others. Other commenters noted that the quality performance category measures should only be reported if there were clear measure descriptions that allowed consumers to understand the measures in context and individual measures were reported along with benchmark and score ranges. Some commenters cautioned against publicly reporting measures for specific specialties that may be more difficult for consumers to understand.

    Response: In this final rule, we are finalizing our proposal that all measures in the quality performance category that meet the statistical public reporting standards will be included in the downloadable database, as technically feasible. Per suggestions that commenters made regarding consumer understanding of the quality performance category measure data, only those measures that also test well with consumers will be included on the public facing profile pages. This includes testing plain language measure descriptions that provide the information in an easy-to-understand format and in context, to ensure measures are fully explained and accurately understood. We also plan to include, as feasible, the individual measures in addition to the aggregate information as consumers and clinicians find value in both.

    Comment: Some commenters recommended not publicly reporting on new measures for as many as 3 years so that clinicians and groups had more time to learn from the measures and their performance in early years of reporting.

    Response: We are finalizing our proposal to continue to not publicly report first year measures, meaning new measures that have been in use for less than 1 year, regardless of submission methods. After a measure's first year in use, we will evaluate the measure to see if and when the measure is suitable for public reporting and will take into consideration concerns expressed around publishing newer data during that review process. However, experience with public reporting to date has shown that 1 year is generally sufficient and provides adequate time to assess the measure and to provide clinicians and groups an opportunity to gain experience collecting the measure as well as provide feedback to help them improve on the measure before the data are made public.

    Comment: Commenters supported the use of a minimum reliability threshold for measures, and agreed with the use of Physician Compare public reporting standards, because together these will ensure accurate data that are statistically comparable. One commenter also noted it was important to ensure adequate sample sizes in addition to the reliability threshold.

    Response: We appreciate this support, agree, and will move forward with the reliability threshold in conjunction with our existing public reporting standards and minimum sample size of 20 to ensure confidentiality and sufficient data.

    Comment: One commenter stated that data needed to be properly vetted for accuracy and multiple commenters noted data should be risk-adjusted prior to being posted on Physician Compare.

    Response: We agree data should be vetted prior to being publicly reported on Physician Compare. As stated previously, data from the quality performance category must meet our statistical public reporting standards to be publicly reported on Physician Compare. As explained in section II.E.5.b. of this final rule with comment period, under the IMPACT Act, ASPE has been conducting studies on the issue of risk adjustment for sociodemographic factors on quality measures and cost, as well as other strategies for including SDS evaluation in CMS programs. We will closely examine the ASPE studies when they are available and incorporate findings as feasible and appropriate through future rulemaking.

    After consideration of the comments and for the reasons we discussed in this final rule with comment period, we are finalizing our policies as proposed.

    c. Cost

    As detailed in the proposed rule, we proposed, consistent with section 1848(q)(9)(A)(i)(II) of the Act, to make all measures under the MIPS cost performance category (see 81 FR 28196) available for public reporting on Physician Compare. This includes all available measures reported via all available submission methods, and applies to both MIPS eligible clinicians and groups.

    We have found that cost data do not resonate with consumers and can instead lead to significant misinterpretation and misunderstanding. Therefore, we proposed to include a subset of cost measures that meet the aforementioned public reporting standards on Physician Compare, either on profile pages or in the downloadable database, if technically feasible (81 FR 28291 through 28292). Statistical testing and consumer testing would determine how and where measures are reported on Physician Compare. In addition, we proposed not to publicly report first year measures, meaning new measures that have been in use for less than 1 year, regardless of submission methods. After a measure's first year in use, we would evaluate the measure to see if and when the measure is suitable for public reporting (81 FR 28292). We requested comments on these proposals.

    We also solicited comment on the types of data that should be reported on Physician Compare as the MIPS program evolves, specifically in regard to the cost performance category. Any regulatory changes would be made in separate notice-and-comment rulemaking.

    The following is a summary of the comments we received regarding our proposal related to public reporting of data from the MIPS cost performance category.

    Comment: Some commenters recommended that CMS not publicly report cost measures. One commenter mentioned that cost data requires other information such as specifics about the patient population served and needs to be reported in the context of other quality measures. Similarly, another commenter recommended the cost measures not be publicly reported without being risk adjusted and without more information about what portion of a clinician's total patient population is included in the data. Another commenter recommended that cost information not be displayed until CMS develops better, more applicable measures for cost. Other commenters opposed publication because the data do not resonate with consumers and can be misinterpreted and therefore recommended consumer testing on this category to ensure the necessary context is provided so consumers fully understand the information.

    Another commenter appreciated the proposal to limit public reporting on the Physician Compare Web site to a subset of cost measures that meet the public reporting standards, and to include the total number of patients reported on per measure in the downloadable database so that quality data is accurately interpreted per practice size.

    Response: We appreciate the commenters' concerns. As explained in this final rule with comment period, we are awaiting ASPE's report on risk adjustment and will evaluate that report with the concerns raised here about patient population variation in mind. The cost measures will be reported in conjunction with performance information for all MIPS performance categories, as technically feasible, which will provide additional context for this information. And, as with all data publicly reported, the measures will only be included on public facing pages if consumer testing shows the measures are accurately interpreted and in fact resonate with consumers. Therefore, as technically feasible, and based on our statistical public reporting standards and consumer testing we will publicly report cost measures on Physician Compare. We note that we intend to make cost data publicly available in the downloadable database, regardless of consumer testing performance, for use in research if it meets our other public reporting standards.

    Comment: Some commenters recommended not publicly reporting on new measures for 3 years, noting the data should first be shared only with eligible clinicians and groups before being considered for public reporting so that they could learn from the data in the early years of reporting.

    Response: As explained in our discussion about the quality performance category in this final rule with comment period, our experience with public reporting to date has shown that 1 year is generally sufficient and provides adequate time to assess the measure and to provide clinicians and groups an opportunity to gain experience collecting the measure as well as provide feedback to help them improve on the measure before the data are made public. Therefore, we do not believe 3 years is needed. Accordingly, we are finalizing a policy not to publicly report first year measures, meaning new measures that have been in use for less than 1 year, regardless of submission methods and performance category, as we have generally found 1 year to be sufficient to evaluate new measures. After a measure's first year in use, we will evaluate the measure to see if and when the measure is suitable for public reporting, taking into consideration the concerns raised.

    After consideration of the comments and for the reasons we discussed in this final rule with comment period, we are finalizing our policies as proposed. Based on the policies being finalized in II.E.5.e. of this final rule with comment period we may not have data for public reporting in year 1, the transition year, of MIPS for the cost performance category.

    d. Improvement Activities

    As detailed in the proposed rule, we proposed, consistent with section 1848(q)(9)(A)(i)(II) of the Act, to make all activities under the MIPS improvement activities performance category (81 FR 28209) available for public reporting on Physician Compare (81 FR 28292). This includes all available improvement activities reported via all available submission methods, and applies to both MIPS eligible clinicians and groups.

    We proposed to include a subset of improvement activities data that meet the aforementioned public reporting standards on Physician Compare, either on the profile pages or in the downloadable database, if technically feasible (81 FR 28292). For those eligible clinicians that successfully meet the improvement activities performance category requirements, an indicator of that success may be posted on Physician Compare. The improvement activities performance category is a new field of data for Physician Compare, so concept and consumer testing will be needed to ensure these data are understood by consumers. Therefore, we proposed that statistical testing and consumer testing would determine how and where improvement activities are reported on Physician Compare. In addition, since we do not publicly report first year measures, we proposed to also apply this policy to improvement activities, meaning new improvement activities that have been in use for less than 1 year, regardless of submission methods. After an improvement activity's first year in use, we would evaluate the activity to see if and when the activity is suitable for public reporting (80 FR 71118). We requested comments on these proposals.

    We also solicited comment on the types of data that should be reported on Physician Compare as the MIPS program evolves, specifically in regard to the improvement activities performance category. Any regulatory changes would be made in separate notice-and-comment rulemaking.

    The following is a summary of the comments we received regarding our proposal related to public reporting of data from the MIPS improvement activities performance category.

    Comment: One commenter recommended that CMS gain experience with the improvement activities category before adding that information to Physician Compare. Other commenters recommended that improvement activities not be reported on Physician Compare until we performed consumer and statistical testing to validate the category as accurate and ensured the data were being publicly reported with enough context so that consumers accurately interpreted the data. One commenter recommended only including a subset of the improvement activities data on Physician Compare.

    Response: We do acknowledge that the improvement activities performance category is a new field of data for Physician Compare so, as noted, concept and consumer testing will be needed to ensure these data are understood by consumers and presented in a way that is easy to understand and with appropriate context. Prior to any data being released on Physician Compare, statistical testing and consumer testing will determine how and where improvement activities are publicly reported and if it is most appropriate to publicly report all available data or only a subset as suggested.

    Comment: Some commenters recommended not publicly reporting on new improvement activities for as many as 3 years so there was an opportunity to learn from the measures in the early years of reporting, while other commenters recommended improvement activities data only be shared with eligible clinicians and groups and not be considered for public reporting for at least the first few years if at all.

    Response: We are finalizing a policy not to publicly report first year activities, meaning that new improvement activities that have been in use for less than 1 year, regardless of submission methods, will not be considered for public reporting. After an improvement activity's first year in use, we will evaluate the activity to see if and when the activity is suitable for public reporting. As 1 year has proven sufficient to understand if quality measures are appropriate and accurate and has provided sufficient time for clinicians and groups to learn from these data, we believe the same will be true for improvement activities. However, again, after the first year, we will further review to ensure more time is not needed.

    After consideration of the comments and for the reasons we articulated previously, we are finalizing our policies as proposed.

    e. Advancing Care Information

    Since the beginning of the EHR Incentive Programs in 2011, participant performance data has been publicly available in the form of public use files on the CMS Web site. In the 2015 EHR Incentive Programs final rule, we addressed comments requesting that we not only continue this practice but also include a wider range of information on participation and performance. In that rule, we stated our intent to publish the performance and participation data on Stage 3 objectives and measures of meaningful use in alignment with quality programs which utilize publicly available performance data such as Physician Compare (80 FR 62901). At this time there is only a green check mark on Physician Compare profile pages to indicate that an eligible clinician successfully participated in the current Medicare EHR Incentive Program for eligible clinicians.

    As MIPS will now include advancing care information as one of the four MIPS performance categories, we proposed, consistent with section 1848(q)(9)(A)(i)(II) of the Act, to include more information on an eligible clinician's performance on the objectives and measures of meaningful use on Physician Compare (81 FR 28292). An important consideration is that, to meet the aforementioned public reporting standards, the data added to Physician Compare must resonate with the average Medicare consumer and their caregivers. Consumer testing to date has shown that people with Medicare value the use of certified EHR technology and see EHR use as something that, if used well, can improve the quality of their care. In addition, we believe the inclusion of indicators for clinicians who achieve high performance in key care coordination and patient engagement activities provides significant value for consumers.

    We therefore proposed to include an indicator for any eligible clinician or group who successfully meets the advancing care information performance category, as detailed in the proposed rule (81 FR 28215), as technically feasible on Physician Compare (81 FR 28292). Also, as technically feasible, we proposed to include additional indicators (81 FR 28292), including but not limited to the indicators specified in section II.E.5.g. of this final rule with comment period, such as identifying whether the eligible clinician or group scores high performance in patient access, care coordination and patient engagement, or health information exchange, as further specified in the proposed rule (81 FR 28215). We also proposed that any advancing care information objectives or measures would need to meet the public reporting standards applicable to data posted on Physician Compare, either on the profile pages or in the downloadable database. This would include all available objectives or measures reported via all available submission methods, and would apply to both MIPS eligible clinicians and groups. Statistical testing and consumer testing would determine how and where objectives and measures are reported on Physician Compare. In addition, we proposed to apply our policy of not publicly reporting first year measures (80 FR 71118), meaning new measures that have been in use for reporting for less than 1 year, regardless of submission methods. After a measure's first year in use, we would evaluate the measure to see if and when the measure is suitable for public reporting (81 FR 28292). We requested comment on these proposals.

    We also solicited comment on potentially including an indicator to show low performance in the advancing care information performance category, as well as the types of data that should be reported on Physician Compare as the MIPS program evolves, specifically in regard to the advancing care information performance category. Additionally, we would need to perform consumer testing and evaluate the feasibility of potentially including an indicator to show low performance in the advancing care information performance category to ensure this is understood by consumers. Any regulatory changes would be made in separate notice-and-comment rulemaking.

    The following is a summary of the comments we received regarding our proposal related to public reporting of data from the MIPS advancing care information performance category.

    Comment: A commenter recommended that CMS designate physician performance in the advancing care information category with a green check mark as it has done for the EHR Incentive Program, while some commenters recommended against publicly reporting an indicator for this performance category. One commenter suggested limiting information publicly reported on this category to an indicator showing use of certified EHR technology, generally.

    Response: As technically feasible, and based on consumer testing, we will include indicators for the advancing care information performance category on Physician Compare as this is an extension of our existing public reporting related to EHR Incentive Program participation and this information is deemed valuable by consumers and their caregivers. We will use the statistical and consumer testing methods we have adopted for Physician Compare to determine the final presentation and timing of data reported on the Web site.

    Comment: Some commenters recommended not showing low performance in the advancing care information category, which one commenter stated would be confusing to consumers without adequate context. Other commenters recommended not adding an indicator of high performance until the performance score is more refined. Some commenters disagreed with CMS' proposal to include additional indicators, including, but not limited to, identifying whether the eligible clinician or group scores high performance in patient access, care coordination and patient engagement, or health information exchange. Other commenters noted that continued indication of performance category success is acceptable, but publicly reporting individual metrics within the advancing care information performance category is not. Additional commenters raised concerns about publicly reporting the advancing care information performance category because performance in this category is not solely under the control of the eligible clinician, especially for hospital-based clinicians.

    Response: Viewing this as a continuation of our current public reporting, and based on consumer testing, we will include indicators for the advancing care information performance category, as technically feasible. Part of testing is ensuring that the appropriate context is provided for consumers to understand not only all the data points or indicators included, but also the factors that impact performance. This means we will ensure that consumers fully understand individual metrics versus a simple mention of participation success prior to including individual metrics. And, we will evaluate understanding of attribution to ensure certain types of clinicians, specifically hospital-based clinicians, are not unfairly measured. All of these considerations, and the additional concerns raised, will be taken into account and statistical and consumer testing will be done to determine the final presentation and timing of data reported on the Web site.

    Comment: Some commenters recommended not publicly reporting on new measures for as many as 3 years, and first only sharing this information with the eligible clinicians and groups until they gain experience with the measures and learn from the measures in the early years of public reporting.

    Response: As previously noted, under existing programs 1 year has proven sufficient for evaluating the measure for public reporting, so we do not believe using a longer time frame of 3 years is necessary. Accordingly, we are finalizing a decision not to publicly report first year measures or indicators, meaning new measures or indicators that have been in use for reporting for less than 1 year, regardless of submission methods. After a measure or indicator's first year in use, we will evaluate the measure or indicator to see if and when the measure or indicator is suitable for public reporting.

    Comment: One commenter requested that CMS indicate a disclaimer on the clinician's profile if they were exempt from participating in the advancing care information performance category.

    Response: We will evaluate the need for including disclaimers based on the final data available for public reporting and consumer testing.

    After consideration of the comments and for the reasons we discussed in this final rule with comment period, we are finalizing our policies as proposed.

    f. Utilization Data

    We previously finalized a policy to include utilization data in the Physician Compare downloadable database in late 2017 using the most currently available data (80 FR 71130) to meet section 104(e) of the MACRA. As there are thousands of Healthcare Common Procedure Coding System (HCPCS) codes in use, not all available data will be included. The specific HCPCS codes included will be determined based on analysis of the available data, focusing on the most used codes. The goal will be to include counts that can facilitate a greater understanding and more in-depth analysis of the other measure and performance data being made available. We proposed to continue to include utilization data in the Physician Compare downloadable database (81 FR 28292). We requested comment on this proposal.

    The following is a summary of the comments we received regarding our proposal to continue to include utilization data in the Physician Compare downloadable database.

    Comment: Some commenters supported including utilization data in the downloadable database, though one commenter suggested CMS make the specific HCPCS codes included available for public comment. One commenter recommended that CMS implement precautions such as including only aggregated data in the downloadable database or making this information available only to professional societies, citing concern that without explanation these data could be misleading. Other commenters recommended CMS only publicly report data suitable for an eligible clinician's profile page, noting that if the data can be misinterpreted by consumers, they may be misused by other stakeholders. Another commenter recommended that CMS provide a disclaimer regarding the limits of utilization data.

    Response: To satisfy section 104(e) of the MACRA, we implemented a policy to begin to include utilization data in the Physician Compare downloadable database in late 2017 using the most currently available data, and previously finalized that the specific codes to be included would be determined via data analysis and reported at the eligible clinician level (80 FR 71130). We proposed to continue this policy of reporting utilization data. Given that section 104 of the MACRA requires the utilization data to be searchable by specialty, characteristics of services, and location of eligible clinician, we believe it is necessary to report the data un-aggregated. Aggregated data are available at https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/Physician-and-Other-Supplier2014.html. Given that the audience of the downloadable database is predominantly the professional community and third-party data users, we believe the data are appropriate for inclusion in the downloadable database. The audience for the public-facing profile pages and the use of the information on those pages is significantly different. We will take under consideration recommendations to add additional context and disclaimers around the use and limits of the utilization data to the downloadable database data dictionary.

    After consideration of the comments and for the reasons we articulated, we are finalizing the policy as proposed.

    g. APM Data

    As discussed above, section 1848(q)(9)(A)(ii) of the Act requires us to publicly report names of eligible clinicians in Advanced APMs and, to the extent feasible, the names and performance of Advanced APMs. We see this as an opportunity to continue and build on reporting we are now doing of ACO data on Physician Compare. At this time, if a clinician or group submitted quality data as part of an ACO, there is an indicator on the clinician's or group's profile page indicating this. In this way, it is known which clinicians and groups took part in an ACO. Also, currently, all ACOs have a dedicated page on the Web site to showcase their data. If technically feasible, we proposed to use this model as a guide as we add APM data to Physician Compare. We proposed to indicate on eligible clinician and group profile pages when the eligible clinician or group is participating in an APM (81 FR 28293). We also proposed to link eligible clinicians and groups to their APM's data, as relevant and possible, through Physician Compare. Data posting would be considered for both Advanced APMs and APMs that are not considered Advanced APMs.

    At the outset, APMs will be very new concepts for consumers. Testing shows that at this time, ACOs are not a familiar concept to the average Medicare consumer. It is very easy for consumers to misunderstand an ACO as just a type of group. We expect at least the same lack of familiarity when introducing the broader concept of APM, of which ACOs comprise only one type. In these early years, indicating who participated in APMs and testing language to accurately explain that to consumers provides useful and valuable information as we continue to evolve Physician Compare. As we come to understand how to best explain this concept to consumers, we can continue to assess how to most fully integrate these data on the Web site. We requested comment on these proposals.

    The following is a summary of the comments we received regarding our proposal to publicly report names of eligible clinicians in Advanced APMs and, to the extent feasible, the names and performance of Advanced APMs.

    Comment: Commenters supported CMS' proposal to use the current approach for reporting ACO involvement as a model for reporting APM involvement, and one commenter supported publicly reporting APM data at the individual eligible clinician level. Another commenter noted the importance of gradually integrating the APM data onto Physician Compare, agreeing that this is a new concept for consumers that will need to be fully explained. Some commenters did express concern that APM information may be difficult for consumers to understand, and one commenter suggested CMS provide additional contextual information, including how the APM is structured and how APM structure influences comparability.

    Response: We agree using the ACO reporting model is a beneficial approach, and we support the gradual integration of the APM data onto Physician Compare as informed by consumer testing. This will ensure the information is presented in a way that is accurately interpreted and most beneficial to consumers. We will take recommendations to add additional contextual information into consideration as we work to include this information on the Web site.

    Comment: One commenter asked for clarification as to how CMS will prevent the display of APM data on Physician Compare from giving APM participants an advantage over MIPS participants.

    Response: We do not believe that one type of data provides an advantage over the other based on consumer understanding of the information currently available. Testing shows that consumers do prefer data at the individual eligible clinician level over data aggregated to the group or ACO level, but they find value in all data presented. We will keep this concern in mind as we continue to test APMs with consumers, however.

    After consideration of the comments and for the reasons we set forth, we are finalizing this policy as proposed.

    h. Miscellaneous Comments

    Some of the comments received did not specifically relate to the public reporting proposals in the proposed rule. The following is a summary of these miscellaneous comments.

    Comment: A commenter supported including the number of patients reported per measure in the downloadable database. Another commenter stated that if QCDRs are going to take on a more important role in the Quality Payment Program, CMS should set better standards with regard to the public reporting of QCDR data and the issue of non-MIPS quality measures. One commenter recommended that Physician Compare have quality measures that reflect physicians' specific contributions to patient care and outcomes and that emphasize the team-based approach that certain specialties, such as palliative care, take. One commenter recommended publicly reporting performance information only if eligible clinicians can have an assurance that their reported data are normalized and comparable; this commenter opposed publicly reporting performance information otherwise. One commenter recommended providing educational tools for patients viewing Physician Compare, believing this would enable patients who view eligible clinicians on Physician Compare to recognize when a physician could not participate in a specific performance category.

    One commenter supported the inclusion of ABMS board certification and participation in Maintenance of Certification (MOC) Programs on Physician Compare. Another commenter recommended MOC participation as a measure in future rulemaking as part of quality performance data publicly reported on Physician Compare.

    One commenter believed that payers, providers, large group purchasers, and consumers should be fully empowered to access, use, share, contribute, and benefit from data that improve their health care decision making. Another commenter recommended including Medicare Advantage plan quality information that is comparable to FFS information on Physician Compare. One commenter recommended that Physician Compare provide comparative quality information and comparative pricing data across services, including estimated in-network and out-of-network costs; allow consumers to customize the provider information to highlight what is most relevant to them; and expand provider information to include the topics most relevant to consumers, such as patient-reported outcome measures, online decision-support tools, and assistive and cognitive technology tools.

    One commenter recommended CMS provide a method for comparing IHS, Tribal, and urban Indian providers. The commenter also recommended that CMS remain aware of these providers as distinct when collecting and reporting data.

    One commenter recommended providing a disclaimer that publicly reported final scores are not admissible judicially. Another commenter recommended a disclaimer on Physician Compare that performance information should not be used to determine whether an act of medical negligence has occurred.

    One commenter recommended that CMS provide a disclaimer where insufficient performance data exist on Physician Compare, explaining why certain eligible clinicians do not have data publicly reported. Another commenter recommended that CMS indicate whether an eligible clinician has been excluded from reporting data so consumers do not potentially misinterpret limited performance data on the eligible clinician's profile page.

    Response: We appreciate the points, concerns, and suggestions raised by commenters and, if feasible and appropriate under the statute, we may consider these issues in future rulemaking.

    Comment: One commenter supported allowing QCDRs to publicly report their performance data on their Web site. One commenter asked whether CMS will post all QCDR performance data on Physician Compare or allow QCDRs to post their performance data on their Web site. Another commenter recommended that QCDRs be able to provide a link to an external site that publicly reports information on clinicians associated with that QCDR.

    Response: To note current policies that will be carried forward under MIPS, QCDRs can choose to publicly report their unique measures either on their own Web site, providing a link for Physician Compare to include, or on Physician Compare profile pages. All data that meet public reporting standards are included in the downloadable database, however.

    Comment: One commenter asked CMS to clarify the process for how partial data submission during a performance period is publicly reported on Physician Compare. Another commenter recommended publicly reporting eligible clinicians who report fewer than the required number of quality measures along with their reasoning for doing so. This commenter believed this would increase transparency.

    Response: To note current policies that will be carried forward under MIPS if feasible and appropriate, each measure submitted is evaluated on a measure-by-measure basis. If the specific measure meets all public reporting standards, it will be publicly reported even if the eligible clinician, for example, is not a satisfactory reporter under PQRS (for example, the clinician did not satisfactorily report 9 measures across 3 domains). As a result, clinicians that report partial data do have data included on Physician Compare.

    Comment: One commenter recommended CMS provide a method for comparing IHS, Tribal, and urban Indian clinicians. The commenter also recommended that CMS remain aware of these clinicians as distinct when collecting and reporting data.

    Response: We appreciate the points, concerns, and suggestions raised by the commenter and, if feasible and appropriate under the statute, we may consider these issues in future rulemaking and will conduct tribal consultation with tribes and tribal officials, as feasible and appropriate.

    Comment: Many commenters supported requiring that publicly reported information be statistically valid, reliable, scientifically based, and/or meaningful to consumers and eligible clinicians, and requested that CMS focus on ensuring the data on Physician Compare are as accurate, reliable, and representative as possible. They also requested adequate disclaimers when there are limitations to the available data or questions about their completeness. Commenters also encouraged CMS to ensure the data included on Physician Compare are clear and useful to consumers. One commenter was concerned with inaccurate data being reported on Physician Compare and recommended publishing MIPS data with an adequate description of the program, including eligibility rules. Another commenter expressed some concern with the accuracy of the information and its usefulness for consumers.

    One commenter recommended a principal focus be on providing reliable and useful data rather than expediency.

    Response: We appreciate your comments and remain dedicated to publicly reporting data that generally meet public reporting standards.

    Comment: One commenter recommended that updates made in PECOS be reflected on Physician Compare within a short time frame, such as 30 days.

    Response: Data are refreshed on Physician Compare bi-weekly. Edits to PECOS do take longer to be reflected on the site as a result of the time it takes for MACs to review and verify information as needed. We are continually working to improve this timeline.

    F. Overview of Incentives for Participation in Advanced Alternative Payment Models

    Section 1833(z) of the Act, as added by section 101(e)(2) of the MACRA, requires that an incentive payment be made to Qualifying APM Participants (QPs) for participation in eligible alternative payment models (referred to as Advanced APMs). Key statutory elements of the incentives for participation in Advanced APMs under the Quality Payment Program addressed in the proposed rule include:

    • Beginning in 2019, if an eligible clinician participates in a certain type of APM (an Advanced APM), that eligible clinician may become a QP. Eligible clinicians who become QPs are excluded from MIPS.

    • For payment years 2019 through 2024, QPs receive a lump sum incentive payment equal to 5 percent of their prior year's payments for Part B covered professional services, and beginning in 2026, QPs receive a higher update under the PFS than non-QPs.

    • For 2019 and 2020, eligible clinicians may become QPs only through participation in Advanced APMs.

    • For 2021 and later, eligible clinicians may become QPs through a combination of participation in Advanced APMs and Other Payer Advanced APMs.

    This section of the rule discusses public comments and finalizes the definitions, requirements, procedures, and thresholds of participation that will govern this program.

    1. Policy Principles

    Several core policy principles are derived from both the MACRA law and the Department's broad vision for better care, smarter spending, and healthier people. These principles drive many of our decisions in developing the overall framework for making APM Incentive Payments to QPs and for approaching interactions between MIPS and APMs discussed in the proposed rule. In addition to increasing the quality and efficiency of care delivered in the Medicare program and across the health system, these principles include the following seven goals:

    • To the greatest extent possible, continue to build a portfolio of APMs that collectively allows participation for a broad range of physicians and other practitioners. We believe finding better ways to deliver care across settings and specialties can lead to improved health outcomes and more efficient health care spending. Doing this requires active CMS engagement with stakeholders, as well as input from those stakeholders to refine ideas in ways that meet statutory and delivery system reform goals.

    • Design the program such that the APM Incentive Payment is attainable by increasing numbers of Advanced APM participants over time, yet remains reserved for those eligible clinicians participating in organizations that are truly engaged in care transformation. We believe the structure of the law is clear in that the APM Incentive Payments are earned through participation in APMs that are designed to be challenging and involve rigorous care improvement activities. In general, we believe eligible clinicians that receive incentives should be those who: take on financial risk for potential losses under an APM; are accountable for performance based on meaningful quality metrics; and use certified EHR technology.

    • Maximize participation in both Advanced APMs and other APMs. Although we want to maintain high standards for eligible clinicians to earn the APM Incentive Payment, we also want to enable and encourage high levels of participation in a broad range of APMs, including those that are not Advanced APMs. We believe participation in any APM offers eligible clinicians and beneficiaries significant benefits.

    • Create policies that allow for flexibility in future innovative Advanced APMs. We do not want to constrain the robust development of new Advanced APMs by framing standards only in terms of today's APMs but rather in ways that allow many avenues for meeting the Advanced APM criteria.

    • Support multi-payer models and participation in innovative models in Medicaid and commercial markets in order to promote high quality and efficient care across the health care market.

    • Minimize burden on organizations and professionals. Between APM participation and MIPS reporting, we hope to coordinate administrative processes, minimize overall reporting burden, and make transitioning between being a QP and being subject to MIPS as seamless as possible.

    • We do not intend to create additional performance assessments or audits beyond those specified under an APM. Rather, we believe the process for determining whether an eligible clinician receives the APM Incentive Payment should focus on the relative degree of participation by eligible clinicians in Advanced APMs, not on their performance within the APM. The Quality Payment Program does not alter how each particular APM measures and rewards success within its design. Rather, it rewards a substantial degree of participation in certain APMs.

    2. Overview of Proposed APM Policies

    The incentives for Advanced APM participation established by the statute include several sets of related requirements that must be met. Three distinct roles play important parts in the program structure: (1) The Advanced Alternative Payment Model (Advanced APM), which is a health care payment and/or delivery model that includes payment arrangements and other design elements as part of a particular approach to care improvement and that by its design satisfies the criteria set forth in section 1833(z) of the Act; (2) the Advanced APM Entity, which is the entity participating in the Advanced APM; and (3) the eligible clinician, who is the individual physician or practitioner, or group of physicians or practitioners, who is a participant of the Advanced APM Entity and may be determined to be a QP.

    In this final rule with comment period, we describe a series of steps that result in the determination of certain eligible clinicians as QPs for a particular year (the payment year). QPs will receive the APM Incentive Payment as specified in section 1833(z) of the Act for each of the years they qualify from 2019 through 2024, and the differential update incentive in section 1848(d)(20) of the Act for each of the years they qualify beginning in 2026. Per section 1833(z)(1)(A) of the Act, the APM Incentive Payment that an eligible clinician receives as a QP for a year between 2019 and 2024 is a lump sum payment equal to 5 percent of the QP's estimated aggregate payments for Medicare Part B covered professional services (services paid under or based on the Medicare PFS) for the prior year. Eligible clinicians who are QPs for a year are also excluded from MIPS for that year. In addition, beginning in 2026, QPs receive a higher Medicare PFS update (the “qualifying APM conversion factor”) than non-QPs. This QP determination is made for one calendar year at a time.

    The steps that will result in a QP determination can be summarized as follows: (1) We determine whether the design of an APM meets three specified criteria for it to be deemed an Advanced APM; (2) an entity (the Advanced APM Entity) with a group of individual eligible clinicians participates in the Advanced APM; (3) we determine whether, during a performance period (the QP Performance Period), the eligible clinicians in the Advanced APM Entity collectively have at least a specified percentage of their aggregate Medicare Part B payments for covered professional services, or patients who received covered professional services, through the Advanced APM; (4) all of the eligible clinicians in the Advanced APM Entity are designated QPs for the payment year associated with that QP Performance Period. Those QPs would receive the 5 percent lump-sum APM Incentive Payments mentioned above for the payment year. This QP determination process would occur each year following the QP Performance Period, with the first payment year being 2019. Figure B illustrates the stages of determinations that result in QP determinations.

    [Figure B—Stages of determinations that result in QP determinations]
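
    To make the mechanics of the steps above easier to follow, the following Python sketch restates them as a simple threshold comparison followed by a percentage calculation. It is illustrative only and is not part of the rule: the function names, the example 25 percent threshold, and the dollar figures are assumptions introduced here for clarity; the finalized QP Payment Amount and Patient Count Thresholds and the formal payment methodology are established elsewhere in this final rule with comment period.

    # Illustrative sketch only; not an official CMS calculation.
    # The threshold value used below is a placeholder, not the finalized QP threshold.

    def threshold_score(advanced_apm_payments, total_part_b_payments):
        """Share of the Advanced APM Entity's aggregate Part B payments for
        covered professional services that flowed through the Advanced APM."""
        if total_part_b_payments == 0:
            return 0.0
        return advanced_apm_payments / total_part_b_payments

    def is_qp(score, qp_payment_amount_threshold):
        """Eligible clinicians in the Advanced APM Entity are QPs for the
        payment year when the entity's Threshold Score meets or exceeds the
        applicable threshold (payment amount method shown; a parallel
        patient count method also exists)."""
        return score >= qp_payment_amount_threshold

    def apm_incentive_payment(prior_year_part_b_payments):
        """Lump sum equal to 5 percent of the QP's estimated aggregate
        prior-year payments for Part B covered professional services
        (payment years 2019 through 2024)."""
        return 0.05 * prior_year_part_b_payments

    # Hypothetical entity with 30 percent of Part B payments through the
    # Advanced APM, assessed against a placeholder 25 percent threshold.
    score = threshold_score(advanced_apm_payments=300_000,
                            total_part_b_payments=1_000_000)
    if is_qp(score, qp_payment_amount_threshold=0.25):
        print(apm_incentive_payment(prior_year_part_b_payments=400_000))  # 20000.0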

    The following is a summary of the comments we received generally regarding the incentives for participation in Advanced APMs.

    Comment: Some commenters expressed support for the policy principles and goals for Advanced APMs. Many commenters expressed a desire for more opportunities to participate in Advanced APMs. Many commenters specifically called for new Advanced APMs that focus on small or rural practices or specialty practices such as surgery, emergency medicine, dentistry, or long-term care. Some commenters suggested focusing on specific beneficiary populations. Other commenters expressed support for transitional pathways to Advanced APM participation, and changes to existing APMs both in order to change them into Advanced APMs and make them more accessible to new participants.

    Some commenters appreciated that the inception of this part of the Quality Payment Program will serve as a catalyst for more Advanced APMs and an acceleration of the movement from volume- to value-based payment. One commenter expressed concern that the process used to create and approve new Advanced APMs is too slow. One commenter expressed concern that because QPs in an Advanced APM Entity can earn the 5 percent APM Incentive Payment without demonstrating improved quality, controlled cost, or both, there will be no change to health care delivery, and that clinicians would not have a strong incentive to change their practice patterns. One commenter recommended CMS evaluate its overall approach and perhaps abandon Advanced APMs.

    Response: We agree with commenters that it is paramount for us to develop and offer more Advanced APM opportunities in the future. In order to increase participation in both APMs and Advanced APMs, we recognize that we must strive to create offerings for clinicians across the entire care continuum and across geographic regions.

    We plan to achieve these goals in the immediate and long-term future by expanding opportunities for clinicians to participate in existing Advanced APMs, changing certain existing APMs to meet the Advanced APM criteria, and developing new Advanced APMs, especially based on recommendations from the PTAC. The PTAC is discussed in section II.F.10. of this final rule with comment period. We note that not all models that are derived from PTAC recommendations must be Advanced APMs.

    The incentives for Advanced APM participation, as specified under section 1833(z) of the Act, do not provide for consideration of eligible clinicians' performance in terms of quality, cost, or other factors in making determinations as to whether eligible clinicians are QPs for a year. However, we believe the performance requirements that are applicable to eligible clinicians under each Advanced APM will incentivize participants to make improvements in care delivery so as to improve quality of care and reduce expenditures. Additionally, the Transforming Clinical Practice Initiative (TCPI) is a $685 million CMS Innovation Center initiative designed to support 140,000 clinicians in sharing, adopting, and further developing comprehensive quality improvement strategies, which are expected to lead to greater improvements in patient health and reductions in health care costs.

    In regard to the request for transitional pathways to Advanced APM participation, section 1848(q)(11) of the Act provides $100 million in funding over 5 years for CMS to provide technical assistance to help clinicians develop the capabilities to be successful in the Quality Payment Program. These technical assistance efforts will target eligible clinicians in individual or small group practices of 15 or fewer eligible clinicians, focusing on those practicing in rural areas, health professional shortage areas (HPSAs), and medically underserved areas (MUAs), as well as practices with low composite scores under the MIPS.

    The planned technical assistance will support small practices by helping them think through what they need to be successful under the Quality Payment Program, such as what quality measures and/or EHR may be appropriate for their practices' needs. The planned technical assistance would also educate clinicians about clinical practice improvement activities and how these activities could fit into their practices' workflow, or help practices evaluate their options for joining an APM. We believe this technical assistance, combined with our continued outreach and education efforts, will provide substantial support to eligible clinicians in their transition into APMs and Advanced APMs.

    3. Terms and Definitions

    The APM track of the Quality Payment Program uses a set of interrelated defined terms. The bases for some core terms are set forth at sections 1833(z)(3) and 1848(q)(1)(C)(iii) of the Act, and we define others in this final rule with comment period.

    We use the statutory text as a foundation to develop definitions for other key terms used in this final rule with comment period. The terms cover three primary topics: (1) The different types of APMs and their participating entities and clinicians; (2) the timing, process and thresholds for determining QPs and Partial Qualifying APM Participants (Partial QPs); and (3) the payment of the 5 percent lump sum incentive (APM Incentive Payment) to QPs.

    We are finalizing definitions for the following terms specific to incentives for participation in Advanced APMs, which are located at § 414.1302 of new subpart O:

    • Affiliated Practitioner.

    • APM Entity.

    • APM Incentive Payment.

    • Attributed beneficiary.

    • Attribution-eligible beneficiary.

    • Alternative Payment Model (APM).

    • Advanced Alternative Payment Model (Advanced APM).

    • Advanced APM Entity.

    • Eligible clinician.

    • Episode payment model.

    • Incentive Payment Base Period.

    • Medicaid APM.

    • Medicaid Medical Home Model.

    • Medical Home Model.

    • Other Payer Advanced APM.

    • Other payer arrangement.

    • Partial Qualifying APM Participant (Partial QP).

    • Partial QP Patient Count Threshold.

    • Partial QP Payment Amount Threshold.

    • Qualifying APM Participant (QP).

    • QP Patient Count Threshold.

    • QP Payment Amount Threshold.

    • QP Performance Period.

    • Threshold Score.

    a. Definitions of APM Entity and Advanced APM Entity

    The MACRA uses the term “Eligible APM” in the heading for section 1833(z) of the Act, in section 1848(q)(9)(A)(ii) of the Act, and indirectly defines it at section 1833(z)(3)(D) of the Act as the APM in which “eligible alternative payment entities” participate. We have decided to use the term “Advanced” in lieu of “Eligible,” for those APMs defined by section 1833(z)(3)(C) of the Act that meet the criteria under section 1833(z)(3)(D) of the Act. Rather than referring indirectly, as is done in section 1833(z)(3)(D)(i) of the Act, to the APM in which an eligible alternative payment entity participates, we believe it is essential to the understanding of this final rule with comment period to be able to identify and finalize requirements directly for an Advanced APM.

    Similarly, we proposed to use the term “Advanced APM Entity” instead of “alternative payment entity” because it highlights the connected but different roles of the Advanced APM (for example, a CMS Innovation Center ACO model meeting specified criteria) and the Advanced APM Entity (for example, a specific ACO participating in that ACO model). We also believed that it was important to the clarity of the proposed rule to define “APM Entity” in addition to “Advanced APM Entity” so that we can easily distinguish between the two under both MIPS and the APM incentives. We proposed that an APM Entity is an entity that participates in an APM or Other Payer APM through a direct agreement with CMS or a non-Medicare other payer, respectively. These APM Entities will be primarily responsible for the cost and quality of care provided to beneficiaries through the APM. The term “eligible alternative payment entity” (which we refer to as an “Advanced APM Entity”) is defined under section 1833(z)(3)(D) of the Act. We proposed that an Advanced APM Entity is an APM Entity that participates in an Advanced APM that, through terms of a direct agreement with CMS or through federal law or regulation, meets the criteria finalized in this rule.

    The following is a summary of the comments we received regarding our proposed definitions of the terms APM Entity and Advanced APM Entity.

    Comment: Commenters noted that a direct CMS agreement is not necessarily the operative legal instrument for entities to participate in APMs. They were concerned that the proposed definition would inadvertently prevent APM Entities from being considered Advanced APM Entities. One commenter stated concern that hospitalists would not be included under this definition of APM Entity and supported a more inclusive definition. One commenter disliked the set of terms related to APMs, such as Advanced APMs, APM Entities, and Other Payer Advanced APMs, and believed they were unclear. Another commenter stated that the definition of APM Entity was too restrictive, and requested that CMS expand it to include any entity that executed a Participation Agreement. One commenter criticized the use of the term Advanced APM and suggested that we use either Qualifying APM or Eligible APM.

    Response: We appreciate the attention to the definitions and agree that the definitions of APM Entity and Advanced APM Entity should not be a barrier to eligible clinicians becoming QPs, but rather descriptors of the entities that are participating in APMs and Advanced APMs, respectively. We believe that the proposed terms clearly distinguish each while showing the relationship between the terms, such as how an APM Entity participates in an APM. We understand that “qualifying” or “eligible” could also have been used in the definitions because these are used in the statute. However, we chose “Advanced APM” because we believe it reflects the element of additional rigor relative to APMs, allowing the term to serve as a meaningful descriptor of a certain type of APM.

    We are modifying our proposed definition of APM Entity to no longer require a direct agreement with CMS in all cases. Instead, we are defining APM Entity to mean an entity that participates in an APM or payment arrangement with CMS or another payer, respectively, through a direct agreement with CMS or the other payer, or through federal or state law or regulation. We are also finalizing the definition of Advanced APM Entity to mean an APM Entity that participates in an Advanced APM or Other Payer Advanced APM with CMS or a non-Medicare other payer, respectively, through a direct agreement with CMS or the payer or through federal or state law or regulation. We note that we determine whether an APM is an Advanced APM or a payment arrangement is an Other Payer Advanced APM consistent with the criteria finalized in this final rule with comment period.

    These changes are important because some APMs define participation through a voluntarily signed agreement whereas other APMs may define participation through rulemaking or based on federal or state statutory requirements. For example, the CJR model defines participant hospitals (the APM Entities) in regulation based on their geographic location in specified Metropolitan Statistical Areas (MSAs). These definitions ensure that entities participating in APMs and Advanced APMs by various binding legal means are included in the definitions of APM Entity and Advanced APM Entity, respectively.

    b. Definitions of Medical Home Model and Medicaid Medical Home Model

    We also proposed to define the terms “Medical Home Model” and “Medicaid Medical Home Model” as subsets of APMs and Other Payer APMs, respectively. The MACRA does not define “medical homes” but sections 1848(q)(5)(C)(i), 1833(z)(2)(B)(iii)(II)(cc)(BB), 1833(z)(2)(C)(iii)(II)(cc)(BB), and 1833(z)(3)(D)(ii)(II) of the Act make medical homes an instrumental piece of the law.

    We note that medical homes would be the APM Entities in an APM, not the APM itself. The requirements in the MACRA and in this final rule with comment period actually relate to the disposition of the APM, not the participating APM Entities. For instance, as described in section II.F.4.b.(6) of this final rule with comment period, section 1115A(c) of the Act relates to the expansion of models (APMs), not the participants (APM Entities) of such models. APM participants are not expanded under section 1115A(c) of the Act. Therefore, we discuss medical homes in terms of the Medical Home Model, which is the concept to which the MACRA and this final rule with comment period actually refer. Although the definitions are identical but for their payer context, we distinguish Medicaid Medical Home Models because there are specific requirements for them under the determination of Other Payer Advanced APMs as described in section II.F.7.b.(3) of this final rule with comment period.

    We proposed that a Medical Home Model must have the following elements:

    • Model participants include primary care practices or multispecialty practices that include primary care physicians and practitioners and offer primary care services.

    • Empanelment of each patient to a primary clinician.

    In addition to these elements, we proposed that a Medical Home Model must have at least four of the following elements:

    • Planned coordination of chronic and preventive care.

    • Patient access and continuity of care.

    • Risk-stratified care management.

    • Coordination of care across the medical neighborhood.

    • Patient and caregiver engagement.

    • Shared decision-making.

    • Payment arrangements in addition to, or substituting for, FFS payments (for example, shared savings or population-based payments).

    The two required elements are consistent with the fundamental characteristics of medical homes in the various incarnations and accreditation standards across the health care market. Therefore, we believe that an APM cannot be a Medical Home Model unless it has a primary care focus with an explicit relationship between patients and their practitioners. To determine that an APM has a primary care focus, we proposed that the Medical Home Model will have to involve specific design elements related to eligible clinicians practicing under one or more of the following Physician Specialty Codes: 01 General Practice; 08 Family Medicine; 11 Internal Medicine; 37 Pediatric Medicine; 38 Geriatric Medicine; 50 Nurse Practitioner; 89 Clinical Nurse Specialist; and 97 Physician Assistant. We solicited comment on whether this proposal for determining that an APM has a primary care focus is sufficiently specified.

    We believe the optional elements should be present in Medical Home Models, but individually, each is less definitive of a characteristic than the two required elements. We also want to adhere to our principle of supporting future flexibility of APM design. Extensive rigid Medical Home Model criteria would not serve the purpose of promoting the development of new and potentially better ways of managing patient care through primary care.

    We solicited comment on these elements and which of the elements should be required as opposed to optional. Our proposed definition of Medicaid Medical Home Model is identical to Medical Home Model, except that it specifically describes a payment arrangement operated by a State under title XIX. It is important to separate the terms because Medicaid Medical Home Models have distinct implications in the Other Payer Advanced APM determination and the QP determination under the All-Payer Combination Option.

    The following is a summary of the comments we received regarding our proposed definitions of the terms Medical Home Model and Medicaid Medical Home Model.

    Comment: Several commenters addressed the terms used to describe Medical Home Models, and supported the proposed definitions. One commenter supported CMS' classification of a medical home as an “entity” rather than as a “model.” One commenter recommended that CMS alter the term “medical home” to “medical home entity” to clarify that it is a TIN or collection of TINs that is an accountable unit within the Medical Home Model. This same commenter also suggested creating two new terms: “Advanced Medicaid Medical Home Model” and “Other Payer Advanced Medical Home Model.” One commenter suggested it makes more sense to name Medical Home Models “Primary-Care Focused Models” and incorporate the term in the proposed required elements.

    Response: We thank commenters for their attention to this definition. We believe that the term “Medical Home Model” best reflects the intent of the statute's use of the term medical homes expanded under section 1115A(c) of the Act as specified in section 1833(z) of the Act. We believe it makes the most sense, in context, to read the statutory references to “medical home” to identify a specific type of APM that potentially could be expanded, rather than to refer to an entity made up of eligible clinicians and other health care providers that would participate in an APM. That is why we proposed to define “Medical Home Model” as the APM and “APM Entity” as the participants in APMs. We use the term APM Entity as a general term to describe all entities that are participants in APMs and, except when it is expedient to implement statutory requirements, we do not believe we should create additional terms to describe subcategories of APM Entities as multiple terms could create confusion. Similarly, we believe that the terms Medical Home Model and Medicaid Medical Home Model provide sufficient clarity for purposes of implementing the statute, and that creating additional definitions may create confusion.

    Comment: Several commenters addressed how we define primary care as part of a Medical Home Model and a Medicaid Medical Home Model. One commenter agreed with our proposal to require a primary care focus as an essential requirement for Medical Home Models and encouraged CMS to additionally require that Medical Home Model participants be primary care medical home practices or multi-specialty practices that offer primary care, with empanelment of each patient to a primary care physician. The same commenter encouraged CMS to include whole-person orientation and quality and safety among the optional elements for a Medical Home Model.

    In addition, the commenter expressed concern with our proposal to include certain eligible clinicians within Medical Home Models, as they are not always primary care practitioners, that is, 50 Nurse Practitioner; 89 Clinical Nurse Specialist; and 97 Physician Assistant. One commenter wanted more information on the licensing description for each category of eligible clinician and more information on the list of physician specialty codes. Another commenter sought clarification on whether the code for Nurse Practitioners includes all Nurse Practitioner codes or whether the APM should specify codes for certain primary care certifications, and another commenter recommended that codes for family nurse practitioners, geriatric nurse practitioners, adult nurse practitioners, and others be included. A few other commenters recommended that CMS add code “16 Obstetrics and Gynecology” to the list of specialties that we would use to determine a primary care focus in a Medical Home Model. One commenter requested that occupational therapists be considered a required component of any Medical Home Model. Another commenter suggested that behavioral health organizations be included in the definition of Medical Home Model. Some commenters requested clarification regarding the definitions of “parent organization” and “empanelment” as they relate to Medical Home Models.

    A few commenters recommended that CMS broaden its definition of a Medical Home Model to include APMs that focus on specialty care. Another commenter suggested that CMS include specialist-focused Medical Home Models as a viable option for qualifying as an Advanced APM regardless of risk, much like it proposed for primary care-focused Medical Home Models. One commenter appreciated that CMS provided for elements such as continuity of care, coordination of chronic and preventive care, and coordination across the medical neighborhood, which will assist multispecialty practices seeking to participate in an Advanced APM, but believed the definition we proposed for a Medical Home Model would largely exclude specialty-focused models. An additional commenter requested that CMS consider adding additional specialties to the approved list of Physician Specialty Codes to whom the patient may be assigned within the Medical Home Model. Another commenter expressed concern that Medicaid Medical Home Models might be prohibited from empaneling patients to any specialists, and one other commenter suggested that we add the attribution of patients to specialists as a requirement in the definition of Medical Home Model. One commenter suggested we address the special needs of children as a requirement in our definition. Another commenter requested information clarifying the relationship between the Medical Home Model definition and certified patient-centered medical homes, and asked whether patient-centered medical home certification is a requirement to be considered a Medical Home Model.

    Response: We appreciate commenters' input and will consider the suggestions for future rulemaking applicable to performance periods after 2017. We believe that the proposed definition is sufficient to identify Medical Home Models that might be in place for the 2017 QP Performance Period. However, we are modifying the proposed definition to emphasize the primary care focus. We note that because a Medical Home Model is a type of APM, having a primary care focus means that there are specific design elements that target eligible clinicians with the specified specialty codes. We are also adding code “16 Obstetrics and Gynecology” to the list of specialty codes that we will use to determine primary care focus because we agree with the commenter that these physicians often coordinate primary care services for women.

    We clarify that the definition of Medical Home Model does not include a requirement for patient-centered medical home certification. A certified patient-centered medical home is a practice-level designation, whereas a Medical Home Model is a type of APM (a payment model) defined in this final rule with comment period.

    We believe that empanelment is a commonly understood term used in existing APMs and primary care practices that does not need to be defined in this rulemaking. We believe that empanelment methodologies are specific to each Medical Home Model, and we do not want to unduly restrict APM design flexibility by prescribing how and to whom empanelment may be done. Although we note that Medical Home Models must have a primary care focus, we do not specify that empanelment in a Medical Home Model must be only to primary care practitioners. Finally, we discuss the meaning of “parent organization” in section II.F.4.b.(4) of this final rule with comment period in the context of the Advanced APM financial risk criterion.

    Comment: One commenter encouraged CMS to move towards measuring whether meaningful shared decision-making has occurred, specifically through patient-reported measures. This commenter also requested that CMS establish clear standards for practices to ensure that clinicians have the skills and training to furnish shared decision-making services at a high level of quality and to effectively use shared decision-making tools. In addition, commenters recommended that shared decision-making be re-framed as an integral part of “shared care planning,” which occurs across a patient's lifespan rather than in a single episode of care and consists of two key elements: (1) Patients faced with a treatment decision must be informed about all the reasonable options, including doing nothing, and told what is known about the potential risks, benefits, and alternatives to those options; and (2) patients must be meaningfully involved in the decision-making process. A few commenters suggested CMS require that all seven criteria be met for an APM to be a Medical Home Model or a Medicaid Medical Home Model, and one of these commenters suggested CMS should define activities that demonstrate how those criteria can be satisfied. The same commenter also recommended adding an eighth element related to coordinating delivery of care with other services that address social determinants of health.

    Response: We thank the commenters for their suggestions. We believe that the suggestions may prove to be too prescriptive when setting standards that apply across many APMs, and we are concerned that imposing additional requirements would contradict our principle of supporting APM flexibility. For instance, we could develop an APM that addresses social determinants of health, but requiring social determinants of health to be an element of an APM in order for it to be considered a Medical Home Model would be so strict as to exclude as Medical Home Models APMs that are widely available or focused on discrete care improvement goals. Therefore, we continue to believe that defining Medical Home Model to require a small set of core characteristics of medical homes, along with a flexible set of additional characteristics, is the appropriate approach to maintain our principle to support APM flexibility. Defining Medical Home Model this way will allow for the inclusion of additional elements when actually creating a Medical Home Model to customize the APM for testing particular ways to improve the cost and quality of care.

    We are finalizing the definitions of Medical Home Model and Medicaid Medical Home Model with modifications to emphasize the requirement that the APM have a primary care focus, clarify the required versus additional elements, and add Obstetrics and Gynecology (specialty code 16) as a primary care specialty. We are finalizing the definitions as follows:

    A Medical Home Model or Medicaid Medical Home Model is an APM or a payment arrangement under title XIX, respectively, that we determine to have the following required elements:

    • Primary care focus with participants that include primary care practices or multispecialty practices that include primary care physicians and practitioners and offer primary care services. For the purposes of this provision, primary care focus means involving specific design elements related to eligible clinicians practicing under one or more of the following Physician Specialty Codes: 01 General Practice; 08 Family Medicine; 11 Internal Medicine; 16 Obstetrics and Gynecology; 37 Pediatric Medicine; 38 Geriatric Medicine; 50 Nurse Practitioner; 89 Clinical Nurse Specialist; and 97 Physician Assistant.

    • Empanelment of each patient to a primary clinician.

    In addition to these required elements, a Medical Home Model or Medicaid Medical Home Model must have at least four of the following additional elements (a schematic restatement of how the required and additional elements combine appears after this list):

    • Planned coordination of chronic and preventive care.

    • Patient access and continuity of care.

    • Risk-stratified care management.

    • Coordination of care across the medical neighborhood.

    • Patient and caregiver engagement.

    • Shared decision-making.

    • Payment arrangements in addition to, or substituting for, FFS payments (for example, shared savings, population-based payments).
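
    The finalized definition above amounts to a simple structural test: both required elements plus any four of the seven additional elements. The Python sketch below restates that test for readers who may want to screen payment models programmatically; it is illustrative only, and the element labels are shorthand introduced here, not terms defined in this final rule with comment period.

    # Illustrative restatement of the finalized Medical Home Model definition:
    # both required elements plus at least four of the seven additional elements.
    # Element labels are shorthand introduced for this sketch only.

    REQUIRED_ELEMENTS = {
        "primary_care_focus",
        "empanelment_to_primary_clinician",
    }

    ADDITIONAL_ELEMENTS = {
        "planned_chronic_and_preventive_care_coordination",
        "patient_access_and_continuity_of_care",
        "risk_stratified_care_management",
        "care_coordination_across_medical_neighborhood",
        "patient_and_caregiver_engagement",
        "shared_decision_making",
        "payment_arrangements_beyond_ffs",
    }

    def is_medical_home_model(apm_elements):
        """True when an APM has both required elements and at least four of
        the seven additional elements."""
        elements = set(apm_elements)
        has_required = REQUIRED_ELEMENTS <= elements
        additional_count = len(ADDITIONAL_ELEMENTS & elements)
        return has_required and additional_count >= 4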

    c. Other Definitions

    We believe that the proposed terms and definitions are sufficient to clearly implement the Quality Payment Program. These terms cover all steps of the APM Incentive Payment process, from participation in Advanced APMs to QP determinations and payment of incentives. We are aware that this is a complex program and that we proposed to define a significant number of terms. We believe that, in general, it is preferable to use a larger number of distinctive terms than to use fewer, broader terms that could overlap and convey different meanings in different contexts. For instance, Partial QP Patient Count Threshold is a highly specific term, but we believe that it is necessary in context because there are differences between QPs and Partial QPs, and there are differences between the payment amount and patient count thresholds used to determine whether an eligible clinician becomes a QP or a Partial QP.

    We sought comment on these terms, including our proposed definitions, the relationship between terms, any additional terms that we should formally define to clarify the explanation and implementation of this program, and potential conflicts with other terms used by CMS in similar contexts. We also sought comment on the naming of the terms and whether there are ways to name or describe their relationships to one another that make the definitions more distinct and easier to understand. For instance, we wanted to know if commenters believe there are more intuitive or efficient terms than those proposed that would still adhere to the statutory language and the intended purposes of the terms. In particular, we indicated that we would consider options for a framework of definitions that might more intuitively distinguish between APMs and Other Payer APMs and between APMs and Advanced APMs.

    We also sought comment on alternative terms or definitions that could be useful in the calculations described in the proposed regulations in §§ 414.1430, 414.1435, 414.1440, and 414.1445 of this final rule with comment period and easily understood by stakeholders.

    Comment: Some commenters expressed concern that non-physician practitioners are not included in the Advanced APM considerations and should be more explicitly represented in APM design. Some commenters requested that the Advanced APM CEHRT criterion should be waived for APMs that include non-physician practitioners because such clinicians were not eligible for incentive payments or subject to reduced Medicare payments related to the meaningful use of CEHRT under the Medicare EHR Incentive Program. Other commenters simply inquired about whether PTs, OTs, and SLPs are eligible to become QPs. One commenter found it confusing to use the term “professional” instead of the term “clinician.”

    Response: We appreciate that commenters expressed concern about the inclusion of non-physician practitioners in Advanced APMs. We believe it is important to clarify that physicians are not the only eligible clinicians who can become QPs. The list of eligible clinicians is defined in section 1833(z)(3)(B) of the Act (by cross-reference to the definition of “eligible professional” in section 1848(k)(3)(B)), and includes: physicians, physician assistants, nurse practitioners, clinical nurse specialists, certified registered nurse anesthetists, certified nurse-midwives, clinical social workers, clinical psychologists, registered dietitians or nutrition professionals, physical or occupational therapists, qualified speech-language pathologists, and qualified audiologists; and a group that includes these professionals.

    Therefore, any of those eligible clinicians who participate in Advanced APMs can become QPs for a year and receive the associated APM Incentive Payment. Each APM has its own focus, and many offer opportunities for non-physician practitioners to be participants. Although altering the design of existing or future APMs is beyond the scope of this final rule with comment period, we welcome ideas on how to further engage underrepresented clinicians as we work to develop more APM opportunities. Finally, we do not believe it would be appropriate to waive the Advanced APM CEHRT requirement for APM Entities that may comprise non-physician practitioners. We believe it is also important to note that, as described in full in section II.F.4.b.(1) of this final rule with comment period, the Advanced APM criteria describe requirements that apply within APMs, but not necessarily to all APM Entities or eligible clinicians in the APM. Under the finalized policy in section II.F.4.b.(1), an APM does not necessarily have to specify that all non-physician practitioners use CEHRT in order to be an Advanced APM.

    We are finalizing the definition of “eligible clinician” as proposed. Eligible clinician has the meaning of the term “eligible professional” as defined in section 1848(k)(3) of the Act, is identified by a unique NPI, and includes any of the following: a physician; a practitioner described in section 1842(b)(18)(C) of the Act; a physical or occupational therapist or a qualified speech-language pathologist; a qualified audiologist (as defined in section 1861(ll)(3)(B) of the Act); or a group that includes these professionals.

    We received no comments in response to our other proposed terms and definitions.

    We are finalizing all other definitions listed in this section as proposed.

    4. Advanced APMs

    This section defines and outlines the proposed criteria for Advanced APMs, APMs through which eligible clinicians would have the opportunity to become QPs as specified in section 1833(z)(3)(C) and (D) of the Act. Other Payer Advanced APMs, types of alternative payment arrangements related to the All-Payer Combination Option, are addressed in section II.F.7. of this final rule with comment period.

    An Advanced APM must, by statute, meet certain requirements, and we are finalizing policies for these requirements within this section. First, the broad category of APMs is defined at section 1833(z)(3)(C) of the Act, which states that an APM is any of the following: (i) A model under section 1115A (other than a health care innovation award); (ii) the Shared Savings Program under section 1899; (iii) a demonstration under section 1866C; or (iv) a demonstration required by federal law.

    We believed it was necessary to propose additional clarification around the requirements as defined in section 1833(z)(3)(C)(iv) of the Act, given the broad scope of programs and demonstrations required by federal legislation that are administered by the Department. We proposed that in order to be an APM as a “demonstration required by Federal law,” the demonstration must meet the following 3 criteria: (1) The demonstration must be compulsory under the statute; that is, the statute must not merely authorize, but must require, the agency to undertake the demonstration; (2) there must be some “demonstration” thesis that is being evaluated; and (3) the demonstration must require that entities participate in it under an agreement with CMS or under a statute or regulation. We solicited comments on our proposal for these criteria defining a demonstration required under Federal law.

    We received no comments regarding our proposal that these three criteria must be satisfied in order for a demonstration to be considered an APM as a “demonstration required by Federal law.”

    We are finalizing our proposal that an APM that is considered a demonstration required by Federal law is one that meets the following 3 criteria: (1) The demonstration must be compulsory under the statute; that is, the statute must not merely authorize, but must require, the agency to undertake the demonstration; (2) there must be some “demonstration” thesis that is being evaluated; and (3) the demonstration must require that entities participate in it under an agreement with CMS or under a statute or regulation.

    Second, to be considered an Advanced APM, an APM must meet all three of the following criteria, as required under section 1833(z)(3)(D) of the Act. The criteria are:

    • The APM must require participants to use CEHRT;

    • The APM must provide for payment for covered professional services based on quality measures comparable to those in the quality performance category under MIPS;

    • The APM must either require that participating APM Entities bear risk for monetary losses of a more than nominal amount under the APM, or be a Medical Home Model expanded under section 1115A(c) of the Act. For a discussion of Medical Home Models under this criterion, see section II.F.4.b.(6) of this final rule with comment period.

    In some cases, APMs offer multiple options or tracks with variations in the level of financial risk, or multiple tracks designed for different types of organizations, and we proposed to assess the eligibility of each such track or option within the APM independently. For instance, the Shared Savings Program has three distinct tracks; the Comprehensive ESRD Care Initiative (CEC) consists of a two-sided track for large dialysis organizations and a one-sided track for non-large dialysis organizations, which may elect to participate in the two-sided risk track beginning in 2017; and the Next Generation ACO Model has two risk arrangement options that feature different levels of financial risk.

    Significant distinctions between the design of different tracks or options may mean that some tracks or options within an APM would meet the Advanced APM criteria while other tracks or options would not. For example, APM Entities may have the option to assume two-sided risk (meaning that they bear a portion of the losses when spending exceeds expectations and share in the savings when spending is below expectations) or one-sided risk (meaning that they share in the savings when spending is below expectations, but do not bear a portion of the losses when spending exceeds expectations) under an APM. If the one-sided risk track does not meet the standard for financial risk as discussed in section II.F.4.b.(3) of this final rule with comment period, APM Entities in this track would not be Advanced APM Entities, whereas those in the two-sided risk track could be Advanced APM Entities. In these instances, we would distinguish that the APM is only an Advanced APM for specific options or tracks.

    The following is a summary of the comments we received regarding our proposal to make Advanced APM determinations for each individual track or option within an APM when applicable.

    Comment: Commenters expressed general agreement that in cases where APMs offer multiple options or tracks, we should evaluate each option or track against the Advanced APM criteria independently.

    Response: We thank the commenters for their responses and agree that this proposal is logical.

    We are finalizing the proposal to consider different tracks or options within an APM separately for purposes of making Advanced APM determinations. All entities participating in Advanced APMs are Advanced APM Entities, and distinguishing between the model and the participating entities allows us to directly identify and discuss the requirements unique to each. This approach to identifying Advanced APMs and Advanced APM Entities is also consistent with our finalized proposals for determining QPs, described in section II.F.5. of this final rule with comment period, at the Advanced APM Entity level.
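
    To illustrate how the finalized track-level policy operates, the following is a minimal sketch, in Python, of screening each track of a hypothetical APM against the three statutory criteria discussed in this section. The data structure, field names, and example tracks are hypothetical; this is not a CMS system, an actual APM's terms, or an official Advanced APM determination.

```python
# Illustrative sketch only. The field names, data structure, and example
# tracks are hypothetical and do not represent a CMS system, an actual APM's
# terms, or an official Advanced APM determination.

def is_advanced_apm_track(track):
    """A track meets the Advanced APM criteria only if it (1) requires CEHRT
    use, (2) bases payment on quality measures comparable to those under
    MIPS, and (3) either requires more than nominal financial risk or is a
    Medical Home Model expanded under section 1115A(c) of the Act."""
    return (
        track["requires_cehrt_use"]
        and track["pays_on_mips_comparable_quality_measures"]
        and (track["more_than_nominal_financial_risk"]
             or track["expanded_medical_home_model"])
    )

# Hypothetical APM with a one-sided risk track and a two-sided risk track.
example_apm_tracks = {
    "Track A (one-sided risk)": {
        "requires_cehrt_use": True,
        "pays_on_mips_comparable_quality_measures": True,
        "more_than_nominal_financial_risk": False,
        "expanded_medical_home_model": False,
    },
    "Track B (two-sided risk)": {
        "requires_cehrt_use": True,
        "pays_on_mips_comparable_quality_measures": True,
        "more_than_nominal_financial_risk": True,
        "expanded_medical_home_model": False,
    },
}

# Each track is assessed independently, so an APM may be an Advanced APM
# for some tracks or options and not for others.
for name, track in example_apm_tracks.items():
    if is_advanced_apm_track(track):
        print(f"{name}: Advanced APM track")
    else:
        print(f"{name}: not an Advanced APM track")
```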

    a. Advanced APM Determination

    To determine Advanced APMs and to support transparency for the Quality Payment Program, we proposed to establish a process by which we identify and notify the public of the APMs (including specific APM tracks or options) that would be considered Advanced APMs for a QP Performance Period. We indicated that we would post an initial notification to our Web site prior to the beginning of the first QP Performance Period and update the information on a rolling basis as explained below. We believed that making this information available in a timely and accessible format is important for stakeholders to understand how we apply the Advanced APM criteria to existing APMs and to be informed as early as possible about whether an APM they are considering joining is an Advanced APM.

    We proposed two phases of Advanced APM determinations and notice. First, we proposed to release an initial set of Advanced APM determinations no later than January 1, 2017, for APMs that will be operating during the first QP Performance Period. Second, for new APMs announced after January 1, 2017, we would include the Advanced APM determination in conjunction with the first public notice of the APM, such as the Request for Applications (RFA) or final rule. In preliminary discussions of potential APMs, such as proposed rules, we will provide a non-binding determination based on the proposed APM design. We proposed that determinations of Advanced APMs would be posted on our Web site and updated on an ad hoc basis to the extent feasible, but no less frequently than annually, as new APMs become available and others end or change. Both the initial and ad hoc notifications would contain descriptions of whether each track or option within an APM is or is not an Advanced APM. We believe that this proposal addresses both the interest in immediate dissemination of Advanced APM determinations for the existing APM portfolio following finalization of this rule and the need to make Advanced APM status a regular part of the development and release of new APMs in the future.

    We solicited comment on the proposals for both the initial and ad hoc notices of Advanced APM determinations. In particular, we solicited comments on optimal times, locations, formats, and other methods of notice of Advanced APM determinations to promote clarity and consistency as to which APMs are considered Advanced APMs for a particular QP Performance Period.

    The following is a summary of the comments we received regarding our proposed process to make and notify the public of Advanced APM determinations.

    Comment: Some commenters requested that we develop a transparent public process for determining which APMs are Advanced APMs. For example, some commenters stated there should be a public comment process before each APM is determined to be an Advanced APM. Other commenters stated that the public should have input into how we determine which APMs are Advanced APMs. Other commenters simply stated that they want timely information necessary to make educated decisions about APM participation.

    Response: We agree with the commenters that transparency is important to the future development and determination of Advanced APMs. This rulemaking process is part of that public input process and gives stakeholders and the public an opportunity to provide input into the criteria and process by which we make and announce Advanced APM determinations, as well as develop new APMs. Our proposal described how we would make Advanced APM determinations publicly available as new models are announced. We also publish Requests for Information (RFIs) and proposed rules for purposes of developing certain APMs, which are further opportunities for public input as we make Advanced APM determinations. In addition, the PTAC, as described in section II.F.10. of this final rule with comment period, represents a significant new pathway for the public to offer new ideas for implementation as APMs.

    However, we do not find it practical or meaningful to hold a public comment process regarding each Advanced APM determination. These determinations will be factual applications of the Advanced APM criteria, as prescribed by statute and established in this final rule, to the design of a particular APM. The opportunities for meaningful input are in the development of the criteria and the APMs, but not in the administrative task of determining whether the APM meets the Advanced APM criteria. Soliciting input on Advanced APM determinations for individual models could also significantly delay when stakeholders would know whether an APM is an Advanced APM.

    Comment: Several commenters urged us to make the first round of official Advanced APM determinations either in this final rule or as soon as possible after this final rule is published, with subsequent updates in a timely manner that allows for APM participation decisions based on those determinations. The commenters expressed that knowing the Advanced APM status of an APM is very important to decisions regarding participation. Some commenters expressed frustration that for 2017 they had to make APM participation decisions prior to the publication of this final rule with comment period. Commenters also stated that the expiration of the APM Incentive Payment after 6 years puts additional pressure on clinicians to join Advanced APMs as soon as possible, and expressed a wish for a greater number of immediate opportunities to participate in Advanced APMs.

    Response: We appreciate the comments and support for our proposed policy regarding the timeliness of Advanced APM determinations, and we agree that it is essential going forward that we provide determinations as soon as practicable in order to support decision making by eligible clinicians and entities. Following publication of this final rule with comment period, we will release the 2017 list of Advanced APMs as soon as possible but no later than January 1, 2017. Then, we will update this list with each material APM amendment or new APM release.

    We understand the difficulties of a notice and comment rulemaking process when eligible clinicians and entities are trying to make business decisions that can be impacted by policy decisions in a final rule. The proposed rule offered our early thoughts as to which APMs might be considered Advanced APMs in 2017. We encourage stakeholders to keep in mind that the designs of APMs themselves offer substantial rewards, and we believe that those design elements should be the primary considerations for eligible clinicians and entities in deciding whether or not to participate in a given APM. Also, in sections II.F.1. and II.F.10. of this final rule with comment period, we discuss how we plan to increase the number of Advanced APM opportunities each year.

    Some concerns expressed by commenters about Advanced APM determinations are closely linked with our policies on the QP Performance Period and the MIPS performance period, and the interaction between these policies. We encourage commenters to see the discussion of the QP Performance Period in section II.F.5.a. of this final rule with comment period, which describes the operational relationships that address the need to make QP determinations in time for their exclusion from MIPS.

    Comment: Some commenters suggested that we collaboratively develop and vet the format and style of Advanced APM notifications with stakeholders to make them as helpful as possible to potential participants.

    Response: Although the development of education and outreach materials regarding the Quality Payment Program is a subregulatory activity, we agree with commenters and plan to actively engage relevant stakeholders in the development of our messages and materials. Materials related to particular APMs are outside the scope of this final rule with comment period.

    We are finalizing the process for determining Advanced APMs and notifying the public of those determinations as proposed. We will release an initial set of Advanced APM determinations as soon as possible but no later than January 1, 2017. For new APMs that are announced after the release of our initial set of Advanced APM determinations, we will include Advanced APM determinations in conjunction with the first public notice of the APM, such as the Request for Applications (RFA) or final rule. Likewise, if we make changes to an APM that change the determination of whether the APM is an Advanced APM, we will include public notice with the announcement. All determinations of Advanced APMs will be posted on our Web site and updated on an ad hoc basis, but no less frequently than annually, as new APMs become available and others end or change. Both the initial and ad hoc determinations will include descriptions of whether each track or option within an APM is or is not an Advanced APM.

    In section II.F.7. of this final rule with comment period, we finalize how we would identify Other Payer Advanced APMs. The Other Payer Advanced APM identification process goes into effect starting in the third QP Performance Period (applicable for payment year 2021) and aligns with the availability of the All-Payer Combination Option for QP determinations.

    b. Advanced APM Criteria

    Under the statute, for an APM to be an Advanced APM it must meet the criteria set forth in sections 1833(z)(3)(C) and (D) of the Act and discussed below. An Advanced APM must be an APM that:

    • Requires its participants to use certified EHR technology (CEHRT), as described in section II.F.4.b.(1) of this final rule with comment period;

    • Provides for payment for covered professional services based on quality measures comparable to measures under the quality performance category under MIPS, as described in section II.F.4.b.(2) of this final rule with comment period; and

    • Either (a) requires its participating Advanced APM Entities to bear financial risk for monetary losses that are in excess of a nominal amount, as described in section II.F.4.b.(3) of this final rule with comment period, or (b) is a Medical Home Model expanded under section 1115A(c) of the Act, as described in section II.F.4.b.(4) of this final rule with comment period.

    These requirements as set forth in the statute and as proposed must be met through the design of the APM. Whether an APM is an Advanced APM depends solely upon how the APM is designed, rather than on assessments of participant performance within the APM. Some stakeholders have suggested that actual performance (for example, on CQMs or on whether the Advanced APM Entity generates savings) be considered in the determination of QPs. However, the incentives for Advanced APM participation, as specified under section 1833(z) of the Act, do not provide for consideration of actual performance in making such determinations. Performance assessments are already part of APMs, and we believe it is important and consistent with the statutory framework to continue to foster flexibility in structuring the specific rewards and consequences of performance within each APM.

    For example, an APM that ties payments to performance on quality measures comparable to those under MIPS may be an Advanced APM regardless of an Advanced APM Entity's actual performance on those quality measures. If an Advanced APM Entity fails to meet quality performance standards under the Advanced APM, it would face consequences within the Advanced APM, such as financial penalties, loss of access to data or certain waivers, or termination of its participation agreement. The termination scenario would have the downstream effect of terminating Advanced APM Entity status and the eligible clinicians' potential eligibility for the APM Incentive Payment because the entity would no longer be participating in the Advanced APM. As another example, an Advanced APM Entity that bears more than nominal financial risk for monetary losses in accordance with the standards set forth in section II.F.4.b.(3) of this final rule with comment period would be an Advanced APM Entity regardless of whether it actually earns savings or generates losses under the Advanced APM. This would work similarly for an Other Payer Advanced APM.

    We do not intend to add additional performance assessments on top of existing Advanced APM standards. As stated in the discussion of policy principles at the beginning of section II.F.1. of this final rule with comment period, the proposed QP determination process assesses the relative degree of participation of the Advanced APM Entity and eligible clinician in Advanced APMs, not their performance as assessed under the APM. The Quality Payment Program would not alter how each particular APM measures and rewards success within its design. Rather, the APM incentive track of the Quality Payment Program rewards a substantial degree of participation in Advanced APMs.

    The following is a summary of the comments we received generally regarding our Advanced APM criteria proposals.

    Comment: Many commenters stated their belief that the Advanced APM criteria are generally too complex and restrictive and that they should be simpler and more flexible in order to: (1) Reflect the current level of clinician readiness for quality measurement, EHR use, and risk arrangements; (2) allow more APMs to meet the criteria; and (3) encourage broad participation in APMs. Some commenters believe that the most popular APMs should be Advanced APMs, and that if APMs are not Advanced APMs, clinicians will be deterred from participation in a way that could reverse recent progress towards greater APM participation. Some commenters stated that all Innovation Center models should be considered Advanced APMs, regardless of whether or not they meet the criteria. Other commenters suggested that we reward clinicians for demonstrating movement toward APMs or Advanced APMs or that we consider APMs that move toward meeting the Advanced APM criteria in the future to be deemed Advanced APMs in the interim. One commenter recommended that there should be two paths: One that most clinicians should strive for, and a more difficult path restricted to 20 percent of all clinicians. One commenter recommended that CMS allow small practices that report quality via a QCDR to be considered as participants in the Advanced APMs.

    Response: We understand that commenters consider the Advanced APM criteria too complex or too demanding. We agree with commenters that, all else equal, less complex criteria are preferable, regardless of the underlying difficulty for APMs to meet the criteria. Accordingly, in finalizing this rule, we have made several policy changes to simplify the Advanced APM criteria: for the CEHRT criterion, we are not increasing over time the percentage of use that an APM must require; and for the financial risk criterion, we are eliminating the marginal risk components of the nominal amount standard.

    Regarding the stringency of the criteria, we agree with some of the concerns raised by commenters. In particular, we agree that the magnitude of the proposed requirements for the financial risk criterion was too high, and we are modifying our final policies accordingly. On the other hand, it has never been our expectation that all or most clinicians will participate in Advanced APMs immediately, nor do we believe that was the statutory intent. As such, we do not believe the fact that not all APMs qualify as Advanced APMs in itself implies that the criteria are overly stringent. We worked within the statutory structure to define the Advanced APM criteria, which inherently are meant to distinguish between APMs with more and less challenging terms. That said, we believe that all APMs offer meaningful opportunities and benefits to clinicians, particularly as on-ramps to eventual participation in Advanced APMs.

    Finally, it is important for commenters and stakeholders to keep in mind that no eligible clinicians or APM Entities are directly subject to these Advanced APM criteria. These are standards used to determine which APMs are Advanced APMs. The APM designs contain the terms under which APM Entities participate, and many Advanced APMs will have requirements that far exceed the Advanced APM criteria set in this final rule. Changing the Advanced APM criteria will not affect the requirements for participants in any particular Advanced APM.

    Comment: Some commenters stated that the Advanced APM criteria should be wholly different from those proposed. For instance, some commenters expressed that we should synchronize the criteria with the APM Framework developed by the Health Care Payment Learning and Action Network (LAN) APM Framework and Progress Tracking Work Group, which classified four categories of APMs.

    Some commenters were supportive of the Advanced APM framework but expressed support for additional criteria for determining Advanced APMs, such as demonstrating that the payment approach will reinforce the delivery of coordinated patient- and family-centered care; requiring a clinical care model that reinforces a strong primary care foundation; a focus on care coordination; shared care planning; health IT infrastructure development; population health management; risk management; and an emphasis on consumer experience, among others. One commenter suggested that we remove the Advanced APM designation from Advanced APMs that fail to demonstrate successful outcomes.

    Response: We appreciate the input on how Advanced APMs should be determined and designed, and we agree that these concepts are important in the design of particular APMs. However, the statute specifies the criteria we must use to determine Advanced APMs, and we are implementing the statutory criteria in this rulemaking process. Although the comments on additional specifications for Advanced APMs are beyond the scope of this final rule with comment period, we remind the commenters of the PTAC, as described in section II.F.10. of this final rule with comment period, and note that commenters can submit ideas for APM designs directly to the Innovation Center.

    (1) Use of Certified EHR Technology

    The first criterion an APM must meet to be considered an Advanced APM is that it requires participants in the APM to use certified EHR technology (as defined in section 1848(o)(4) of the Act), as specified in section 1833(z)(3)(D)(i)(I) of the Act.

    (a) Definition of Certified EHR Technology

    For this Advanced APM criterion, we proposed to adopt the definition of CEHRT proposed for MIPS under § 414.1305. In the 2015 EHR Incentive Programs final rule (80 FR 62872 through 62873), we established the definition of CEHRT that must be used by EPs to meet the Meaningful Use objectives and measures in specific years. This definition is similar to the definition that applies to eligible hospitals, CAHs, and EPs in the Medicare EHR Incentive Programs. The definition includes the certification criteria for a wide range of standards for use in capturing patient health information such as vital signs, medications and medication allergies, problem lists, and lab results, among other data elements included in the common clinical data set (CCDS). It also includes the certification criteria and standards for functions related to information exchange, patient engagement, quality reporting, and protecting the privacy of electronic protected health information. For further information on the certification criteria, see the 2015 Edition Certification Criteria final rule (80 FR 62602 through 62759) and, for example, Table 8: “Common Clinical Data Set” (80 FR 62696).

    This approach aligns the APM health IT certification requirements for Advanced APMs with those used by MIPS eligible clinicians. We understand this proposed CEHRT definition may include some EHR functionality used by MIPS eligible clinicians that may be less relevant for an APM participant, and likewise APM participants may use additional functions that are not required for MIPS participation. However, we observe that APM participants often work in the same office space, group, entity, or organization with eligible clinicians that are not APM participants. At times they might share common resources, such as the same EHR system. Using the same CEHRT definition for both MIPS and Advanced APMs would allow eligible clinicians to continue to use shared EHR systems and would give them the flexibility to participate as a MIPS eligible clinician or as an eligible clinician in an Advanced APM without needing to change or upgrade EHR systems. Although updates to the certified health IT for APM participants, MIPS participants, or both may be necessary in future years, we believe that aligning the APM and MIPS definitions of CEHRT is appropriate at this time.

    We solicited comment on the proposed definition of CEHRT for Advanced APMs.

    The following is a summary of the comments we received regarding our proposal to adopt the same definition of CEHRT for Advanced APMs as proposed for MIPS.

    Comment: Many commenters expressed strong support for aligning the definition of CEHRT for Advanced APMs with the definition of CEHRT used in MIPS. Several commenters suggested this alignment would reduce administrative costs and reduce confusion among clinicians. One commenter suggested the CEHRT definition be more specific and rigorous. Some commenters suggested specific features and functionality (for example, empanelment of patients, stratification of the patient population, display of eCQM results by clinician and practice site) should be included as required components of the CEHRT definition. One commenter indicated that all Advanced APMs will have different HIT needs; therefore, specific HIT features should not be required for all Advanced APMs.

    Response: We appreciate the commenters' suggestions and support for the proposed definition of CEHRT for Advanced APMs. Although a few commenters suggested the CEHRT definition include additional health IT capabilities not included in the CEHRT definition, we believe it is more important to maintain consistency across programs at this time. We also note that, although Advanced APMs must require eligible clinicians in Advanced APM Entities to use systems that at least meet the CEHRT definition, APMs have the flexibility to set additional health IT requirements as necessary to support specific criteria or goals under the APM.

    Comment: A commenter suggested that the care plan criterion finalized in the 2015 Edition Certification Criteria should also be included in the CEHRT definition.

    Response: The ONC health IT certification program defines the testing and certification criteria for a wide range of potential standards and functions for certified health IT beyond those used for the meaningful use objectives and measures. In some cases, these criteria support other specific CMS program needs; in other cases, they may relate to public quality improvement initiatives in the health care industry. For example, the filtering criteria for eCQMs may support advanced electronic clinical quality measurement by APMs, and the care plan certification criterion may support care coordination especially in chronic disease management. Both of these new functions are available for clinicians within the 2015 Edition, and clinicians may use health IT modules certified to these criteria to support quality measurement, clinical practice improvement activities and participation in an APM or other payer arrangement.

    The CEHRT definition merely sets the baseline health IT that eligible clinicians must have to meet the meaningful use objectives and measures, which are designed to be applicable for a wide range of clinician types in a diverse range of settings. These requirements are not intended to prevent clinicians from electing to use more advanced functions or to use health IT in other ways. Rather, the CEHRT definition is intended to ensure that a user has the tools needed to succeed in meeting the objectives and measures, without creating additional burden to obtain health IT unrelated to their practice. As stated in the proposed rule at 81 FR 28299, we intend to maintain continuity for APM participants with the definition recently finalized for the eligible clinicians participating in the MIPS advancing care information performance category, described at § 414.1305. This is also consistent with the EHR Incentive Program's CEHRT definition at 42 CFR 495.4. Therefore, we are finalizing the same definition of CEHRT under the Advanced APM CEHRT use criterion as we have finalized for MIPS at § 414.1305. We will consider whether to include the care plan and other potentially new or advanced certified health IT modules in future rulemaking.

    Comment: One commenter believes that a strong, broad Health IT infrastructure should be a key element used to identify Advanced APMs rather than the narrow proposed CEHRT criteria. This commenter defined this as the adoption of EHRs, patient registries, or an alternative IT architecture that allows for timely exchange of health data with other clinicians involved in a patient's care and generation of meaningful data analytics. One commenter recommended that CMS engage the health IT community before introducing additional APMs that rely heavily upon IT products and services, especially if those models have unique or specialized technology requirements.

    Response: We agree that Advanced APMs need a strong health IT infrastructure as a foundation for communicating and delivering comprehensive and coordinated care to their patients. However, we also believe that it is important to leave flexibility for individual models to tailor their health IT requirements to the needs of their particular population and goals. Section 1833(z)(3)(D)(i)(I) of the Act requires that Advanced APMs require their APM participants to use CEHRT (as defined in section 1848(o)(4) of the Act), and we continue to believe the definition we proposed meets this criterion while maintaining flexibility for individual APMs to set broader health IT requirements.

    We are finalizing the definition of CEHRT for Advanced APMs as proposed. We believe the CEHRT definition for Advanced APMs aligns the APM health IT certification requirements for Advanced APMs with those used by MIPS eligible clinicians and will permit eligible clinicians using shared systems to participate in both programs without requiring changes to their health IT systems.

    (b) Requiring the Use of CEHRT

    The statute does not specify the number of eligible clinicians who must use CEHRT or how CEHRT must be used in an Advanced APM. We believe we have discretion to define the ways in which an Advanced APM requires the use of CEHRT. In accordance with section 1833(z)(3)(D)(i)(I) of the Act, we proposed that an Advanced APM must require at least 50 percent of eligible clinicians (or each hospital if hospitals are the APM participants) to use the certified health IT functions outlined in the proposed definition of CEHRT to document and communicate clinical care with patients and other health care professionals. Communicating clinical care means that other eligible clinicians and/or the patient can view the clinical care information. We also proposed an alternative set of criteria, applicable only to the Shared Savings Program, for demonstrating the use of CEHRT by eligible clinicians participating in ACOs so that the Shared Savings Program could be an Advanced APM, as discussed further below. We proposed that the 50 percent CEHRT use threshold would be confined to the first QP Performance Period (proposed to be 2017, as discussed later in this final rule with comment period). That is, only in 2017 could APMs use the 50 percent threshold for eligible clinicians in each participating entity to meet the use of CEHRT requirement. We proposed that the threshold requirement for use of CEHRT would increase to 75 percent beginning with the second QP Performance Period (proposed to be 2018). The CEHRT requirement for Advanced APMs in which hospitals are the participants would remain the same over time because it is an all-or-nothing requirement of the hospital as a single entity.

    We believe there are a few reasons why having a lower threshold requirement for the use of CEHRT by the eligible clinicians participating in an APM Entity in the first year is appropriate. First, we wanted to ensure that APMs have sufficient time to alter their terms and conditions to meet this standard. We also acknowledge that eligible clinicians would be expected to upgrade from technology certified to the 2014 Edition to technology certified to the 2015 Edition for use in 2018, and some eligible clinicians who have not yet adopted CEHRT may wish to delay acquiring CEHRT products until a 2015 Edition certified product is available.

    This CEHRT requirement would be based on the requirements that an APM places on its participating APM Entities. In determining whether an APM meets this criterion, we did not propose to assess the level of use of each APM Entity or individual eligible clinician participating in the APM but rather whether the APM requirements meet the standard set forth in the proposed rule.

    We invited comment on whether the proposed thresholds for use of CEHRT for APM Entities that are not hospitals (50 percent for the first QP Performance Period (proposed 2017) and 75 percent for the second QP Performance Period (proposed 2018) and later) are appropriate, or if we should consider additional options such as a higher or lower percentage in 2018, or an additional incremental increase for 2019. We also invited comment on whether we should consider higher thresholds for APMs that target eligible clinician populations with higher-than-average adoption of certified health IT, such as eligible clinicians in patient-centered medical homes. Finally, we invited comment on whether we should explore ways to set lower thresholds for those APMs targeting eligible clinician populations that may have lower average adoption of certified health IT, such as specialty-focused APMs.

    The following is a summary of the comments we received regarding the proposed thresholds for use of CEHRT for APM Entities that are not hospitals.

    Comment: Numerous commenters supported the proposed criterion for the 2017 QP Performance Period. However, the majority of those commenters stated that CMS should not raise the CEHRT use requirement to 75 percent in 2018 and later. A few commenters requested that CMS provide more time to meet the 50 percent requirement, that CMS should have lower thresholds for certain specialties, or that any increase be gradual. Many commenters indicated that raising the threshold in 2018 to 75 percent would be unattainable for some APM Entities. Some commenters also suggested that this criterion not apply if their MIPS advancing care information performance category weight is reduced to zero (for example, because they are hospital-based, have insufficient internet coverage, are non-patient facing, or were not previously included as an Eligible Professional in the Meaningful Use program). Another commenter supported the threshold but indicated some specialties should be excluded.

    Response: We appreciate the support for the proposed CEHRT use threshold of 50 percent for Advanced APMs for the 2017 performance period. We believe that setting the threshold at 50 percent of eligible clinicians allows APMs sufficient room to meet this requirement even if the APM includes some participants who do not have internet access, lack face-to-face interactions, or are hospital-based. We understand the commenters' concerns that raising the threshold to 75 percent in 2018 may create an overly rigorous standard for Advanced APMs and agree that it would be prudent to wait until we have more information on how the threshold would impact specific APMs, such as specialty APMs, before increasing the threshold, if at all. As a result, we are not finalizing our proposal to increase the required CEHRT use threshold to 75 percent after the first QP Performance Period.

    Comment: Alternatively, a few commenters supported raising the threshold for CEHRT, especially for APMs with above average health IT adoption among participants, and another commenter supported increasing the threshold for CEHRT use in Advanced APMs over time. Some commenters indicated that the requirement to use CEHRT should not be based on any threshold but instead be based only on an attestation of CEHRT adoption by the Advanced APM eligible clinicians. One commenter requested CEHRT use be limited to a 90-day period in 2017.

    Response: We agree with the commenter that certain APMs have APM Entities that may be able to meet a higher CEHRT use threshold. We note that some current APMs include CEHRT use requirements that exceed a 50 percent threshold. Since we expect many, widely varied APMs to be developed and implemented over the next few years, we believe we should use this time to gather more information on which APMs would be able to meet a higher Advanced APM CEHRT use requirement. We do not believe a 90-day period of use is a meaningful standard because the CEHRT is used by eligible clinicians principally as a medical record to document and communicate the clinical care they provide to their patients. Medical record documentation of clinical care is an ongoing activity, and therefore we see no reason to limit this criterion to a 90-day period. We want to clarify for commenters that the requirement for CEHRT use in order for an APM to be an Advanced APM is applicable to the APM, not necessarily to all of the APM participants. The Advanced APM itself could have more stringent requirements and require the use of CEHRT in a variety of ways so long as it requires that at least 50 percent of the eligible clinicians in each APM Entity use CEHRT. We do not discount the value of the commenters' suggestions but rather believe that they could or should be incorporated into APM design rather than adopted as the minimum requirement for an APM to be considered an Advanced APM. We appreciate the commenter's suggestion that we ascertain whether the CEHRT criterion is met by having the eligible clinicians who are participating in Advanced APMs attest that they have adopted CEHRT rather than applying the 50 percent threshold, but we believe the use of a threshold best defines how an Advanced APM must require its participants to use CEHRT in accordance with the statutory CEHRT use criterion.

    Comment: One commenter recommended that a different threshold regarding the adoption of certified HIT should apply to any potential pathology-focused APM because laboratory information systems are not considered certified HIT or EHR technology.

    Response: Presently, CMS does not have an Advanced APM that includes individual pathologists as participants of the APM. We will monitor this issue for new APMs and consider the applicability of the CEHRT requirement for APMs in which the majority of the eligible clinicians do not use CEHRT due to lack of certified systems for a particular specialty.

    Comment: A few commenters stated agreement with a uniform CEHRT use threshold for all Advanced APMs other than the Shared Savings Program.

    Response: We thank commenters for their support, and agree that the CEHRT use threshold should be consistent across APMs other than the Shared Savings Program, for which we are finalizing a different use of CEHRT requirement.

    Comment: A few commenters urged CMS to provide flexibility so that an APM would meet the EHR criterion to be an Advanced APM if it allowed eligible clinicians working in a facility such as a hospital that has CEHRT to be deemed to be using CEHRT. A commenter requested that CMS consider models such as BPCI and CJR as meeting this criterion if participating hospitals are using CEHRT. A commenter also indicated that as a medical group participating in BPCI Model 2, it uses CEHRT and thus should meet this criterion. Another commenter stated that use of any technology within an APM should not imply ownership, control, or the ability of any single user to meet overarching, explicit criteria. The commenter stated that over 90 percent of the nation's hospitals have achieved Meaningful Use, but hospitalists are unlikely to be counted in the 50 percent threshold of “use” as currently proposed by CMS.

    Response: We reiterate that the use of CEHRT criterion applies to APMs and the requirements they impose on participating APM Entities, not to the individual APM Entities participating in APMs. For instance, the use of CEHRT criterion would be applied to the design of an APM to assess whether it has a requirement that its participants use CEHRT in a prescribed manner that meets this Advanced APM criterion. We assess the APM's requirements to determine whether an APM meets the Advanced APM CEHRT criterion. A participant cannot meet this criterion simply by using CEHRT; the APM must require the use of CEHRT in its terms and conditions, or a regulation or other legal vehicle through which APM Entities are held accountable. Conversely, an Advanced APM Entity that fails to meet the requirement to use CEHRT under the Advanced APM would have consequences under the terms of the Advanced APM, but such failure to meet the requirement has no bearing on whether or not the APM itself is an Advanced APM. Therefore, it would not be appropriate or practical to build in specific policies around attestation of CEHRT use by eligible clinicians or APM Entities, or to carve out policies for specific clinician types or settings. We further note that, as proposed, the 50 percent CEHRT use threshold pertains only to the requirements that the APM imposes on eligible clinicians within its participating APM Entities. However, if the APM is one in which hospitals are the main participants, then we proposed that the APM must require hospitals to maintain CEHRT in order for the APM to be an Advanced APM. We do not believe that the use of CEHRT requirement implies that the physicians or other participants must invest in duplicative technology to participate in the APM, but rather that the APM must require a certain threshold level of CEHRT use to document and communicate clinical care for their patient population. As is noted above, the use of CEHRT criterion for an Advanced APM is based on the requirements that an APM places on its participating APM Entities. Therefore, in APMs where an APM Entity may use CEHRT in its operations and participation in the APM, but the APM does not explicitly require the use of CEHRT by the APM Entity, the APM would not meet the use of CEHRT criterion for an Advanced APM.

    Comment: One commenter recommended that CMS require clinicians participating in the CJR and Bundled Payment for Care Improvement (BPCI) models report data for the advancing care information MIPS performance category and allow that reporting to meet the CEHRT requirement.

    Response: We appreciate the suggestion. We believe the proposals for CEHRT use can be applied to these APMs as proposed, and therefore there is no need to establish additional detail for the mechanism of requiring CEHRT. As stated above, it is the APM that must require the use of CEHRT in order to meet this Advanced APM requirement, not individual entities or clinicians. Consequently, reporting advancing care information to MIPS is not a substitute for the APM meeting this Advanced APM requirement. We also considered these comments in developing proposed amendments to CJR (see 81 FR 50793).

    Comment: One commenter recommended that the Advanced APM use of CEHRT criterion be aligned with advancing care information in MIPS.

    Response: The definition of CEHRT for MIPS and Advanced APMs will be the same. However, to require that CEHRT use requirements in Advanced APMs be aligned with the MIPS advancing care information performance category would go beyond what the statute requires, and as we have stated, we generally want APMs to retain the flexibility to require activities performed using CEHRT that may vary from those prescribed under the advancing care information performance category in MIPS.

    Comment: Some commenters sought additional clarity in how APMs would identify their respective denominator of eligible clinicians. Commenters suggested that CMS represent the method for calculating the denominator of eligible clinicians using a mathematical expression, as well as how the level of proof required would translate to an entity-level percentage-based measurement.

    Response: We will assess for each APM whether the requirements for CEHRT use meet the threshold for an Advanced APM. We will require that each APM have procedures in place to ensure that its requirements for the use of CEHRT are met. Additionally, the methods used to ascertain whether the 50 percent CEHRT use threshold is met may be unique to each APM. We do not intend to prescribe for APMs the mechanism for enforcement of their CEHRT use requirement.

    We are finalizing our proposal that an Advanced APM must require at least 50 percent of eligible clinicians in each APM Entity to use the certified health IT functions outlined in the proposed definition of CEHRT to document and communicate clinical care with patients and other health care professionals. However, we are not finalizing our proposal to increase the required CEHRT use threshold to 75 percent in the subsequent year. We will maintain the 50 percent CEHRT use requirement for the second performance year and beyond and consider making any potential changes through future rulemaking. If the APM has hospitals as its APM Entities, the APM would need to require the hospitals to use CEHRT in order to be an Advanced APM, and the 50 percent threshold does not apply. We will monitor the level of CEHRT use that is required in current APMs and assess the applicability of this criterion to new APMs. We will continue to consider additional changes to the CEHRT use criterion for Advanced APMs in future rulemaking, particularly with respect to Other Payer Advanced APMs.
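
    As a simple arithmetic illustration of the finalized 50 percent threshold, the following is a minimal sketch, in Python, using a hypothetical APM Entity roster. It is not a CMS calculation tool, and, as noted above, the methods used to ascertain whether the threshold is met may be unique to each APM.

```python
# Illustrative sketch only. The counts below are hypothetical, and the method
# each APM uses to ascertain and enforce its CEHRT use requirement may vary.

def meets_cehrt_use_threshold(clinicians_using_cehrt, total_eligible_clinicians,
                              threshold=0.50):
    """Check whether at least 50 percent of the eligible clinicians in an APM
    Entity use CEHRT, the level an Advanced APM must require."""
    if total_eligible_clinicians <= 0:
        raise ValueError("An APM Entity must have at least one eligible clinician.")
    share = clinicians_using_cehrt / total_eligible_clinicians
    return share >= threshold

# Hypothetical APM Entity with 20 eligible clinicians.
print(meets_cehrt_use_threshold(12, 20))  # True: 60 percent meets the threshold
print(meets_cehrt_use_threshold(9, 20))   # False: 45 percent falls short
```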

    (c) Requiring Use of CEHRT in the Shared Savings Program

    We also proposed an alternative criterion for determining whether an APM meets the CEHRT use requirement, exclusively for the Shared Savings Program. We believe this method is appropriate for the Shared Savings Program because although the Shared Savings Program requires ACOs to encourage and promote the use of enabling technologies (such as EHRs) to coordinate care for assigned beneficiaries, a specific level of CEHRT use is not required for participation in the program. Instead, the Shared Savings Program includes an assessment of EHR use as part of the quality performance standard which directly impacts the amount of shared savings/losses generated by the Shared Savings Program ACO. In contrast to APMs authorized by section 1115A of the Act, we would have to undertake significant rulemaking to adopt an eligibility standard for the Shared Savings Program that is consistent with the criterion for other APMs. Following such rulemaking, we would have to collect additional information from each existing and applying ACO outside the routine application process in the weeks prior to the start of the 2017 performance year. We believed this process could introduce uncertainty and burden for CMS, ACOs, and participating eligible clinicians. Moreover, we stated that we believed that the proposed alternative criterion would build on established Shared Savings Program rules and incentives that directly tie the level of CEHRT use to the ACO's financial reward which in turn has the effect of directly incentivizing ever-increasing levels of CEHRT use among eligible clinicians. We believe that the proposed alternative criterion for the Shared Savings Program is consistent with the goals of the APM incentive and reduces burden and uncertainty for the Shared Savings Program participants. Therefore, because most other APMs can accommodate a new CEHRT use requirement for eligible clinicians without modification to our regulations, we proposed to restrict this method to the Shared Savings Program. We proposed that this alternative would allow the Shared Savings Program to meet the criterion if it holds APM Entities accountable for their eligible clinicians' use of CEHRT by applying a financial penalty or reward based on the degree of CEHRT use (such as the percentage of eligible clinicians that use CEHRT or the engagement in care coordination or other activities using CEHRT). One of the quality measures used in the Shared Savings Program's quality performance standard assesses the degree to which certain eligible clinicians in the ACO successfully meet the requirements of the EHR Incentive program, which requires the use of CEHRT. Successful reporting of the measure for a performance year gives the ACO points toward its overall quality score, which in turn affects the amount of shared savings or shared losses an ACO could earn or be liable for, respectively. Because of this, ACOs in the Shared Savings Program actively promote and seek to improve upon the EHR measure annually, leading to greater use of CEHRT among eligible clinicians participating in Shared Savings Program ACOs. We explained that we believe our proposed criteria for APMs, generally, and our alternative for the Shared Savings Program, would satisfy requirements under the statute, as both hinge upon the Advanced APM requiring that its participants use CEHRT with consequences for failure to meet the APM's standards. We solicited comment on our proposed methods for the Shared Savings Program to meet the Advanced APM CEHRT use criterion.

    Comment: Commenters supported using the proposed alternative criterion to determine whether the Shared Savings Program meets the CEHRT use requirement.

    Response: We thank the commenters for their support.

    We are finalizing this alternative criterion exclusively for the Shared Savings Program as proposed. This alternative criterion would allow the Shared Savings Program to meet the criterion if it holds APM Entities accountable for their eligible clinicians' use of CEHRT by applying a financial penalty or reward based on the degree of CEHRT use.

    The Shared Savings Program meets this criterion by tying performance on ACO-11, a quality measure that assesses the meaningful use of EHR technology by certain eligible clinicians in the ACO, to the amount of shared savings earned or shared losses incurred by an ACO. We will use data submitted to us through the MIPS advancing care information performance category for purposes of assessing performance on ACO-11 under all tracks of the Shared Savings Program.

    Eligible clinicians who become QPs by participating in an Advanced APM will be exempt from reporting in the advancing care information performance category for purposes of MIPS. However, under § 425.500(c) of our regulations, Shared Savings Program ACOs must submit data on ACO quality performance measures according to the method of submission established by CMS. Thus, certain eligible clinicians, as designated in the specifications of ACO-11, participating in ACOs under all tracks of the Shared Savings Program must report for purposes of the advancing care information performance category according to MIPS specifications, regardless of whether they are excluded from MIPS for the year, in order for the Shared Savings Program to assess the ACO's performance on ACO-11, as required by the Advanced APM CEHRT use criterion. As discussed above, we will establish our final policies with respect to the specifications of ACO-11 in the forthcoming CY 2017 PFS final rule with comment period.

    We also note that in the CY 2017 PFS Proposed Rule, we propose certain modifications to the EHR measure under the Shared Savings Program (81 FR 46429 through 46430). We will establish our final policies for the specifications of ACO-11 that will be used to assess ACO performance on this measure in 2017 and subsequent years as finalized in the forthcoming CY 2017 PFS final rule.

    In addition to the previous proposals, we were interested in what other health IT functionalities APM participants might need to effectively provide care to their patients and how the use of interoperable health IT can strengthen and encourage higher quality patient care and more effective care coordination across all APMs. Recent research and input from experts, practitioners, and the public have identified priority health IT capabilities that would be important for participants in APMs but are not yet widely available in current health IT systems, such as the ability to manage and track status of referrals and create and maintain electronic shared care plans for team-based care management. More information about this research is available at https://www.healthit.gov/facas/sites/faca/files/HITPC_AHMWG_Meeting_Slides_Final_Version_9_2015-11-10.pdf.

    We believe that all patients, families, and healthcare professionals should have consistent and timely access to health information in a standardized format that can be securely exchanged between these parties (See HHS August 2013 Statement, “Principles and Strategies for Accelerating Health Information Exchange”). The secure, appropriate exchange of health information can help health care professionals improve quality of care through more robust care coordination, and improve the efficiency of care through access to patient information across settings. Interoperability is a key priority for the healthcare industry. HHS recently received pledges from companies that provide 90 percent of the EHRs used by hospitals nationwide, available at https://www.healthit.gov/commitment, as well as the top five largest health care systems in the country, to help consumers easily and securely access their electronic health information; help clinicians share individuals' health information for care with other clinicians and their patients whenever permitted by law and not block electronic health information; and implement federally recognized, national interoperability standards, policies, guidance, and practices for electronic health information.

    A growing number of organizations across the country are now focused on facilitating health information exchanges (HIEs) among healthcare professionals at the national, state, and community levels. There were 267 organizations providing HIE services operating in the U.S. in 2014, including community-based organizations, statewide efforts, and other healthcare delivery entities supporting exchange, according to https://ehi-rails-app.s3.amazonaws.com/uploads/article/file/476/2014_eHI_Data_Exchange_Survey_Results_Webinar_Slides.pdf. While representing a wide variety of stakeholders, services and structures, these organizations play an important role in facilitating care coordination and data sharing for many health care professionals across the country. We encourage the growth of these services and encourage health care professionals to explore partnering with organizations offering HIE services.

    We solicited comment on how requirements for the use of CEHRT within APMs could evolve to support expanded participation in organizations supporting HIEs. The following are the comments received in response to our request for comment related to advancing participation in HIE through the use of CEHRT in Advanced APMs.

    Comment: Regarding the future incorporation of HIE participation into the health IT requirements for APMs, a commenter supported recognition for this participation, but suggested that CMS also determine whether a clinician has achieved better care coordination. One commenter recommended that participation in HIEs be required as part of CEHRT standards. Several commenters suggested that CMS identify interoperability measures or standards that easily align with the use of health IT and the achievement of interoperability goals, perhaps focused on specialty-specific use cases. Another commenter suggested that interoperability goals could be achieved through focusing on specialty-specific use cases rather than data quantity evaluations, and that these use cases should be developed in consultation with stakeholders. One commenter supported additional emphasis on usability and compatibility of electronic data collected by HIEs, but the commenter was concerned that HIE data are not always readable by EHR systems. The commenter stated that meaningful health information exchange requires sending the information, receiving the information, and being able to use the information for patient care. Another commenter urged CMS to state its goals before asking how the use of interoperable health IT strengthens and encourages higher quality patient care and more effective care coordination across all APMs. One commenter did not believe new health IT standards and certification criteria are needed; rather, the existing standards need to be recognized and adopted in a consistent manner that does not vary by vendor. Implementation guides promulgated by standards organizations may be helpful in this regard. The commenter also urged more research on how EHRs affect workflows, both positively and negatively, particularly as workflows are changing due to reporting requirements.

    Response: We thank commenters for supporting the idea of recognizing participation in an organization facilitating HIE as part of future CEHRT requirements for APMs, and agree that care coordination through the secure, electronic exchange of health information is an important capability for providers participating in an Advanced APM. We note that while Advanced APMs are required to base payment on quality measures comparable to those in MIPS, in order to encourage flexibility and innovation for APMs, CMS is not identifying the specific measures which APMs must use. In future rulemaking, we will consider how to incorporate participation in an organization facilitating HIE into the Advanced APM CEHRT requirement.

    Comment: Commenters provided a variety of recommendations regarding the health IT capabilities that APM participants will need to effectively provide care to patients. Commenters focused on improved capabilities to manage and incorporate data, including improved capacity to manage and present interoperable health information in usable workflow and more standardization around how data is extracted from different systems. Commenters also suggested further work on health IT capabilities to improve referral processes, such as the ability to look up information about other clinicians, including specialty, commitment to care coordination, patient preference, and alignment with the patient's health plan network; ability to cross-reference the organization's preferred providers and preferred providers identified by the patient or plan; and better integration of preferred provider lists into document templates used in the referral process.

    Response: We thank the commenters for their responses and will take these recommendations into consideration in the future as we continue to examine the CEHRT use requirement for Advanced APMs.

    (2) Comparable Quality Measures

    The second criterion for an APM to be an Advanced APM is that it provides for payment for covered professional services based on quality measures comparable to measures under the performance category described in section 1848(q)(2)(B)(i) of the Act, which is the MIPS quality performance category. We interpret this criterion to require the APM to incorporate quality measure results as a factor when determining payment to participants under the terms of the APM.

    Our proposed policy for this criterion was informed by our proposed policy for the MIPS quality performance category. Quality measures under the MIPS quality performance category are discussed in section II.E.3.b. of this final rule with comment period. In that section, we discuss our proposal for eligible clinicians to select quality measures from the MIPS measures list for the first MIPS performance period. We indicated that we would publish a list of quality measures annually, through notice and comment rulemaking, from which MIPS eligible clinicians may choose measures for assessment under the MIPS quality performance category. The measures included in the annual list of MIPS measures must adhere to specific criteria that include the following: (1) Measures must have an evidence-based focus if the measures are not endorsed by a consensus-based entity as described in section 1848(q)(2)(D)(v) of the Act; and (2) new measures and the method for developing and selecting such measures, including clinical and other data supporting such measures, must be submitted to a specialty-appropriate, peer-reviewed journal prior to inclusion of the measure in MIPS as described in section 1848(q)(2)(D)(iv) of the Act.

    The statute also established priorities for both the quality domains of measures to be developed and the types of measures to be prioritized in the measure development plan, which are located, respectively, at sections 1848(s)(1)(B) and (D) of the Act. The priority measure types include outcome, patient experience, care coordination, and measures of appropriate use of services such as measures of overuse.

    We wanted to ensure that APMs have the latitude to base payment on quality measures that meet the goals of the APM and assess the quality of care provided to the population of patients that the APM participants are serving. It is important to note that many APMs include some common measures that are proposed for inclusion in MIPS. For example, many of the quality measures used in the Shared Savings Program and the Next Generation ACO Model are also proposed for inclusion in MIPS.

    However, APMs that focus on patients with specific clinical conditions such as end-stage renal disease (ESRD), or on patients undergoing specific surgical procedures, would have valid reasons for including different quality measures than those that target more general populations. Similarly, some APMs may focus on specialist eligible clinicians for whom there may be only a small number of valid and relevant quality measures. Lastly, we cannot predict the specific care goals and payment designs of future PFPMs and other APMs. Consequently, we did not want to impose measure requirements that would prevent us from including quality measures that may be better suited to the specific aims of new innovative APMs.

    We proposed that the quality measures on which the Advanced APM bases payment must include at least one of the following types of measures, provided that they have an evidence-based focus, are reliable, and are valid:

    (1) Any of the quality measures included on the proposed annual list of MIPS quality measures;

    (2) Quality measures that are endorsed by a consensus-based entity;

    (3) Quality measures developed under section 1848(s) of the Act;

    (4) Quality measures submitted in response to the MIPS Call for Quality Measures under section 1848(q)(2)(D)(ii) of the Act; or

    (5) Any other quality measures that CMS determines to have an evidence-based focus and be reliable and valid.

    We believe that quality measures that are endorsed by the National Quality Forum (NQF) would meet these criteria. Because each APM Entity is different, there needs to be flexibility to determine which measures are most appropriate for use in its respective APM for the purpose of linking those measures to payment under the APM. Measures that could be used in both MIPS and APMs are beneficial to eligible clinicians who may switch from one program to the other, but we also do not want to restrict APMs from including new innovative measures that may not be included in MIPS initially, or until later years of the program.

    We also proposed to establish an Innovation Center quality measure review process for those measures that are not NQF-endorsed or included on the final MIPS measure list to assess whether the quality measures have an evidence-based focus, and are reliable and valid. For example, the Comprehensive ESRD Care Model includes NQF# 0226 Influenza Immunization for the ESRD Population which is not a measure included for reporting in MIPS but meets the proposed criteria for MIPS-comparable quality measures. We stated that we believe, under the proposed categories, MIPS-comparable quality measures could include measures that are fully developed after being tested in an APM and found to be reliable and valid. Similarly, we indicated that we believe MIPS-comparable quality measures could include QCDR measures provided that the QCDR measures used by the Advanced APM for payment have an evidence-based focus and are reliable and valid.

    The statute identifies outcome measures as a priority measure type, and we wanted to encourage the use of outcome measures for quality performance assessment in APMs. Therefore, we proposed that in addition to the general comparable quality measure requirements proposed, an Advanced APM must include at least one outcome measure if an appropriate measure (that is, the measure addresses the specific patient population and is specified for the APM participant setting) is available on the MIPS list of measures for that specific QP Performance Period, determined at the time when the APM is first established. If there is no such measure available on the MIPS list at the time the APM is established, then we would not require an outcome measure be included after APM implementation.

    We also noted that under the statute and in this proposal, not all quality measures under which an APM is assessed are required to be “comparable” and not all payments under the APM must be based on comparable measures. However, at least some payments must be tied to measures comparable to MIPS, regardless of whether those comparable measures are the only ones the APM uses. Under this proposal, APMs retain sufficient freedom to innovate in paying for services and measuring quality. For instance, an APM may have incentive payments related to quality, total cost of care, participation in learning activities, and adoption of health IT. The existence of payments associated with non-quality aspects does not preclude the APM from meeting this Advanced APM criterion. In other words, this criterion only sets standards for payments tied to quality measurement, not other methods of payment. Conversely, an APM may, as models at the CMS Innovation Center currently do, test new quality measures that do not fall into the MIPS-comparable standard. So long as the APM meets the requirements set forth in this criterion, there is no additional prescription for how the APM tests additional measures that may or may not meet the standards under this criterion.

    We indicated that we believe this framework would provide the flexibility needed to ensure APM quality performance metrics meet the APM's goals. We invited comments on whether measures to be considered comparable to MIPS should all be reliable and valid and have an evidence-based focus.

    The following is a summary of the comments we received regarding our proposed Advanced APM quality measures criterion.

    Comment: The majority of commenters supported CMS' proposal. Commenters sought additional insight and specificity on the types of quality measures that would be tied to Advanced APM payments and also suggested CMS seek stakeholder input regarding what measures are included in Advanced APMs. One commenter stated that while they supported the proposed requirement that MIPS-comparable measures for the Advanced APM criteria be evidence-based, reliable and valid, they believed a minimum of 10 measures should be required to be included in the Advanced APM.

    Response: Examples of measures that would meet the proposed criterion for MIPS-comparable measures include almost any quality measure that is NQF-endorsed or included in the final list of MIPS quality measures, provided that the measure has an evidence-based focus, is reliable, and is valid. The Advanced APM criterion to include measures comparable to MIPS does not require that CMS seek stakeholder input on the measure(s) used in Advanced APMs, but we do welcome stakeholder input on our selected measures for inclusion in Advanced APMs through other vehicles, for example, RFIs or subsequent proposed rules. With respect to the number of measures for performance assessment included in Advanced APMs, there is no statutory requirement that a specific number of measures be included in order for the APM to provide for payment for covered professional services based on MIPS-comparable quality measures, and we believe Advanced APMs generally should retain flexibility to require the appropriate number of measures for their goals.

    Comment: One commenter supported principles that left selection of quality measures to the Advanced APM in our reference to “any other quality measures that CMS determines to have an evidence-based focus and be reliable and valid.” However, the commenter urged CMS to always have the goal that any measure that has an evidence-based focus and is reliable and valid would also either: (1) Be on the annual list of MIPS measures; (2) be endorsed by a consensus-based entity; (3) be a quality measure developed under section 1848(s) of the Act; or (4) be a quality measure submitted in response to the MIPS Call for Quality Measures. The commenter recommended that CMS clarify its intent to have no measure qualify as MIPS-comparable for more than 2 years based solely on meeting the specifications as “any other quality measures that CMS determines to have an evidence-based focus and be reliable and valid.”

    Response: We thank the commenters for their support and suggestion regarding the types of measures that we consider MIPS-comparable. We proposed that the quality measures on which the Advanced APM bases payment must include at least one of the following types of measures, provided that they have an evidence-based focus, are reliable, and are valid: (1) Any of the quality measures included on the proposed annual list of MIPS quality measures; (2) quality measures that are endorsed by a consensus-based entity; (3) quality measures developed under section 1848(s) of the Act; (4) quality measures submitted in response to the MIPS Call for Quality Measures under section 1848(q)(2)(D)(ii) of the Act; or (5) any other quality measures that CMS determines to have an evidence-based focus and be reliable and valid. We believe the fifth “principle” above provides us the flexibility to view a measure that is submitted to a consensus-based entity for endorsement as comparable to MIPS quality measures, even if the measure has not received endorsement at the time it is proposed for inclusion in the Advanced APM. We do not believe we need to combine principle number 5 with one of the other principles as long as the measure is reliable, valid, and has an evidence-based focus. We do not believe it is necessary to place a time limit on the use of a MIPS-comparable measure that does not already meet one of the four other principles. However, we would strongly encourage stakeholders to submit measures for inclusion on the MIPS measure list once they have been tested in an APM.

    Comment: One commenter was opposed to the proposed definition of measures that are comparable to MIPS. The commenter did not agree with the proposed measure types for the MIPS-comparable set of quality measures, stating that CMS should not include quality measures that have merely “an evidence-based focus.” The commenter was concerned that CMS has in the past pressed for a quality measure that incentivizes lower quality care under the guise of evidence, and suggested it would be better for CMS to add additional considerations, such as whether the measure has achieved its purpose of affecting physicians' behavior.

    Response: The proposal to include an evidence-based focus as one of the requirements for measures to be comparable to MIPS measures is consistent with the statutory requirement for MIPS measures, except for those measures originating from a QCDR. While we believe that measures that qualify as MIPS-comparable under our proposed criteria can include measures that also have a demonstrable track record of influencing physician behavior, we do not believe it would be consistent with the MIPS statute to include this as a consideration.

    Comment: A commenter recommended that CMS put in place a more robust framework to ensure that Advanced APMs utilize quality measures that accurately and reliably reflect the care an individual patient receives under these models. The commenter believes that, as Advanced APM participants bear financial risk for monetary losses that are in excess of a nominal amount, the quality measures in place are all the more important as a protection for patients against a narrow focus on cost-containment. The commenter was also concerned that CMS' proposed framework is not sufficient to ensure that Advanced APMs utilize robust quality measure sets, and that the framework skews too much toward providing flexibility to these APMs. Another commenter encouraged CMS to continually look at measures that monitor for any perverse incentives that may occur as CMS experiments with Advanced APMs. For example, stinting on, or forgoing, care to save costs in the short term is a risk not usually prevalent in FFS, but could be a risk in certain Advanced APMs. The commenter stated that, in developing all APMs, CMS should always ensure that they contain a quality component that meets the proposed criteria and that the measures in the APM reflect monitoring for the desired outcomes of the model.

    Response: We assess all APM designs for possible perverse incentives and the potential for care stinting activities prior to implementation. We agree that we should continually monitor for perverse incentives and behaviors such as care stinting, and we actively perform these assessments now. We believe that both the inclusion of payment based on performance on quality measures in the Advanced APMs and the ongoing monitoring and evaluations conducted on all APMs are mechanisms for identifying whether appropriate care is withheld to save costs. The Advanced APM requirement for inclusion of MIPS-comparable measures does not represent a quality measure strategy for Advanced APMs; rather, it is a statutory requirement that an APM must meet in order to be an Advanced APM. The Advanced APM quality strategy typically includes quality and/or utilization measures that correspond with the key payment and practice transformation activities being tested in the APM. This is why the majority of APMs include more than just one quality measure and many different types of quality performance measures (for example, process, clinical outcome, patient experience of care or patient reported outcome measures) to assess the clinical care provided by eligible clinicians under the APM. Our goal in developing APMs is to ensure that all patients realize better care, improved clinical outcomes and more efficient, cost-effective care. We believe our existing quality standards and strategies promote these goals, and the statutory requirement to include MIPS-comparable measures to be an Advanced APM further reinforces these goals.

    Comment: One commenter requested additional transparency regarding the quality measures that an individual Advanced APM includes, and suggested that CMS establish a Web page on which Advanced APMs would identify the quality measures they include and how these measures meet the “similar to” standard. The commenter stated that this information should include: details of how the measure is calculated; its limitations; whether the measure is included in the current (or any former) MIPS measure sets; how the measure was developed, and by whom; and whether it is endorsed by a national standards-setting organization (for example, NQF).

    Response: We appreciate this suggestion. Many, if not all, APMs include their quality measures list on either the CMS or Innovation Center Web site. Because the Advanced APM MIPS-comparable quality measure requirement is a new requirement, we will assess the need to develop a public-facing site with the information the commenter suggests.

    Comment: Some commenters suggested CMS provide additional flexibility to Advanced APMs in the selection of outcome measures and measures used for specialty APMs. One commenter requested that CMS not require any outcome measures for 2 to 3 years. Yet another commenter agreed that all measures should have an evidence-based focus to be included in the Advanced APM.

    Response: We believe the proposed criteria for inclusion of measures that are comparable to MIPS provide CMS and Advanced APMs the flexibility the commenters recommend. The measure(s) included to meet this criterion can be a measure on the MIPS measure list or can be selected from another program or source such as the list of consensus-endorsed measures maintained by the NQF. We believe that outcome measures should be included in all APMs wherever possible and that there is no need to wait 2 to 3 years before including outcome measures in Advanced APMs. Presently, many APMs include one or more outcome measures in their quality measure set; therefore, we do not anticipate that this policy will prevent any APMs from being Advanced APMs in the first QP Performance Period.

    Comment: One commenter expressed concern that providing Advanced APMs with the proposed degree of flexibility will allow quality performance to slip, and stated that current quality measures used by Advanced APMs fall short of providing useful information.

    Response: Most APMs are designed to include quality and cost/utilization measures that are aligned with the goals of the APM, and that address the populations and clinical care delivered by the APM participants to their patients. However, there may be new APMs for which CMS would have limited quality measures to choose from that are reliable, valid and have an evidence-based focus. For example, models that target specific patient populations or a subset of services may have few relevant measures available. We believe the flexibility included in our proposed criteria will allow us to include measures that meet this requirement and continue to develop and implement new APMs in support of HHS' goals. Furthermore, most APMs include many types of measures that meet several of the criteria we proposed for Advanced APM “comparable to MIPS measures.” These measures come from a variety of sources, including other CMS programs and the NQF list of endorsed measures, and in some instances were vetted by external stakeholders and technical expert panels to ensure they were suitable for use in the APM.

    Comment: One commenter asked for clarification as to whether non-MIPS measures approved for use in a QCDR qualify as MIPS comparable quality measures. A few commenters supported the use of QCDR measures for Advanced APMs.

    Response: Yes, measures that are already approved by CMS for use in a QCDR may also be used to meet this Advanced APM criterion as long as the non-MIPS QCDR measure is reliable, valid, and has an evidence-based focus.

    Comment: One commenter requested clarification regarding submission methods available to APMs and Advanced APMs because the commenter believes that QCDRs and qualified registries should be available for submission of quality data. The commenter noted that the CMS Web Interface uses a sample of patients that represents a fraction of the APM Entity's overall patient population whereas QCDRs and qualified registries would be able to consolidate and submit a statistically relevant population of patients, that is, up to 90 percent of all patients across all payers. The commenter believes this would allow Advanced APMs and eligible clinicians in Advanced APMs to more accurately report on their population and compare themselves to MIPS eligible clinicians for purposes of finding actionable areas for quality improvement. The commenter also believes that QCDRs would be able to assist with development of measures specific to the goals of APMs and Advanced APMs.

    Response: As proposed, QCDR measures are considered to be MIPS-comparable measures as long as the QCDR measure used in the APM is also evidence-based, reliable and valid. There may be some QCDR measures that do not meet the requirements to be reliable, valid, or have an evidence-based focus, and therefore, would not be considered comparable to MIPS quality measures for purposes of identifying Advanced APMs. When CMS designs new APMs, we must select specific submission method(s) for quality data within the policy and operational context of a given APM as well as the resources and systems available at CMS. Historically, this has included registry submission for some APMs. We hope that QCDRs will continue to develop new measures that both MIPS and other CMS programs can use to assess quality performance, and we appreciate their efforts to expand the inventory of measures available to our programs.

    Comment: One commenter expressed concern that the proposal creates an additional process for assessing quality measures when there are already other established processes that determine whether measures are evidence-based, reliable, and valid, such as the National Quality Forum (NQF) Measure Applications Partnership (MAP).

    Response: This proposal does not change the processes that are used by CMS to adopt measures for use in CMS programs. Rather, the Innovation Center internal review process is intended to assess whether a measure meets the criteria to be a MIPS-comparable measure for purposes of identifying Advanced APMs. For example, there may be instances where CMS elects to use a quality measure in the design of an APM to meet the MIPS-comparable measure criterion even though that measure is not currently included in the final list of MIPS measures. Our proposed policy provides CMS the flexibility to identify a measure used in an APM as MIPS-comparable even if the measure is not used in MIPS, as long as it meets the requirement that it is reliable, valid, and has an evidence-based focus.

    Comment: One commenter encouraged CMS to urge private payers to also adopt core measure sets, and other commenters requested that CMS consider appropriate Medicare Advantage quality measures. Commenters urged CMS to streamline and standardize its quality measures to focus on a core set of measures that are nationally endorsed and not overly burdensome to administer, and another commenter suggested that CMS have one process to determine acceptability of both APM measures and QCDR measures. Another commenter encouraged CMS to seek guidance from NQF in order to maintain a rigorous level of measure assessment. Several commenters suggested that CMS use measures developed by other entities, including the Core Quality Measures Collaborative, Qualified Clinical Data Registries (QCDRs), and NQF. One commenter indicated that measures in APMs vary widely and that there is no consistency across APMs in obtaining stakeholder feedback on the quality measure sets; the commenter suggested the Measure Applications Partnership (MAP) might serve as a venue for obtaining stakeholder feedback in the future.

    Response: We believe the proposed criteria for the MIPS-comparable measures used in Advanced APMs do not prevent an APM from using a core measure set or using measures developed and included in other CMS programs, but instead provide the criteria for what constitutes a MIPS-comparable measure to meet the Advanced APM requirement. As noted above, not all quality measures upon which an APM bases payment are required to be MIPS-comparable, and not all payments under the APM must be based on MIPS-comparable measures. However, at least some payments must be tied to MIPS-comparable measures, regardless of whether those measures are the only ones the APM uses. We agree with the commenters that the Core Quality Measures Collaborative, led by America's Health Insurance Plans (AHIP), is an excellent source of measures for inclusion in Advanced APMs and other CMS programs. We also agree that identifying a core set of measures to be used in Advanced APMs would have advantages, but recognize the need to allow for inclusion of measures that are appropriate to assess performance for the specific patient population for which the Advanced APM participants are providing care. We have heard repeatedly from clinicians that they need specific measures that address their patient population, and a single core set used by all Advanced APMs may not meet that goal. Because CMS typically identifies measures that are appropriate for use in APMs by first looking at measures used in other CMS programs, we do not believe there needs to be a separate process for identifying measures for use in APMs or that there is a need to obtain additional input from other entities such as the MAP. Consequently, we do not believe we need to establish additional reviews by external organizations to vet MIPS-comparable measures as these processes are already established for measures used in MIPS and other CMS programs, and not all measures used in the Advanced APM need to be MIPS-comparable measures.

    Comment: Several commenters supported measurement innovation and recommended engaging stakeholders in the development of quality measures. One commenter suggested that meeting measure requirements should not be tied to reporting a certain number of metrics. Some commenters also addressed specific types of APMs or potential APMs. For example, two commenters urged that CMS make modifications to BPCI so that it could become an Advanced APM. One commenter urged CMS to broaden the definition of how payments can be based on quality measures, which would allow for additional Advanced APMs. Specifically, the commenter referred to the CMS fact sheet stating that CMS is “committed to ensuring beneficiaries receiving care from providers participating in BPCI receive high quality care,” arguing that this supports the case that BPCI meets this criterion. Some commenters suggested new APMs and the development of relevant measures, such as palliative and end-of-life care and anesthesia.

    Response: We appreciate the commenters' support for emphasizing innovation in the development of quality measures and have already included this type of innovation in some of our new APMs, such as the Comprehensive Primary Care Plus (CPC+) model. We plan to develop one or more patient-reported outcome measures in CPC+ after it is implemented in 2017. We agree with the commenter that there is no need to specify the number of measures, and our proposed criteria for MIPS-comparable measures do not specify that a particular number of measures be used. We thank the commenters for their specific APM and measure suggestions, and remind readers of the PTAC, as described in section II.F.10. of this final rule with comment period. We also note that ideas for new APMs can be submitted directly to the CMS Innovation Center. Regarding BPCI, episode payments are based solely on episode spending performance, although we expect that reductions in spending would generally be linked to improved quality through reductions in hospital readmissions and complications. Building on the BPCI initiative, the Innovation Center is considering new episode payment models that could meet the Advanced APM criteria, including the requirement to provide for payment based on MIPS-comparable quality measures, potentially including a new voluntary bundled payment model for CY 2018.

    The following is a summary of the comments we received regarding our proposal to establish an Innovation Center quality measure review process for those measures that are not NQF-endorsed or included on the final MIPS measure list to assess whether the quality measures have an evidence-based focus, are reliable, and are valid.

    Comment: A few commenters supported the proposal to create an Innovation Center quality measure review process for measures that are not NQF-endorsed or on the final MIPS measure list.

    Response: We appreciate the commenters' support of the proposal to create an Innovation Center quality measure review process for measures that are not NQF-endorsed or on the final MIPS measure list.

    Comment: One commenter requested that, to the extent that CMS moves forward with the proposed Innovation Center quality measure review process, the Agency should identify the details of the process (for example, timelines, standards for consideration/approval, and opportunities for stakeholder input), and allow stakeholders the chance to comment on those details before the process is finalized.

    Response: We do not believe a formal mechanism for public input is necessary or appropriate in this case. We note that this process is intended merely to make a factual determination of whether a measure meets the Advanced APM criterion articulated in this final rule. This process will not determine which measures are included in APMs, nor will it determine how these measures will be linked to payment under an APM. Those determinations will be made and communicated through APM documents. In the case of APMs that are mandatory for participants, these decisions will continue to be made through rulemaking with opportunity for public comment.

    The following is a summary of the comments we received regarding our proposal to require that an Advanced APM must include at least one outcome measure if an appropriate measure is available on the MIPS list of measures for that specific QP Performance Period, as determined at the time when the APM is first established.

    Comment: Commenters generally supported the proposal to require at least one outcome measure. One commenter requested we delay this requirement until future years of the program. One commenter supported flexibility in allowing those designing the Advanced APM to select and/or design the most appropriate outcome measures for that Advanced APM. Another commenter expressed support for not requiring an outcome measure if no applicable measures are available at the time an Advanced APM is established. Alternatively, two commenters suggested that at least one outcome measure be included even if there was no applicable outcome measure on the MIPS final list of measures.

    Response: We thank commenters for their support of the proposal to include the requirement for one outcome measure in the Advanced APM if an appropriate measure is available on the MIPS list of measures for that specific QP Performance Period at the time the APM is first established. We proposed that if no appropriate outcome measure is available on the MIPS list at the time the APM is established, the APM does not need to include an outcome measure. Furthermore, if there is a MIPS outcome measure available on the MIPS list for that specific QP Performance Period, but CMS determines there is another more appropriate non-MIPS outcome measure, the non-MIPS outcome measure can be used. Given the dearth of appropriate outcome measures for some specialties, we believe it is reasonable at this time to maintain the policy as proposed requiring inclusion of an outcome measure in Advanced APMs only if there is an appropriate measure included on the MIPS final measure list at the time the APM is first established.

    We are finalizing as proposed that to be an Advanced APM, an APM must base payment on quality measures that are evidence-based, reliable, and valid; and that at least one such measure must be an outcome measure unless there is not an applicable outcome measure on the MIPS quality measure list at the time the APM is developed. The required outcome measure does not have to be one of those on the MIPS quality measure list. We are also finalizing the proposal to establish an internal Innovation Center quality measure review process for measures that are not NQF-endorsed or on the final MIPS measure list in order to assess whether the measures meet these criteria.

    (3) Financial Risk for Monetary Losses

    (a) Overview

    The third criterion that an APM must meet to be an Advanced APM is that it must either be a Medical Home Model expanded under section 1115A(c) of the Act as described below, or the APM Entities under the APM must bear financial risk for monetary losses under such APM that are in excess of a nominal amount. We refer to the latter criterion as the “financial risk criterion.” The corresponding financial risk criterion for Other Payer Advanced APMs is described in section II.F.7. of this final rule with comment period along with the requirements for consideration under the All-Payer Combination Option that is applicable in payment years 2021 and later.

    The financial risk criterion we proposed for Advanced APMs would apply to the design of the APM financial risk arrangement between CMS and the participating APM Entity. If the structure of the arrangement meets the proposed financial risk requirements, then this criterion would be met. This proposal would not impose any additional performance criteria related to bearing financial risk. For example, eligible clinicians under the Advanced APM Entity would not need to bear financial risk under the APM so long as the APM Entity bears that risk. Furthermore, an APM Entity would not need to actually achieve savings or other metrics for success under the APM in order for the APM to meet this criterion.

    In describing our proposal, we divided the discussion into two main topics: (1) what it means for an APM Entity to bear financial risk for monetary losses under an APM; and (2) what levels of risk we would consider to be in excess of a nominal amount. In developing our proposed policies we prioritized keeping these standards consistent across different types of APMs, including Other Payer Advanced APMs as described in section II.F.7.b.(6) of this final rule with comment period. We believe that keeping these standards consistent to the extent possible would make it easier for stakeholders, APM Entities, and eligible clinicians to understand the type of financial risk required for an APM to be an Advanced APM. However, we proposed to specify small variations in the requirements to accord with the differing characteristics of certain types of APMs.

    (b) Bearing Financial Risk for Monetary Losses

    We proposed a generally applicable financial risk standard for Advanced APMs and a unique standard that would apply only for Advanced APMs that are identified as Medical Home Models.

    (i) Generally Applicable Advanced APM Standard

    First, we proposed that the generally applicable financial risk standard for Advanced APMs would be that an APM must include provisions that, if actual expenditures for which the APM Entity is responsible under the APM exceed expected expenditures during a specified performance period, we can:

    • Withhold payment for services to the APM Entity and/or the APM Entity's eligible clinicians;

    • Reduce payment rates to the APM Entity and/or the APM Entity's eligible clinicians; or

    • Require the APM Entity to owe payment(s) to CMS.

    The proposed financial risk standard for Advanced APMs reflected our interpretation that the statutory requirement for Advanced APM Entities to bear financial risk for monetary losses encompasses “losses” that could be incurred through either direct repayments to CMS or reductions in payments for services. The former would cover two-sided risk arrangements such as shared savings initiatives in which an Advanced APM Entity may receive shared savings or be liable for shared losses. The latter would cover a range of alternative methods for linking performance to payment, such as payment withholds subject to successful performance, or discounts in payment rates retrospectively applied at reconciliation similar to those in many episode-based bundled payment models.

    We solicited comments on how we could potentially create an objective and meaningful financial risk criterion that would define financial risk for monetary losses differently.

    The following is a summary of the comments we received regarding our proposal for the generally applicable Advanced APM financial risk standard.

    Comment: Many commenters expressed support for the proposed generally applicable Advanced APM financial risk standard as meaningful and appropriate. In particular, commenters supported that the standard captures only APMs with downside financial risk. Some commenters believed that all APMs should have downside risk or capitation-style payment arrangements in order to spur greater transformation and better value to consumers. Other commenters agreed with that sentiment, but believed that movement to downside risk takes time and requires an on-ramp or path for clinicians to move to greater levels of risk. Some commenters also supported our proposal to focus on the financial relationship between CMS and the APM Entity, rather than on downstream risk relationships between the APM Entity and its participants, when assessing whether an APM satisfies the financial risk standard.

    Some commenters suggested that we increase the rigor of the financial risk standard so that Advanced APM performance is considered in addition to its financial risk design. For instance, an Advanced APM would have to demonstrate that its payment model is driving care delivery improvements for better outcomes and patient experience. They also suggested design changes for APMs, such as enhancing consumer protections as APMs expand in scope or allowing the sharing of savings with beneficiaries.

    Response: We appreciate the general support for the design of the generally applicable financial risk standard. We agree that downside risk is an important distinction and an aspect of APM design that can contribute to improved costs and outcomes for beneficiaries. We also recognize that developing risk-bearing capacity is a long-term undertaking and that entities are currently at different states of readiness for bearing risk. Therefore, as we discuss throughout this final rule, we have emphasized technical assistance for small and rural practices and intend to offer an array of APMs and Advanced APMs so that clinicians can find the right fit for their practice now and in the future.

    Regarding suggestions that we add more layers of requirements for an APM to become an Advanced APM, we do not believe that doing so is consistent with the purpose of the APM incentive. In particular, as stated in our principles under section II.F.1. of this final rule with comment period, we believe the APM Incentive Payment is a time-limited incentive (with the combination of the favorable fee schedule update and the potential rewards inherent to APMs being the long-term incentives) intended to encourage movement into the most challenging and potentially most rewarding APMs available as defined by the three Advanced APM criteria described in this section. Each APM has many unique characteristics other than those involving CEHRT use, quality measurement, and financial risk, and we believe that it is important to support rather than constrain flexibility in APM design to the extent feasible. Additionally, we assess all of our APMs continuously, and the measurable success of an APM will determine our ability to expand it in the future, not whether the APM is determined to be an Advanced APM. Moreover, the ultimate evaluation of APM success is: (1) Retrospective in nature, so that if Advanced APM status were to hinge on such results, an APM's status would be uncertain until several years after its launch; and (2) distinct from the challenge of participating in a model with CEHRT use requirements, payment based on MIPS-comparable quality measures, and more than nominal financial risk.

    Regarding the comments on consumer protections, just as each APM has its own set of requirements and rewards, it also has its own set of program integrity protections, in addition to those for the overall Medicare program, and we operate rigorous monitoring programs for each APM.

    Comment: Conversely, other commenters expressed their desire for CMS to consider costs not explicitly part of the financial risk arrangement of an APM as financial risk for purposes of this standard. The APM status of Track 1 of the Shared Savings Program was particularly salient for commenters in this respect. For instance, two commenters believe that CMS should consider Track 1 ACOs that have demonstrated high quality of care with quality performance scores of 86 percent or greater and generated cost savings that exceed their minimum savings rate to be participating in an Advanced APM. Many commenters cited up-front costs or investments in infrastructure and care redesign related to the pursuit of success under the APM incurred by ACOs participating under Track 1. Some of these “business risk” costs can include IT acquisition, hiring of care coordination and case management personnel, business and clinical process development, population management analytics, and other administrative costs. Some commenters believe that any operational costs related to APM participation should be considered risk. One commenter suggested that CMS consider Track 1 ACOs in Maryland that are subject to the Maryland All-Payer Model to be bearing downside risk.

    Some commenters similarly suggested that uncompensated care costs be considered financial risk. Other commenters suggested that we use the Medical Home Model financial risk standard for all APMs, such that the Track 1 adjustment to shared savings based on quality scores would be considered financial risk. Another commenter recommended that APMs that do not have downside risk be considered Advanced APMs for the first 2 years of the Quality Payment Program.

    One commenter submitted research suggesting that there is limited uptake and performance in ACOs with downside risk in comparison to Track 1 of the Shared Savings Program, and recommended that CMS recognize the shortcomings of the current two-sided ACO risk models and develop a new APM that includes more appropriate levels of risk. One commenter believes the proposed financial risk standard is inconsistent with the statutory intent to encourage proliferation of, and participation in, Advanced APMs. One commenter suggested that the focus of the financial risk standard should be on the motivation of APM participants to reduce costs rather than whether or not they bear financial risk.

    Response: We appreciate the comments suggesting broadening the scope of the Advanced APM financial risk standard, which appear to be largely driven by the desire to identify Track 1 of the Shared Savings Program as an Advanced APM. We recognize the substantial time and money that APM Entities invest to become successful APM participants. However, we disagree with commenters that costs not encompassed by an APM's financial risk arrangements should be considered when assessing financial risk under the APM. First, we do not believe we can objectively and accurately assess business risk without exceptional administrative burden on both CMS and APM Entities to quantify such expenditures and verify that they were made solely for participation in a particular APM. Any such assessment would be at risk of being methodologically unsound because we do not believe we could set simple, clear standards for which expenditures would be included as “business risk” for the purposes of APM participation and not also of benefit to other activities in which a practice may engage.

    Second, although the cited activities and investments may be geared toward success in an APM, we believe the same activities and investments are likely to be aligned with success under any value-based payment system such as MIPS.

    Third, business risk is generally a sunk cost that is unrelated to performance-based payment under an APM. No matter how well or poorly an APM Entity performs, those costs are not reduced or increased correspondingly. Therefore, business “risk” is not analogous to performance risk in the APM context because those activities and investments are simply costs that are not incorporated into the financial calculations of an APM. In fact, we believe that setting any objective monetary standard for how much investment would be considered more than nominal would inherently create an incentive for excessive or wasteful investment that might be unrelated to performance.

    We also believe that maintaining a clear distinction between APMs and Advanced APMs is consistent with section 1833(z) of the Act, which recognizes that not all APMs would meet this criterion. We believe the purpose of the APM incentives is to provide a boost for participation in the most challenging APMs, not to provide funding for infrastructure support for participation in any APM. Several APMs, such as the ACO Investment Model, Next Generation ACO Model, and CPC+ model, have those investment funds built into the APM.

    In addition, we have a stated interest in encouraging movement from one-sided risk arrangements to two-sided risk arrangements, that is, for example from Track 1 to Track 2 or 3 of the Shared Savings Program. Designating a Track 1 ACO as an Advanced APM by permitting business risk to meet the financial risk standard would provide no additional incentive for Track 1 ACOs to transition to two-sided risk models.

    With respect to uncompensated care, we do not wish to downplay the financial impact of uncompensated care, but we believe that addressing such costs in the context of APMs is beyond the scope of this final rule with comment period. We do not believe that such costs can be considered as financial risk under an APM in any systematic, quantifiable manner. Even more than with business risk, we do not believe uncompensated care can be considered “monetary losses under such alternative payment model” as stated in section 1833(z)(3)(D)(ii)(I) of the Act. Further, we do not believe that an APM Entity that provides uncompensated care and also participates in an APM that does not meet the financial risk criterion should be considered to be participating in an Advanced APM. Losses resulting from the provision of uncompensated care would be unrelated to the performance requirements under the APM.

    With respect to the Medical Home Model financial risk standard, we believe that it is important to maintain the distinction between Medical Home Models and other APMs because we believe that Medical Home Models are categorically different than other types of APMs, as supported by specific provisions in the statute enabling unique treatment of Medical Home Models. Also, Medical Home Model participants tend to be smaller in size and have lower Medicare revenues relative to total Medicare spending than other APM Entities, which affects their ability to bear substantial risk, especially in relation to total cost of care. We believe that the meaning of nominal financial risk varies according to context, and that smaller practices participating in Medical Home Models, as a category, experience risk differently than much larger, multispecialty-focused organizations do. To date, Medical Home Model participants have not been required to bear financial risk, which means the assumption of any financial risk presents a new challenge for these entities. We are providing special standards for Medical Home Models that are exceptions to the generally applicable standards because of these unique characteristics.

    Comment: Many commenters suggested two additional interrelated policies to improve access to Advanced APMs. First, many commenters requested that we amend the Shared Savings Program regulations so that ACOs may move from Track 1 to either Track 2 or Track 3 prior to the completion of their 3-year agreement period in order to allow ACOs to accept downside risk and participate in an Advanced APM sooner than they otherwise would be able. Some commenters suggested that this be a one-time opportunity in order to allow ACOs the chance to move “up” to a higher track. Other commenters requested an extension of the application cycle for 2017 participation in the Shared Savings Program.

    Several commenters suggested that we create a new Shared Savings Program track that closely aligns with the finalized Advanced APM nominal amount standard in this final rule so that there is an option for ACOs, particularly ACOs with relatively low revenue or small numbers of participating eligible clinicians, to participate in an Advanced APM without accepting the higher degrees of risk involved in Tracks 2 and 3. Commenters believe this would be an attractive and meaningful middle path between Tracks 1 and 2 and would be a viable on-ramp for assuming greater amounts of risk in the future. Commenters suggested this opportunity should be coupled with the ability for Track 1 ACOs to move into this new “Track 1.5” before the end of their current agreement periods. Another commenter specifically suggested an asymmetrical ACO model with a low marginal risk rate for losses, such as 25-30 percent of shared losses, and a higher marginal risk rate for savings, such as 70-75 percent of shared savings, with no minimum savings rate or minimum loss rate.

    Response: We thank the commenters for their suggestions and comments regarding how to align the Shared Savings Program rules with the Quality Payment Program and enhance the opportunities for ACOs to participate in an Advanced APM. In the November 2011 final rule establishing the Shared Savings Program (76 FR 67909), as updated in the June 2015 final rule (80 FR 32692), we created three tracks in which ACOs can choose to participate: A one-sided risk model (Track 1) that incorporates the statutory payment methodology under section 1899(d) of the Act; and two two-sided models (Tracks 2 and 3) that are also based on the payment methodology under section 1899(d) of the Act but incorporate performance-based risk using the authority under section 1899(i)(3) of the Act to use other payment models. We explained that offering a choice of tracks would create an “on-ramp” for the program to attract both providers and suppliers that are new to value-based purchasing, as well as more experienced entities that are ready to share performance-based risk. We stated our belief that the one-sided model would have the potential to attract a large number of participants to the program and introduce value-based purchasing broadly to providers and suppliers, many of whom may never have participated in a value-based purchasing initiative before. Another reason we included the option for a one-sided track with no downside risk was that this model would be accessible to and attract small, rural, safety net, and physician-only ACOs.

    However, we also noted that although a one-sided model could provide incentives for participants to improve quality, it might not be sufficient incentive for participants to improve the efficiency and cost of health care delivery (76 FR 67904 and 80 FR 32759). Therefore, we have used our authority under section 1899(i)(3) of the Act to create two performance-based risk options, Track 2 and Track 3, where ACOs are not only eligible to share in savings, but also must share in losses. We believe performance-based risk options have the advantage of providing more experienced ACOs an opportunity to enter a sharing arrangement that provides greater reward for greater responsibility, and we have designed our policies for the Shared Savings Program to offer a pathway for ACOs to transition from the one-sided model to performance risk-based arrangements. Therefore, we require that ACOs that elect to enter the Shared Savings Program under Track 1 may remain in Track 1 for no longer than 2 agreement periods, and must transition to Track 2 or Track 3 for all subsequent agreement periods. We believe this approach increases interest in the Shared Savings Program by providing a gentler on-ramp while maintaining the flexibility for more advanced ACOs to take on greater performance-based risk in return for a greater share of savings immediately upon entering the program.

    Many of the program requirements that apply to ACOs in Tracks 1, 2, and 3 are the same, but there are some significant differences that encourage progression along the risk continuum. For example, the financial reconciliation methodology was designed so that ACOs that accept performance-based risk under Track 2 or Track 3 have the opportunity to earn a greater share of savings in exchange for their willingness to accept that risk. Specific differences between the tracks are summarized in the June 2015 final rule (80 FR 32811 through 32812).

    In June 2016, we issued a final rule (81 FR 37950) to incorporate regional FFS expenditures into the methodology for establishing, adjusting, and updating the benchmarks of ACOs that continue their participation in the Shared Savings Program after an initial 3-year agreement period. In an effort to continue to provide a pathway to increasing performance-based risk, the June 2016 final rule also added a participation option to encourage ACOs to transition to performance-based risk arrangements. Specifically, in the June 2016 final rule, we finalized a policy to give ACOs that participate in Track 1 for their first agreement period an additional option when they apply to renew for a second agreement period under a two-sided model (Track 2 or Track 3). If the ACO's renewal request is approved, the ACO may request that its initial participation agreement under Track 1 be extended for an additional year (that is, the ACO would enter a fourth performance year under Track 1). As a result of this deferral, we will also defer rebasing the ACO's benchmark for 1 year. At the end of this fourth performance year under Track 1, the ACO will transition to the selected performance-based risk track for a three-year agreement period. This option became available beginning with the 2017 application cycle.

    However, even with this pathway to performance-based risk, we have heard from stakeholders, as exemplified by the comments above, that we should consider offering ACOs an even more gradual transition to performance-based risk. In the June 2016 final rule, we signaled that we are committed to facilitating entry and continued participation in the Shared Savings Program by ACOs with varying levels of experience with accountable care models and differing degrees of readiness to take on performance-based risk, and to encouraging ACOs to transition to performance-based risk tracks. Given that the overwhelming majority of ACOs still participate in the one-sided model, we continue to explore how to move ACOs to performance-based risk more quickly.

    Therefore, we are considering using our authority under section 1115A of the Act to develop and test a “Medicare ACO Track 1+ Model” starting with the 2018 performance year. The Track 1+ Model would test a payment model that incorporates more limited downside risk than is currently present in Tracks 2 or 3 of the Shared Savings Program in order to encourage more rapid progression to performance-based risk. In other words, this potential Track 1+ Model is envisioned as an on-ramp to Tracks 2 or 3. The model could be open to Track 1 ACOs that are within their current agreement period, initial applicants to the Shared Savings Program, and Track 1 ACOs renewing their agreement that meet model eligibility criteria. The model would be voluntary for organizations currently participating in Track 1 or seeking to participate in the Shared Savings Program. For Track 1 ACOs that have renewed their agreements, the benchmark that would apply under the model could also incorporate a regional benchmark adjustment consistent with the timing and phase-in of the regional benchmark adjustment as outlined in the June 2016 final rule for the Shared Savings Program. We will announce additional information about the Track 1+ Model in the future.

    We are finalizing the Advanced APM financial risk standard as proposed. To be an Advanced APM, an APM must provide that, if actual expenditures for which an APM Entity is responsible under the APM exceed expected expenditures during a specified performance period, CMS can:

    • Withhold payment for services to the APM Entity and/or the APM Entity's eligible clinicians;

    • Reduce payment rates to the APM Entity and/or the APM Entity's eligible clinicians; or

    • Require the APM Entity to owe payment(s) to CMS.

    We note that this generally applicable financial risk standard does not include reductions in otherwise guaranteed payments made under the terms of the APM—such as care management fees that vary based on quality performance—whereas, as described below, the Medical Home Model financial risk standard does take into consideration reductions in otherwise guaranteed payments under certain circumstances. As such, one-sided risk arrangements would not meet this financial risk criterion.

    (ii) Medical Home Model Financial Risk Standard

    We proposed to adopt a slightly different financial risk standard for Medical Home Models. For a Medical Home Model to be an Advanced APM, it must include provisions that CMS:

    • Withhold payment for services to the APM Entity and/or the APM Entity's eligible clinicians;

    • Reduce payment rates to the APM Entity and/or the APM Entity's eligible clinicians;

    • Require the APM Entity to owe payment(s) to CMS; or

    • Cause the APM Entity to lose the right to all or part of an otherwise guaranteed payment or payments, if either:

    ++ Actual expenditures for which the APM Entity is responsible under the APM exceed expected expenditures during a specified performance period; or

    ++ APM Entity performance on specified performance measures does not meet or exceed expected performance on such measures for a specified performance period.

    With regard to the proposed financial risk standard for Medical Home Models, we believe that the Medical Home Model is a unique type of APM that is treated differently under both the MIPS and APM programs. For example, under the MIPS clinical practice improvement activity performance category, as described in section II.E.3.f. of this final rule with comment period, eligible clinicians participating in medical homes receive an automatic 100 percent score, whereas eligible clinicians participating in other APM Entities receive a minimum of a 50 percent score. Additionally, Medical Home Models are distinct from other APMs in that, if they are models tested under section 1115A of the Act, there is the possibility of having an alternate pathway to meet the financial risk criterion through expansion under section 1115A(c) of the Act; and the presence of Medicaid Medical Home Models in a state can affect whether Medicaid payments or patients are excluded in the All-Payer Combination Option for QP determinations (see sections 1833(z)(2)(B)(ii)(I)(bb) and (iii)(II)(cc)(BB), 1833(z)(2)(C)(ii)(I)(bb) and (iii)(II)(cc)(BB), 1833(z)(3)(C)(ii)(II), and 1848(q)(5)(C)(i) of the Act). Medical Home Models and their participating APM Entities (medical homes) are different from other APMs and their respective APM Entities in that: (1) Medical homes tend to be smaller in size and have lower Medicare revenues relative to total Medicare spending than other APM Entities, which affects their ability to bear substantial risk, especially in relation to total cost of care; and (2) to date, neither publicly sponsored nor commercially sponsored medical homes have been required to bear the risk of financial loss, which means the assumption of any financial risk presents a new challenge for medical homes. For example, a common group practice in the Comprehensive Primary Care (CPC) initiative may consist of fewer than 20 individuals, including physicians, non-physician practitioners, and administrative staff. Making large lump sum loss payments or going without regular payment for a substantial period of time could put such practices out of business, whereas large ACOs may comprise an entire integrated delivery system with sufficient financial reserves to weather direct short-term losses.

    We therefore believe that the unique characteristics of Medical Home Models warrant the application of a financial risk standard that reflects these differences to provide incentives for participation in the most advanced financial risk arrangements available to medical home practitioners.

    The proposed financial risk standard for Medical Home Models is similar to the generally applicable Advanced APM standard in its first three conditions. The difference is in the inclusion of the fourth condition for the proposed financial risk standard for Medical Home Models, which would allow a performance-based forfeiture of part or all of a payment under an APM to be considered a monetary loss. For example, a Medical Home Model would meet this standard if it conditions the payment of some or all of a regular care management fee to APM Entities upon meeting specified performance standards. Because the APM does not require any direct payment or repayment to us, a medical home penalized in such a manner would not necessarily be in a weaker financial position than it had been prior to the decreased payment; however, it would be in a comparatively worse position in the future than it otherwise would have been had it met performance standards. We believe that this financial risk standard respects the unique challenges of medical homes in bearing risk for losses while maintaining a more rigorous standard than business risk.

    We solicited comment on the proposed standards set forth for both Advanced APM Medical Home Models and for all other APMs, including any comments on alternative standards suggested by the public that could achieve our stated goals and the statutory requirements. We also solicited comment on types of financial risk arrangements that may not be clearly captured in this proposal.

    The following is a summary of the comments we received regarding our proposal for a unique Advanced APM financial risk standard for Medical Home Models.

    Comment: Many commenters believe that Medical Home Models should not have any financial risk requirement in order to be an Advanced APM. As with the generally applicable financial risk standard, commenters cited up-front costs related to participation. Some commenters also stated a belief that the proposed rule inappropriately imposes financial risk upon clinicians and could have unintended consequences for those serving vulnerable populations. Other commenters believe that instead of a separate risk standard for Medical Home Models, we should more generally focus on developing APMs for small organizations and consider targeted accommodations for rural practices.

    Response: As with the comments suggesting that we consider expenses and investments related to the APM, we appreciate the desire to expand the availability of Advanced APMs but ultimately believe that considering these as financial risk would not respect the statutory distinction between APMs and Advanced APMs. However, the Medical Home Model financial risk standard acknowledges that risk under the terms of an APM can be structured uniquely for smaller entities participating in Medical Home Models in a way that offers the potential for losses without threatening their financial viability.

    We disagree with comments stating that the statute supports no financial risk for Medical Home Model participants. Section 1833(z)(3)(D)(ii)(II) of the Act is clear that a Medical Home Model must be actually expanded under section 1115A(c) of the Act to meet the financial risk criterion without requiring APM Entities to bear more than nominal financial risk for monetary losses. The expanded Medical Home Model aspect of the financial risk criterion is described in full below in section II.F.4.b. of this final rule with comment period.

    We also disagree that our financial risk criterion for Medical Home Models to be Advanced APMs imposes undue risk on clinicians. This financial risk requirement only pertains to how a Medical Home Model must generally be structured in order to be an Advanced APM. There is no requirement that all Medical Home Models be Advanced APMs, and, to date, we have not created any mandatory Medical Home Models. Clinicians receive substantial credit under MIPS in the improvement activities performance category for participation in Medical Home Models or for holding certain patient-centered medical home certifications, regardless of whether they participate in an Advanced APM.

    In fact, the financial risk policy that we finalize here for Medical Home Models is an exception to the generally applicable rule in recognition that Medical Home Models are categorically different from other types of APMs. However, we do not have the authority to dispense with the statutory requirement that an Advanced APM is one in which participating APM Entities bear more than nominal financial risk for monetary losses unless the APM is a Medical Home Model expanded as permitted under section 1115A(c) of the Act.

    Lastly, we agree with commenters that we should focus on improving APM and Advanced APM participation opportunities for small and rural practices. However, we do not believe that pursuing those goals is mutually exclusive with creating Advanced APM participation opportunities through the Medical Home Model financial risk standard.

    Comment: Some commenters expressed their support for the separate Medical Home Model financial risk standard as placing a high value on the provision of primary care, and offered suggestions for further improvements, such as improving the Relative Value Unit (RVU) system that undergirds payment under the PFS even as we move away from entirely FFS payment. Other commenters supported the Medical Home Model financial risk standard but suggested that the entire financial risk criterion not apply to Medical Home Models until the 2018 QP Performance Period.

    Response: We thank the commenters for their support of the proposed policy, but note that modifying the RVU system under the PFS is beyond the scope of this final rule with comment period. We also appreciate the suggestion to delay the application of the financial risk criterion but do not believe that we have the authority to set aside the statutory criterion. Nevertheless, a delay in the assessment of the financial risk criterion for Medical Home Models to be considered Advanced APMs would not change any risk requirements imposed by the Medical Home Models. Risk is a component of the design of the APMs themselves, not something imposed by the Quality Payment Program. For instance, the financial risk for participants under the CPC+ model will be the same regardless of whether or not the model meets the Advanced APM financial risk criterion.

    We are finalizing the Advanced APM financial risk standard for Medical Home Models as proposed. For a Medical Home Model to be an Advanced APM, it must include provisions such that CMS could:

    • Withhold payment for services to the APM Entity and/or the APM Entity's eligible clinicians;

    • Reduce payment rates to the APM Entity and/or the APM Entity's eligible clinicians;

    • Require the APM Entity to owe payment(s) to CMS; or

    • Cause the APM Entity to lose the right to all or part of an otherwise guaranteed payment or payments,

    if either:

    • Actual expenditures for which the APM Entity is responsible under the APM exceed expected expenditures during a specified performance period; or

    • APM Entity performance on specified performance measures does not meet or exceed expected performance on such measures for a specified performance period.

    (4) Nominal Amount of Risk

    If the APM risk arrangement meets the proposed financial risk standard, we would then consider whether the amount of the risk is in excess of a nominal amount in order for this Advanced APM criterion to be met. We believe the statutory requirement that an APM Entity bear risk under an APM in excess of a nominal amount (which we would term the “nominal amount standard”) relates to a particular quantitative risk value at which we would consider the risk arrangement to involve potential losses of more than a nominal amount. Similar to the financial risk portion of this assessment, we proposed to adopt a generally applicable nominal amount standard for Advanced APMs and a unique nominal amount standard for Medical Home Models. Under the proposed generally applicable nominal amount standard, the total risk percentages are expressed as percentages of the APM Entity benchmark or, in the case of episode payment models, the target price, which is the amount of Medicare expenditures (which can vary as to the involvement of Parts A and B depending on the APM) above which an APM Entity owes losses and below which an APM Entity earns savings. In the case of Medical Home Models, the proposed risk percentages are based on the APM Entity's Medicare Parts A and B revenue. As an alternative, we considered assessing total risk under the generally applicable nominal amount standard (for APMs other than episode payment models) in relation to the APM Entity's Parts A and B revenue instead of in relation to the APM benchmark. We note that the ratio between entity revenue and the expenditures reflected in an APM's benchmark may vary across different types of entities, such as when the APM benchmark is based on total cost of care. We did not propose, but we sought comment on, the alternative of basing the generally applicable standard on Parts A and B revenue. We were concerned that assessing total risk based on an APM Entity's revenue instead of the APM benchmark could require case-by-case determinations at the APM Entity level that could change from year to year, and would set meaningfully different standards for different types of entities regarding the extent to which they must be held financially responsible if expenditures exceed the benchmark. That said, we understand that setting the total risk standard too high could create challenges for smaller organizations for which a total cost of care benchmark represents more risk in relation to revenue than it does for larger organizations.
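    To make the difference between these two possible denominators concrete, the following minimal Python sketch works through the arithmetic with entirely hypothetical figures; the function name, the 4 percent risk level, and the benchmark and revenue amounts are illustrative assumptions rather than values drawn from any particular APM.

        def risk_share(total_potential_risk, denominator):
            # Express total potential risk as a percentage of a chosen denominator.
            return 100.0 * total_potential_risk / denominator

        benchmark = 50_000_000        # hypothetical total cost of care benchmark, in dollars
        entity_revenue = 10_000_000   # hypothetical APM Entity Parts A and B revenue, in dollars
        total_potential_risk = 0.04 * benchmark  # risk capped at 4 percent of the benchmark

        print(risk_share(total_potential_risk, benchmark))       # 4.0 (percent of the benchmark)
        print(risk_share(total_potential_risk, entity_revenue))  # 20.0 (percent of entity revenue)

    As the sketch shows, the same dollar amount of risk that equals 4 percent of a total cost of care benchmark can equal 20 percent of the revenue of an entity whose revenue is a small fraction of that benchmark, which is the concern raised by commenters below.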

    (a) Advanced APM Nominal Amount Standard

    In general, we believe that the meaning of “nominal” is, as plain language implies, minimal in magnitude. However, in the context of financial risk arrangements, we do not believe it to be a mere formality. For instance, we do not believe the law was intended to consider one dollar of risk to be more than nominal. That would create an arbitrary distinction between an APM that has only upside reward potential and one that has the same upside reward potential with a fractional and relatively meaningless downside risk. Therefore, in arriving at the proposed values, we sought amounts that would be meaningful for the entity but not excessive. As reference points to anchor the proposed values, we used the percentage amounts of the MIPS payment adjustments specified in the MACRA and surveyed current APM risk arrangements, including those in Tracks 2 and 3 of the Shared Savings Program, the Pioneer ACO Model, and the Bundled Payments for Care Improvement (BPCI) Initiative. We considered the potential losses and marginal risk rates of those initiatives to be optimal in that they have been vetted through the APM development process and determined to be the appropriate amount of risk for each initiative such that, in the context of the APM, it is anticipated that the amount of risk would motivate the desired changes in care patterns to reduce costs and improve quality. As stated previously, we believe that an amount of risk that is more than “nominal” can be lower than optimal but must still be substantial enough to drive performance. In other words, we are confident that risk levels in current APMs with downside risk are sufficient for a wide variety of providers and suppliers, but in certain circumstances, we would want to encourage participation in APMs with slightly lower levels of risk, though not levels of risk that are so low that an APM becomes no more effective at motivating desired changes than APMs with no downside risk.

    Except for risk arrangements described under section II.F.4.b.(4) of this final rule with comment period, we proposed to measure three dimensions of risk described in this section to determine whether an APM meets the nominal amount standard: (a) Marginal risk, which is a common component of risk arrangements (particularly those that involve shared savings) that refers to the percentage of the amount by which actual expenditures exceed expected expenditures for which an APM Entity would be liable under the APM; (b) minimum loss rate (MLR), which is a percentage by which actual expenditures may exceed expected expenditures without triggering financial risk; and (c) total potential risk, which refers to the maximum potential payment for which an APM Entity could be liable under the APM. Except for risk arrangements described under section II.F.4.b.(3) of this final rule with comment period, we proposed that, for an APM to meet the nominal amount standard, the marginal risk must be at least 30 percent of losses in excess of expected expenditures; the minimum loss rate, to the extent applicable, must be no greater than 4 percent of expected expenditures; and the total potential risk must be at least 4 percent of expected expenditures. As described in greater detail in section II.F.7. of this final rule with comment period, the proposed Other Payer Advanced APM nominal amount standard paralleled the proposed standard described here for Advanced APMs. In general, we proposed to define expected expenditures to be the level of expenditures reflected in the APM benchmark. However, for episode payment models, we proposed to define expected expenditures to be the level of expenditures reflected in the target price.

    To determine whether an APM satisfies the marginal risk portion of the nominal amount standard, we would examine the payment required under the APM as a percentage of the amount by which actual expenditures exceeded expected expenditures. We proposed that we would require that this percentage exceed the required marginal risk percentage regardless of the amount by which actual expenditures exceeded expected expenditures. APM arrangements with less than 30 percent marginal risk would not meet the nominal amount standard. We believed that meaningful risk arrangements can be designed with marginal risk rates of greater than 30 percent. We believed that any marginal risk below 30 percent could create scenarios in which the total risk could be very high, but the average or likely risk for an APM Entity could actually be very low. We also proposed that the payment required by the APM could be smaller when actual expenditures exceed expected expenditures by enough to trigger a payment greater than or equal to the total risk amount required under the nominal amount standard. This was essentially an exception to the marginal risk requirement so that the standard would not effectively require APMs to incorporate total risk greater than the amount required by the total risk portion of the standard.

    We proposed a maximum allowable “minimum loss rate” (MLR) of 4 percent, under which the payment required by the APM could be smaller than the nominal amount standard would otherwise require when actual expenditures exceed expected expenditures by less than 4 percent; this exception accommodates APMs that include zero risk with respect to small losses but otherwise satisfy the marginal risk standard. If actual expenditures exceed expected expenditures by an amount exceeding the MLR, then all excess expenditures (including excess expenditures within the MLR) would be subject to the marginal risk requirements. For example, ACOs participating in performance-based risk arrangements under Tracks 2 and 3 of the Shared Savings Program are permitted to choose their own minimum savings rate (MSR) and MLR between zero and 2.0 percent or a variable MSR and MLR up to 3.9 percent based on the number of assigned beneficiaries, as long as the MSR and MLR are symmetrical. If losses do not exceed the chosen MLR, the ACO is not held responsible for losses. If the ACO has a very large MLR, there may be little to no risk with respect to losses below a certain percentage of the benchmark. Therefore, we believed it was appropriate to propose a maximum allowable MLR. We recognize that there may be instances where an APM could satisfy the marginal risk portion of the nominal amount standard even with a high MLR. Therefore, we also proposed a process through which we could determine that a risk arrangement with an MLR higher than 4 percent could meet the nominal amount standard, provided that the other portions of the nominal amount standard are met. In determining whether such an exception would be appropriate, we proposed to consider: (1) Whether the size of the attributed patient population is small; (2) whether the relative magnitude of expenditures assessed under the APM is particularly small; and (3) in the case of a test of limited size and scope, whether the difference between actual expenditures and expected expenditures would not be statistically significant even when actual expenditures are 4 percent above expected expenditures. We noted that we would grant such exceptions rarely, and we would expect APMs considered for such exceptions to demonstrate that a sufficient number of APM Entities are likely to incur losses in excess of the higher MLR. In other words, the potential for financial losses based on statistically significant expenditures in excess of the benchmark must remain meaningful for participants.

    To determine whether an APM satisfies the total risk portion of the nominal amount standard, we would identify the maximum potential loss an APM Entity could be required to incur as a percentage of expected expenditures under the APM. If that percentage exceeded the required total risk percentage, then the APM would satisfy the total risk portion of the nominal amount standard.
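    As a rough illustration of how these three proposed components would interact, the following Python sketch applies the proposed thresholds to a hypothetical shared-losses arrangement; the function names, the 50 percent marginal risk rate, the 2 percent MLR, and the 10 percent loss cap are illustrative assumptions and do not describe any specific APM.

        def meets_proposed_nominal_amount_standard(marginal_risk_rate, minimum_loss_rate, total_potential_risk, expected_expenditures):
            # Proposed thresholds: marginal risk of at least 30 percent; an MLR, if any,
            # of no more than 4 percent of expected expenditures; and total potential
            # risk of at least 4 percent of expected expenditures.
            marginal_ok = marginal_risk_rate >= 0.30
            mlr_ok = minimum_loss_rate is None or minimum_loss_rate <= 0.04
            total_ok = total_potential_risk >= 0.04 * expected_expenditures
            return marginal_ok and mlr_ok and total_ok

        def shared_loss_payment(actual, expected, marginal_risk_rate=0.50, minimum_loss_rate=0.02, total_risk_cap=0.10):
            # Illustrative loss calculation for a hypothetical shared-losses arrangement:
            # no repayment when losses stay within the MLR; otherwise the marginal risk
            # rate applies to all excess expenditures, capped at the total risk amount.
            excess = max(0.0, actual - expected)
            if excess <= minimum_loss_rate * expected:
                return 0.0
            return min(marginal_risk_rate * excess, total_risk_cap * expected)

        expected = 20_000_000
        print(meets_proposed_nominal_amount_standard(0.50, 0.02, 0.10 * expected, expected))  # True
        print(shared_loss_payment(actual=21_000_000, expected=expected))  # 500000.0

    Under those assumed terms, actual expenditures of $21,000,000 against a $20,000,000 benchmark would result in a $500,000 shared-loss payment, and the arrangement would satisfy all three proposed checks.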

    In evaluating both the total and marginal risk portions of the nominal amount standard, we would not include any payments the APM Entity or its eligible clinicians would make to us under the APM if actual expenditures exactly matched expected expenditures. In other words, payments made to us outside the risk arrangement related to expenditures would not count toward the nominal amount standard. This requirement ensures that perfunctory or pre-determined payments do not supersede incentives for improving efficiency. For example, an APM that simply requires an APM Entity to make a payment equal to 5 percent of the APM benchmark at the end of the year, regardless of actual expenditure performance, would not satisfy the nominal amount standard.

    In particular, the financial risk an Advanced APM Entity would bear under an Advanced APM need not take a shared savings structure in which the financial risk increases smoothly based on the amount by which an Advanced APM Entity's actual expenditures exceed expected expenditures. Examples of a risk arrangement based on shared savings are Tracks 2 and 3 of the Shared Savings Program, where the greater the losses in relation to the expenditure benchmark, the greater the potential amount of shared losses an ACO would be required to repay us. On the other hand, an Advanced APM could require APM Entities to pay a penalty based on expenditure targets, regardless of the degree to which the APM Entity actually exceeded those expenditure targets, provided that the payments are otherwise structured in a way that satisfies both the marginal and total risk requirements under the nominal amount standard.

    We solicited comment on appropriate levels for the allowable minimum loss rate and the parameters we should consider when determining whether a risk arrangement should warrant an exception from the minimum loss rate portion of the nominal amount standard.

    We solicited comment on the Advanced APM nominal amount standard. In particular, we solicited comment on whether the Advanced APM benchmark or the Advanced APM Entity revenue is a more appropriate basis for assessing total risk and on the proposed amounts of total potential risk, marginal risk, and maximum allowable minimum loss rate. In particular, we solicited comment on whether 30 percent is a sufficient level of marginal risk to be considered “more than nominal.” We also solicited comment on whether there could be a meaningful standard that we could adopt that only includes total and marginal risk without the minimum loss rate component. Finally, we solicited comment on a tiered nominal risk structure in which different levels of marginal risk could be paired with different levels of total risk.

    In commenting on possible alternatives, we encouraged commenters to refer to the policy principles articulated in section II.F.1. of this final rule with comment period and to consider the extent to which their proposed alternatives would be more or less consistent with those principles.

    The following is a summary of the comments we received regarding our proposal to set the generally applicable nominal amount standard such that, to be an Advanced APM, an APM must have total risk of at least 4 percent of expected expenditures, marginal risk of at least 30 percent, and, if applicable, a minimum loss rate (MLR) of no more than 4 percent, for which we would also have a process to determine whether a higher MLR is appropriate for particular APMs.

    Comment: The comments on the nominal amount standard split into three main themes (complexity, magnitude of risk, and basis of the percentage of risk), but all three elements are closely related. Most commenters expressed their belief that the generally applicable nominal amount standard is excessively complex and should be simplified. In particular, several commenters considered the inclusion of the marginal risk and minimum loss rate components to be especially complicated.

    Many commenters also believe that the proposed standard's amount of risk is too high because 4 percent of total cost of care could equate to upwards of 20 percent of an entity's revenue, depending on the composition of the APM Entity, and would discourage all but the most highly resourced organizations from Advanced APM participation. Some commenters suggested starting at a lower amount of total risk and increasing it over time. Many commenters believe that between 1 and 3 percent of Parts A and B revenue would be a reasonable definition of “more than nominal,” particularly in light of not including up-front or investment costs in the determination. Some commenters recommended tailoring risk standards based on various factors or adjusting marginal risk and total risk in relation to one another so that higher marginal risk could be paired with lower total risk. One commenter stated that the level of risk is too high because clinicians would not have access to information on the expenditures outside an APM Entity until the end of a given year. One commenter was concerned that the nominal amount standard would be burdensome for rural practices and potentially reduce access to care in rural settings. Some commenters requested that CMS make the generally applicable nominal risk definition more like that proposed for medical homes.

    Finally, many commenters stressed that basing the nominal amount standard on APM Entity revenue, rather than expected expenditures as proposed, would be a more meaningful standard that allows for tailoring risk to the size of APM Entities. Several commenters suggested that values between 2 and 15 percent of eligible clinician or APM Entity revenue would be an appropriate standard. Some commenters noted that this would also make the standard more comparable to MIPS.

    Response: We appreciate the response to this proposed policy. With respect to the marginal risk and MLR portions of the standard, we understand commenters' concerns that, despite being technically robust, these aspects of the standard are complex enough to require additional time to understand. For that reason, we are not finalizing the marginal risk and MLR requirements as proposed for year 1. We also believe that it is not necessary to explicitly include marginal risk and MLR components in the nominal amount standard because we are committed to creating Advanced APMs with strong financial risk designs that incorporate risk adjustment, benchmark methodologies, sufficient stop-loss amounts, and sufficient marginal risk, and because we anticipate that all APMs involving financial risk that we operate now or in the future will meet or exceed the proposed marginal risk and MLR requirements. In section II.F.7.b.(6) of this final rule with comment period, we are finalizing these marginal risk and MLR requirements with respect to Other Payer Advanced APMs for QP Performance Periods in 2019 and later, as we believe such requirements are important to prevent other payers from engineering payment arrangements, through manipulation of marginal risk, MLRs, or attribution methodologies, so that the possibility of reaching a stop-loss cap becomes very unlikely. We believe that this additional time will help mitigate commenters' concerns about complexity.

    Regarding the total risk portion of the proposed standard, we agree with commenters that the meaning of “nominal” can be relative and that for many APM Entities, 4 percent of a total cost of care benchmark could represent a significant fraction of an APM Entity's revenue. We believe such amounts of risk would be more than nominal for all APM Entities, but much more substantial for some APM Entities. We recognize that a revenue-based standard would provide an alternative approach under the nominal amount standard that would be particularly meaningful to practices of certain sizes. However, we caution that a revenue-based standard is not easily applied to most current APMs, which tend to base risk arrangements on expenditure benchmarks that are unrelated to a particular APM Entity's revenue. We believe that total cost of care benchmarks are optimal for many APMs, and those will continue to represent the preferred standard for assessing performance in terms of cost. We also caution that, under a revenue-based standard, certain types of APM Entities may have a significant probability of incurring losses that reach the stop-loss amount and thus of bearing no responsibility for expenditures in excess of expected expenditures beyond that point, which may undermine the ability of such APMs to drive performance for those APM Entities. In seeking a risk standard that is meaningful but not excessive, we sought to balance these considerations.

    In deciding on the policy that we finalize below, we considered several alternatives. For instance, we considered setting the revenue-based standard at up to 15 percent of revenue or setting the revenue-based standard at 10 percent so long as risk is at least equal to 1.5 percent of expected expenditures for which an APM Entity is responsible under an APM. While we are finalizing lower revenue-based standards for the first two QP Performance Periods in 2017 and 2018, we intend to increase the standard to one of the alternatives discussed above for the QP Performance Period in 2019 and later years. We will weigh public comments on this final rule with comment period and assess the impact of this standard, particularly on the design of Other Payer Advanced APMs by non-Medicare payers, in establishing the nominal amount standard for the QP Performance Period in 2019 and later. We particularly seek comments on a standard that tailors the level of risk to particular APM Entities' circumstances while also ensuring that APMs include strong incentives to improve performance and coordinate care across clinician types. In addition, we will consider the amount of risk taken in APM contracts (with Medicare and other payers) and seek comment on trends in those amounts and other factors that may inform the nominal risk standard for 2019.

    Finally, although we are finalizing a policy that is responsive to these comments in that we are not finalizing marginal risk components and we are generally reducing the requisite total risk for an APM to be an Advanced APM, we encourage commenters and other stakeholders to understand that, based on our preliminary analysis, all APMs that could be Advanced APMs for 2017 would have higher levels of risk than would be required under the proposed or the finalized standard. We also point out that reducing the standard for what constitutes a more than nominal amount of risk for losses for purposes of deciding whether an APM is an Advanced APM would not reduce the level of risk under any particular APM, nor is it likely to change the list of Advanced APMs in 2017. Rather, it opens the opportunity for future APMs to be considered Advanced APMs with lower levels of risk than those currently identified as potential Advanced APMs. However, as discussed above, we intend that such future APMs will meet the proposed marginal risk and minimum loss rate standards.

    Comment: Some commenters supported the proposed nominal amount standard but also suggested that we develop a more thorough strategy for helping practices develop the tools and capacity to manage risk and move into higher levels of risk over time. One commenter requested clarification as to when the nominal risk definition applies.

    Response: We appreciate these comments and agree that in addition to offering more Advanced APM opportunities, we also need to guide clinicians in being successful in APMs and Advanced APMs. We refer commenters to the discussion of technical assistance for APM adoption in section II.F.2. of this final rule with comment period. Regarding timing, we will publish a list of APMs that meet the finalized Advanced APM standards, as described in section II.F.4.a. of this final rule with comment period. To be clear, the nominal amount standard we are finalizing in this final rule with comment period is the standard we will use in determining whether an APM is an Advanced APM. The actual risk participants bear is defined through the APM itself according to the APM's unique terms and timeframe.

    Comment: Some commenters asked whether prospective payment systems (PPS) and bundled payments were considered in calculating risk.

    Response: To determine the amount of risk borne by an APM Entity in an APM, we will look at the specific risk arrangement under the APM, which may include bundled payments that are prospective or retrospective in nature, but would not include regular methods of Medicare payments for services. We will only assess financial risk that is under the APM; in other words, only risk arrangements that are part of the terms and conditions of the APM itself, not the underlying payment system or systems that the APM may modify. As expressed in the proposed rule and the finalized policy, we will assess total potential losses in relation to the target price for episode payment models.

    Comment: Some commenters requested clarification of what we meant in the proposed rule by stating that any payments made by an APM Entity to CMS outside the risk arrangement would not be counted toward the nominal amount consideration.

    Response: Payments made “outside” of a risk arrangement mean that the payments are not related to cost performance under the terms of the APM. For instance, an APM Entity could be required to pay CMS a flat fee of $1,000 or take a 1 percent discount on payments. No matter how well the APM Entity performs, those amounts are fixed under the APM. It is those types of payments that would not be considered at risk but rather a cost of APM participation.

    We are finalizing two ways that an APM can meet the Advanced APM nominal amount standard. An APM would meet the nominal amount standard if, under the terms of the APM, the total annual amount that an APM Entity potentially owes us or foregoes is equal to at least: (1) For QP Performance Periods in 2017 and 2018, 8 percent of the average estimated total Medicare Parts A and B revenues of participating APM Entities (the “revenue-based standard”); or (2) for all QP Performance Periods, 3 percent of the expected expenditures for which an APM Entity is responsible under the APM (the “benchmark-based standard”). For episode payment models, expected expenditures means the target price for an episode. We note that we are only finalizing the amount of the revenue-based nominal amount standard for the first two QP Performance Periods at this time. However, we intend to increase the revenue-based nominal amount standard for the third and subsequent QP Performance Periods. We seek comment on the amount and structure of the revenue-based nominal amount standard for QP Performance Periods in 2019 and later. Specifically, we seek comment on: (1) Setting the revenue-based standard for 2019 and later at up to 15 percent of revenue; or (2) setting the revenue-based standard at 10 percent so long as risk is at least equal to 1.5 percent of expected expenditures for which an APM Entity is responsible under an APM.
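    The arithmetic of the finalized either/or test can be sketched in Python as follows; the dollar figures and the function name are hypothetical, and the sketch is an illustration of the calculation only, not an operational determination tool.

        def meets_final_nominal_amount_standard(total_potential_risk, avg_parts_a_b_revenue, expected_expenditures):
            # Either test suffices for the 2017 and 2018 QP Performance Periods.
            revenue_based = total_potential_risk >= 0.08 * avg_parts_a_b_revenue
            benchmark_based = total_potential_risk >= 0.03 * expected_expenditures
            return revenue_based or benchmark_based

        # Hypothetical APM Entity: $800,000 of total potential risk, $8,000,000 in average
        # estimated Parts A and B revenue, and a $20,000,000 expected expenditure benchmark.
        print(meets_final_nominal_amount_standard(800_000, 8_000_000, 20_000_000))  # True

    In this example, $800,000 of total potential risk clears both the 8 percent revenue-based test ($640,000) and the 3 percent benchmark-based test ($600,000), and meeting either one alone would suffice.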

    The standard we are finalizing for the 2017 and 2018 QP Performance Periods is a change from the proposed nominal amount standard. Under this final standard, we would not assess marginal risk or MLRs. Additionally, instead of replacing the proposed benchmark-based total risk standard with the revenue-based standard, we are adopting the revenue-based standard as an additional option. Therefore, if an APM's financial design meets either of the two nominal amount standards, we would consider the nominal amount standard to be met. This makes the finalized standard more accommodating of the increasing variety of financial designs in APMs. For instance, current APMs that have total cost of care benchmarks, such as the Next Generation ACO Model, would be easily assessed as to whether they meet the benchmark-based standard because the standard and the APM design use the same metric. Other potential APM designs might be more easily assessed under the revenue-based standard. The nominal amount standard we are finalizing for the 2017 and 2018 QP Performance Periods further increases flexibility because, in the event that an APM using a total cost of care benchmark does not meet the benchmark-based standard, we would still assess it under the revenue-based standard by calculating the total potential risk as a percentage of the average estimated Medicare Parts A and B revenue of the participating APM Entities.

    Although we are finalizing a standard based in part on revenue, for episode payment models we believe the standard based on the target price is most relevant, as target price is the focal point for risk under such APMs. Using a revenue-based standard for episode payment models would likely disqualify most potential episode payment models from becoming Advanced APMs because their relatively narrow scope makes the amount at risk a smaller percentage of APM Entity revenue when compared to APMs like ACO initiatives.

    As discussed above, our intention in setting a revenue-based nominal amount standard is to tailor the level of risk an APM Entity must bear relative to the resources available to it. In instances where an APM Entity is one component of a larger health care provider organization, we believe that the revenue of the larger organization is a more accurate measure of the resources available to the APM Entity and should be the basis for setting the revenue-based nominal amount standard, even if only a portion of the organization is participating in the APM Entity.

    However, we believe that it will not be operationally feasible to apply the nominal amount standard in this fashion during the first two QP Performance Periods, so this final rule sets the revenue-based nominal amount standard based solely on the revenue of the APM Entity. Nevertheless, ideally, the nominal amount standard would take into consideration the resources available to an APM Entity using a measure such as revenue for the parent organization. We are evaluating the feasibility of implementing such a measure in lieu of APM Entity revenue for the third year of the program and later years. Under such an approach, we would anticipate basing the revenue-based nominal amount standard on the total Medicare Parts A and B revenues across the APM Entity, any parent organizations, any subsidiary organizations, and any subsidiaries of parent organizations for all eligible clinicians and groups who are participants of an APM Entity. We seek comment on this approach and how such an approach could be implemented while minimizing burden on participants.

    (b) Medical Home Model Nominal Amount Standard

    We proposed that for Medical Home Models, the total annual amount that an Advanced APM Entity potentially owes us or foregoes under the Medical Home Model must be at least the following amounts in a given performance year:

    • In 2017, 2.5 percent of the APM Entity's total Medicare Parts A and B revenue.

    • In 2018, 3 percent of the APM Entity's total Medicare Parts A and B revenue.

    • In 2019, 4 percent of the APM Entity's total Medicare Parts A and B revenue.

    • In 2020 and later, 5 percent of the APM Entity's total Medicare Parts A and B revenue.

    We believe the statute's explicit discussion of medical homes gives us unique latitude to separately set financial risk and nominal amount standards for Medical Home Models that fall below an amount we consider sufficient to be “more than nominal” in the context of other types of APMs. We also believe that the meaning of the term “nominal” depends on the situation in which it is applied, so we believe it is appropriate to consider the characteristics of the APM Entities in Medical Home Models in setting the nominal amount standard for Medical Home Models. As we noted in discussing the financial risk standard, few APM Entities in Medical Home Models have had experience with financial risk, and many would be financially unable to provide sufficient care or even remain a viable business in the event of substantial disruptions in revenue. As such, we believe we should base the nominal amount standard on the APM Entity's total Medicare Parts A and B revenues and also avoid a potentially excessive level of risk for such entities. Our proposal set forth a gradually increasing but achievable long-term amount of risk that would apply in subsequent years. In general, we believe that this scheme allows Medical Home Models to craft incentive designs that allow participants in Medical Home Models to succeed through care transformation and the provision of high-value care while not threatening the ability of small practices to function.

    Even more than for participants in non-Medical Home Models, basing the Medical Home Model nominal amount standard on percentage of risk in relation to a total cost of care benchmark would mean that participants would be required to bear greater total risk in relation to their revenues than other entities, which we believe would be undesirable in light of the special characteristics of Medical Home Models.

    For the Medical Home Model nominal amount standard, we solicited additional comment on the length of the proposed multi-year “ramp up period” and the magnitude of the total risk amounts during such a period. We also solicited comment on the potential addition of a marginal risk amount to the extent applicable and on whether the Advanced APM benchmark or Advanced APM Entity revenue is the most appropriate standard for measuring total risk.

    In commenting on possible alternatives, we encouraged commenters to refer to the policy principles articulated in section II.F.1. of this final rule with comment period and to consider the extent to which their proposed alternatives would be more or less consistent with those principles.

    The following is a summary of the comments we received regarding our proposal for the Advanced APM nominal amount standard for Medical Home Models.

    Comment: Several commenters stated that 2.5 percent of Medicare Parts A and B revenue is an appropriate standard for the minimum total risk a Medical Home Model must require to be an Advanced APM and that we should not increase that requirement to 5 percent over time. Some commenters noted that such a quick increase, set prospectively, is unwise because there is little experience with risk in the Medical Home Model context for all stakeholders involved. Some commenters expressed concern that this standard is too limiting in that too few clinicians will have access to an Advanced APM in 2017 or 2018.

    Response: We understand commenters' concerns that a programmed increase from 2.5 percent to 5 percent of revenue over several years is too great in magnitude and premature. However, we believe that an ultimate Medical Home Model nominal amount standard of 5 percent is appropriate, and that setting the standard at 5 percent of Parts A and B revenue strikes the appropriate balance to reflect the meaning of “nominal” in the Medical Home Model context. We do not believe the proposed increase in risk over time would be unmanageable. Instead, we consider the incremental increases in the standard over several years from 2.5 percent to 5 percent to be a recognition that the earliest adopters of risk in the Medical Home Model context might initially consider any losses to be substantial while acclimating to bearing risk, but, with successive years of experience, would gain comfort and confidence in assuming higher levels of risk.

    We also reiterate, as we note for the generally applicable nominal amount standard, that the terms and conditions in the particular APM govern the actual risk that participants experience; the nominal amount standard we are setting in this final rule with comment period merely sets a floor on the level of risk required to be an Advanced APM. Therefore, we do not believe that this nominal amount standard for Medical Home Models will in itself limit Advanced APM participation opportunities. Rather, we believe that developing more APMs, amending existing APMs, expanding successful APMs, and reopening applications for certain APMs could result in increased opportunities to participate in Advanced APMs in the near future.

    We are finalizing the Medical Home Model nominal amount standard as proposed.

    To be an Advanced APM, a Medical Home Model must require that the total annual amount that an Advanced APM Entity potentially owes us or foregoes under the Medical Home Model be at least the following amounts in a given performance year:

    • In 2017, 2.5 percent of the APM Entity's total Medicare Parts A and B revenue.

    • In 2018, 3 percent of the APM Entity's total Medicare Parts A and B revenue.

    • In 2019, 4 percent of the APM Entity's total Medicare Parts A and B revenue.

    • In 2020 and later, 5 percent of the APM Entity's total Medicare Parts A and B revenue.

    Also, in parallel with the generally applicable nominal amount standard, if the financial risk arrangement under the Medical Home Model is not based on revenue (for example, it is based on total cost of care or a per beneficiary per month dollar amount), we will make a determination for the APM based on the risk under the Medical Home Model compared to the average estimated Medicare Parts A and B revenue of its participating APM Entities, using the most recently available data.
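    The finalized schedule can be illustrated with a brief Python sketch; the practice revenue figure is hypothetical, and the function is an illustration of the percentages above rather than an operational tool.

        # Minimum share of an APM Entity's total Medicare Parts A and B revenue that a
        # Medical Home Model must place at risk, by performance year (2017 and later).
        MEDICAL_HOME_RISK_SCHEDULE = {2017: 0.025, 2018: 0.03, 2019: 0.04}
        LATER_YEAR_RATE = 0.05  # applies in 2020 and later performance years

        def minimum_amount_at_risk(performance_year, parts_a_b_revenue):
            # Return the minimum dollar amount at risk for a given performance year.
            rate = MEDICAL_HOME_RISK_SCHEDULE.get(performance_year, LATER_YEAR_RATE)
            return rate * parts_a_b_revenue

        # Hypothetical practice with $2,000,000 in Medicare Parts A and B revenue.
        for year in (2017, 2018, 2019, 2020):
            print(year, minimum_amount_at_risk(year, 2_000_000))
        # Prints 50000.0, 60000.0, 80000.0, and 100000.0 for 2017 through 2020.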

    We believe that, given the unique financial risk and nominal amount standards we proposed for Medical Home Models, it would be appropriate to impose size and composition limits for the Medical Home Models to which the unique standards would apply to ensure that the focus is on organizations with a limited capacity for bearing the same magnitude of financial risk as larger APM Entities. We proposed that, beginning in the second QP Performance Period (proposed to be 2018), the Medical Home Model financial risk standard and nominal amount standard, described in section II.F.4.b.(4) of this final rule with comment period, would only apply to APM Entities that participate in Medical Home Models and that have 50 or fewer eligible clinicians in the organization through which the APM Entity is owned and operated. Thus, in a Medical Home Model that meets the criteria to be an Advanced APM, the proposed Medical Home Model financial risk and nominal amount standards would only apply to those APM Entities owned and operated by organizations with 50 or fewer eligible clinicians. We believe it is appropriate to use the number of eligible clinicians as the basis, rather than physicians, for this threshold because the number of eligible clinicians reflects organizational resources and capacity and can vary widely among organizations with the same number of physicians. We also believe that the size threshold of 50 eligible clinicians is appropriate because organizations of that size have demonstrated the capacity for, and interest in, taking on higher levels of two-sided risk either by themselves or by joining with other organizations. In the event that a Medical Home Model happens to meet the generally applicable financial risk and nominal amount standards, this organizational size limitation would not be applicable. We proposed the same restriction on Medicaid Medical Home Models as discussed in section II.F.7. of this final rule with comment period.

    Measuring organizational size based on the size of the “parent organization” differs from measuring it based on the size of the APM Entity. Collecting accurate information on the number of eligible clinicians affiliated with a parent organization would require additional, but we believe achievable, reporting by APM Entities. We believe that size of the organization is generally a better indication of risk-bearing capacity than APM Entity size. For instance, an APM Entity may be very small if it represents one practice site, but that practice site may be one of many affiliated with a health system or independent physician association of substantial size. We believe that the proposed limits on the types and sizes of entities that can be Advanced APM Entities under Medical Home Models would encourage larger organizations to move into Advanced APMs with greater levels of risk than the smaller levels that could enable Medical Home Models to become Advanced APMs. This is consistent with our goals that the incentives for Advanced APM participation should reward commitment to challenging models. However, we do not intend to imply that participation in Medical Home Models is necessarily inappropriate for larger organizations. We recognize that Medical Home Models differ from other APMs, such as ACO initiatives, because Medical Home Models focus on improving primary care through much more targeted and intensive interventions than those commonly found in other APMs. We hope to encourage participation in Medical Home Models for all organizations that can derive value from their designs, not just those that are too small to join ACO initiatives and other higher risk APMs.

    We proposed to implement this size limitation for Advanced APMs that are Medical Home Models beginning in the second year of the Quality Payment Program (2018 QP Performance Period) because we understand that applications for many APMs would be due to us prior to this final rule, precluding APM Entities from having time to substantially adjust their APM participation strategies for the 2017 QP Performance Period. We proposed that we would make a determination of whether an APM Entity meets the size limitation prospectively before a QP Performance Period, and that the determinations would not subsequently change based on changes in organizational size during or after the QP Performance Period (although changes in organizational size would, as applicable, affect determinations for subsequent QP Performance Periods).

    We solicited comment on this proposal, particularly with regard to the use of the count of eligible clinicians in the parent organization of the APM Entity as the metric of organizational size for Medical Home Models, and whether setting the limit at 50 eligible clinicians in the organization would constitute a reasonable threshold to distinguish organizations that we could expect to have the financial capability to join APMs with two-sided risk, such as ACO initiatives, from those that do not. We also solicited comment on an alternative option to establish the size limitation based on the number of eligible clinicians in the entire Medical Home Model, rather than on the number of eligible clinicians in a particular APM Entity's organization.

    The following is a summary of the comments we received regarding our proposal to, starting in the second QP Performance Period, restrict the applicability of the Medical Home Model financial risk and nominal amount standards to APM Entities with 50 or fewer eligible clinicians in their parent organizations. Comments regarding our proposal to apply the same restriction to Medicaid Medical Home Models are also included.

    Comment: Many commenters expressed opposition to this policy. Commenters cited deterrence of participation by larger organizations in Medical Home Models that are Advanced APMs because of the inability to earn the APM Incentive Payment, difficulties in creating attractive multispecialty Medical Home Models, and disadvantages for large organizations competing for eligible clinicians. They believe that the APM Incentive Payment is a strong incentive, and that the presence or absence of the opportunity to earn it will be a driving factor in eligible clinician and APM Entity decision-making.

    Some commenters believe that our proposed size criterion of 50 eligible clinicians in the organization is an arbitrary cutoff that does not accurately represent a distinction between organizations that can and cannot reasonably assume downside risk, and some asked for clarification of why the cutoff was set at 50. Some suggested that if we do not eliminate the size limit, we should increase it to 100 or 200 clinicians. Other commenters suggested that the limit be applied to APM Entities rather than parent organizations.

    Response: We appreciate the many comments on this topic, and understand that the organization size limit creates an additional consideration for entities looking to participate in an Advanced APM. In many ways, it is consistent with our goal that entities move toward robust, performance-based APMs. Creating a unique Medical Home Model financial risk criterion reflects what we believe is a reasonable goal for smaller entities' risk-bearing capacity. We also believe that organization size is a meaningful proxy for potential risk-bearing capacity. In arriving at the magnitude of the limit, we compared the sizes of Shared Savings Program ACOs across tracks of the program to the organizational sizes of CPC practices and found that the vast majority of CPC practices fell below this number and the vast majority of ACOs were above this number. We believe that this supports using eligible clinician counts as a proxy for risk-bearing capacity and for selecting 50 as the cutoff that differentiates between use of the Medical Home Model or the generally applicable financial risk criterion. Therefore, we believe that our proposed policy is sound, especially because there is no limit in the first year, and organizations will have the time to consider their options accordingly.

    We also believe that a Medical Home Model such as CPC+ offers many inherent benefits to its participants regardless of the opportunity to earn the APM Incentive Payment. The 5 percent APM Incentive Payment will be one benefit to certain Advanced APM participants, but the opportunities within APMs themselves should be the primary drivers of participation decisions because those risks and rewards within the APMs can outweigh the 5 percent APM Incentive Payment. Therefore, we encourage organizations with both greater and fewer than 50 clinicians to consider the ability of Medical Home Models such as CPC+ to help develop care infrastructure and transform practices to be more patient-centered and value-oriented.

    Comment: Some commenters suggested that instead of using the number of eligible clinicians we use clinician revenue or the size of the APM Entity's patient panel or attributed beneficiary list in order to draw a distinction between organizations' risk-bearing capacity.

    Response: We appreciate the idea of using alternative methods of setting the size limit for a Medical Home Model Advanced APM such as patient panel size or revenue. Attribution and revenue have much greater variability across APM Entities than the number of eligible clinicians, which would make setting a meaningful number more challenging. Further, for any APM Entity, attribution numbers can vary significantly from year to year, partly in relation to the number of eligible clinicians, but also due in part to uncontrollable factors such as beneficiary behavior and the presence of multiple APM Entities in the same region that vie for the attribution of a similar pool of beneficiaries. Finally, several APMs require that an APM Entity have a minimum number of attributed beneficiaries in order to be eligible to participate. We have data on APM Entity attribution numbers, but because we are pursuing an appropriate proxy for the risk-bearing capacity of a parent organization, we do not believe that we could accurately obtain patient panel data for entire organizations without imposing a substantial administrative burden on such organizations. Therefore, we continue to believe that the best metric available to us at this time is the number of eligible clinicians in the organization.

    Comment: Some commenters expressed concern that the Medical Home Models that are Advanced APMs offer an incentive for clinicians to enter Advanced APMs with lower levels of risk than they would otherwise bear. A commenter stated that this could cause the Advanced APMs to compete with one another, and that the lowest risk option that is an Advanced APM will be the most attractive to many clinicians.

    Response: The concern expressed by these commenters is what led us to propose this policy. We believe that organizations capable of taking on significant downside risk should have the incentives align to encourage them to assume the amount of risk that matches their capabilities. However, for many smaller organizations, a high degree of risk such as that required in the ACO initiatives is not a viable option. We believe participation in a Medical Home Model such as CPC+ represents the most risk some smaller organizations can handle at this time, and such APMs offer invaluable support for transforming practices to achieve our delivery system reform goals. That is, the balance we try to strike in this policy is to provide incentives for participation in Advanced APMs but also encouragement for each APM Entity to participate in the best “fit” APM for them.

    We are finalizing as proposed the limitation on applicability of the Medical Home Model financial risk and nominal amount standard to APM Entities with 50 or fewer eligible clinicians in their parent organizations. This limitation would not apply to the first QP Performance Period that begins in 2017. Therefore, any APM Entity participating in a Medical Home Model that meets the unique Medical Home Model Advanced APM standards will be considered to be participating in an Advanced APM and have the opportunity to become a QP for purposes of payment year 2019. Starting in the QP Performance Period that begins in 2018, the Medical Home Model Advanced APM financial risk standard would not apply to APM Entities that are owned and operated by organizations with more than 50 eligible clinicians. As such, participation in a Medical Home Model Advanced APM by such an APM Entity would not offer the opportunity to attain QP status through that Medical Home Model unless the Medical Home Model meets the generally applicable Advanced APM financial risk criterion. Beginning with the QP Performance Period starting in 2018, we will make this size limit determination for APM Entities in relevant Medical Home Models prior to a QP Performance Period using the most recently available information from the year prior to the QP Performance Period. Therefore, the first determinations of organization size will take place in 2017 using information gathered in 2017. We intend to collect the necessary information through the Medical Home Model operations and will issue guidance on how and when we will do so.

    (5) Capitation

    We proposed that full capitation risk arrangements would meet the Advanced APM financial risk criterion. We proposed that, for purposes of this rulemaking, a capitation risk arrangement means a payment arrangement in which a per capita or otherwise predetermined payment is made to an APM Entity for all items and services furnished to a population of beneficiaries, and no settlement is performed for the purpose of reconciling or sharing losses incurred or savings earned by the APM Entity. We also reiterated that Medicare Advantage and other private plans paid to act as insurers on the Medicare program's behalf are not Advanced APMs.

    We believe that capitation risk arrangements, as defined here, involve full risk for the population of beneficiaries covered by the arrangement, recognizing that it might require no services whatsoever or could require exponentially more services than were expected in calculating the capitation rate. The APM Entity bears the full downside and upside risk in this regard. Thus, we believe capitation arrangements inherently require an APM Entity to bear financial risk for monetary losses in excess of a nominal amount. We proposed that, where payment is made to participating entities in an APM using a capitation risk arrangement, the APM and participating entities would meet the criterion under section 1833(z)(3)(D)(ii)(I) of the Act.

    In implementing this proposed policy, it is important to distinguish capitation as a risk arrangement from capitation as only a cash flow mechanism. A capitation risk arrangement adheres to the idea of a global budget for all items and services to a population of beneficiaries during a fixed period of time. Cash flow mechanisms that make payments in predetermined amounts that are later reconciled or adjusted based on actual services are not necessarily a full risk arrangement. For example, an APM Entity has a capitation arrangement under an APM that pays $1,000 per beneficiary per month for a population of 100 beneficiaries, totaling $1.2 million per year. If expenditures for services actually furnished to these beneficiaries would have totaled $1.3 million if paid on a FFS basis, a payment mechanism without risk might make a reconciliation payment of $100,000 to the entity. In that case, the APM Entity is not bearing any financial risk for monetary losses under the APM. If there is partial reconciliation, the arrangement would not meet the proposed capitation risk arrangement definition but still may meet the financial risk and nominal amount standards through the assessments described in this section above. In contrast, if this arrangement is a capitation risk arrangement, there would be zero reconciliation for those losses. Under our proposal, we would categorically accept that a capitation risk arrangement under an APM would meet the Advanced APM financial risk criterion.
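    The arithmetic in the example above can be summarized in the following minimal sketch (written in Python solely as an illustration and not as part of the regulatory text); the function and variable names are hypothetical.

        def annual_capitation(per_beneficiary_per_month, beneficiaries, months=12):
            # Predetermined per capita payment for the year: $1,000 x 100 x 12 = $1,200,000.
            return per_beneficiary_per_month * beneficiaries * months

        capitated_payment = annual_capitation(1000, 100)  # $1,200,000 paid prospectively
        ffs_equivalent = 1300000                          # what the services would have cost if paid on a FFS basis

        # Cash flow mechanism without risk: the shortfall is reconciled after the fact,
        # so the APM Entity bears no financial risk for monetary losses.
        reconciliation_payment = max(ffs_equivalent - capitated_payment, 0)  # $100,000

        # Capitation risk arrangement: zero reconciliation; the APM Entity absorbs the loss.
        entity_gain_or_loss = capitated_payment - ffs_equivalent            # -$100,000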

    We solicited comment on our proposal for the categorical acceptance of capitation risk arrangements as satisfying the Advanced APM financial risk criterion and on our proposed definition of a capitation risk arrangement. We also solicited comment on other types of arrangements that may be suitable for such treatment for purposes of this financial risk criterion. Finally, we solicited comment on potential limits or qualifications to the capitation standard to prevent potential abuse or incentives that are not consistent with the provision of high value care.

    The following is a summary of the comments we received regarding our proposal to consider full capitation risk arrangements to meet the Advanced APM financial risk criterion.

    Comment: Several commenters expressed support for considering full capitation payment arrangements to meet the Advanced APM financial risk criterion. Some commenters requested that we further clarify what we would consider full capitation and how we would treat partial capitation arrangements. In particular, we received suggestions that full capitation be in reference to all “agreed upon items and services” rather than “all items and services.” Finally, some commenters requested that we not limit this policy to arrangements without reconciliation for savings or losses. One commenter cautioned that an over-abundance of capitation arrangements in a market could fuel consolidation and restrict the diversity of practice types and sizes, and one commenter wanted assurance that capitation would be accompanied by appropriate quality measurement to mitigate a focus only on cost.

    Response: We appreciate the general support for this policy. With respect to defining full capitation, we believe that the structure as proposed is neither too broad nor too narrow. In our preamble language, we described full capitation as a “global budget for all items and services to a population of beneficiaries during a fixed period of time.” We believe that that is a key distinction between full and partial capitation. An “agreed upon” set of items and services could be relatively small compared to all items and services in the payment arrangement between parties. Therefore, we believe that this standard should only apply to “full” capitation. Similarly, as described in the proposed rule, reconciliation of settlement of savings and losses mitigates and removes the risk aspect of capitation. This is the difference between a risk arrangement, in which there is no reconciliation, and a cash flow mechanism, in which the ultimate payment amount is adjusted after the fact to account for variations in utilization.

    For payment arrangements that do not meet this definition of full capitation, we would still assess the arrangement under the applicable financial risk criterion. Therefore, partial capitation arrangements could meet the criterion so long as the magnitude of the payments at risk involved in the arrangement meets the nominal amount standard and the arrangement is actually a risk arrangement rather than a cash flow mechanism.

    Finally, we appreciate the commenter's concern that a high prevalence of capitation arrangements without sufficient quality performance requirements could misplace incentives for delivering high value care. We will monitor the impact of Advanced APMs with capitation arrangements in the future, especially as the All-Payer Combination Option becomes available beginning in payment year 2021. We also believe that this concern is mitigated in part by the Advanced APM MIPS-comparable quality measure requirement, described earlier in this section, because Advanced APMs must also base payment at least in part on meaningful quality measures instead of entirely on cost performance.

    We are finalizing the policy that full capitation arrangements would meet the Advanced APM financial risk criterion. All other payment arrangements would be assessed against the applicable nominal amount standards set forth in this final rule.

    (6) Medical Home Expanded Under Section 1115A(c) of the Act

    Section 1833(z)(3)(D)(ii)(II) of the Act states that an Advanced APM must either meet the financial risk criterion or be a Medical Home Model expanded under section 1115A(c) of the Act. We refer to the latter criterion as the expanded Medical Home Model criterion. We proposed that a Medical Home Model that has been expanded under section 1115A(c) of the Act would meet the expanded Medical Home Model criterion and thus would not need to meet the Advanced APM financial risk criterion as described above. Under this proposal, an APM would have to both be determined to be a Medical Home Model as defined in this rulemaking and in fact be expanded using the authority under section 1115A(c) of the Act. Such expansion is contingent upon whether, for an APM tested under section 1115A(b) of the Act:

    • The Secretary determines that such expansion is expected to reduce spending under the applicable title without reducing the quality of care; or improve the quality of patient care without increasing spending;

    • CMS' Chief Actuary certifies that such expansion would reduce (or would not result in any increase in) net program spending under the applicable titles; and

    • The Secretary determines that such expansion would not deny or limit the coverage or provision of benefits under the applicable title for applicable individuals. In determining which models or demonstration projects to expand under the preceding sentence, the Secretary shall focus on models and demonstration projects that improve the quality of patient care and reduce spending.

    We note that the expanded Medical Home Model criterion cannot be met unless a Medical Home Model has been expanded under section 1115A(c). Merely satisfying expansion criteria would not be sufficient to meet this Advanced APM criterion. This expanded Medical Home Model criterion is directly related to a similar criterion addressed in the proposed rule for Medicaid Medical Home Models, which addresses how such APMs can meet the Other Payer Advanced APM financial risk criterion by having criteria comparable to an expanded Medical Home Model. We requested comments on the proposed requirements for this and all proposed Advanced APM criteria.

    The following is a summary of the comments we received regarding our proposal that Medical Home Models that are expanded under section 1115A(c) of the Act would meet the Advanced APM financial risk criterion.

    Comment: Several commenters urged us to assess the Comprehensive Primary Care (CPC) initiative in order to expand it under section 1115A(c) authority as soon as possible. Some commenters also stated that this criterion was very narrow and limits the future Medical Home Model opportunities for Advanced APM participation. Some commenters believe that this is not aligned with Congressional intent to enable Medical Home Models to become Advanced APMs without meeting the financial risk criteria.

    Response: Expansion of the CPC initiative is outside the scope of this final rule with comment period. We will continue to consider whether CPC meets the statutory expansion criteria. As with CPC, we will closely monitor the results of CPC+ in order to determine whether it meets the statutory criteria for expansion in the future.

    With respect to the narrowness of this policy, we believe that we do not have the statutory authority to broaden the standard to include Medical Home Models that have not actually been expanded. Section 1833(z)(3)(D)(ii)(II) of the Act is quite clear in its reference to expansion under section 1115A(c) of the Act.

    We are finalizing the expanded Medical Home Model policy as proposed. An APM that is determined to be a Medical Home Model and has in fact been expanded using the authority under section 1115A(c) of the Act meets the Advanced APM financial risk criterion.

    (7) Application of Criteria to Current and Recently Announced APMs

    In the proposed rule, we used the proposed Advanced APM criteria to identify the current APMs that we anticipate would be Advanced APMs for the first QP Performance Period. The list of proposed Advanced APMs was based on the application of criteria in the proposed rule and did not preclude any changes to the list based on: (1) any changes made to the proposed criteria or their application in this final rule; (2) any modifications to the design of current APMs; or (3) any new APMs announced after publication of the proposed rule. Consistent with our finalized policy to post an official determination of which APMs would meet the final Advanced APM criteria prior to the beginning of the first QP Performance Period, we will publish such materials on the CMS Web site following the publication of this final rule with comment period.

    The following is a summary of the comments we received on the preliminary assessment of which current APMs meet the Advanced APM criteria.

    Comment: Many commenters responded to the publication of the preliminary list of Advanced APMs by suggesting additional candidates to be Advanced APMs. Several commenters supported the indication that certain APMs, such as Shared Savings Program Tracks 2 and 3, the Oncology Care Model, and the Next Generation ACO Model, would be Advanced APMs based on the proposed criteria. Other commenters stated their belief that the Shared Savings Program Track 1, BPCI, and the proposed Part B Drug Payment Model should be Advanced APMs as well. Some commenters suggested that the current Maryland All-Payer Model should be classified as an Advanced APM, and that participating Maryland hospitals and hospital-based clinicians should be considered Advanced APM Entities because they will be primarily responsible for the cost and quality of care provided to beneficiaries. Commenters cited that participants in such APMs currently represent some of the most innovative and dedicated organizations interested in driving delivery system reform goals. Other commenters generally stated that the current list of Advanced APMs is quite limited and that there should be more Advanced APMs, specifically for hospitals and specialties.

    Response: We thank the commenters for their thoughts on which APMs should or should not be Advanced APMs. We are finalizing the criteria and discuss the rationale for those decisions earlier in this section, and we have highlighted the many ways in which we plan to expand the opportunities for Advanced APM participation. For instance, concurrent with the release of this rule, we explain our strategy to: (1) Reopen certain APMs for additional application rounds; (2) amend the design of certain APMs so that they meet the Advanced APM criteria; and (3) engage in development of new APMs that could be Advanced APMs, potentially including APMs based on recommendations from the PTAC. Finally, we encourage the commenters to examine the final Advanced APM determinations for 2017 that we will publish no later than January 1, 2017. These determinations will identify which Advanced APM criteria each APM meets or does not meet.

    Comment: Some commenters responded to the proposed list of APMs by submitting ideas for the design of new APMs.

    Response: We thank the commenters for providing input on the design of potential future APMs. We note that soliciting comment on the design of potential future APMs is outside of the scope of this final rule with comment period. However, we remind commenters of the PTAC, as described in section II.F.10. of this final rule with comment period, and note that commenters can submit proposals for the design of new APMs directly to the Innovation Center.

    Comment: Some commenters urged CMS to identify a Medical Home Model that would be an Advanced APM and stated their belief that it was Congress' intent to have a Medical Home Model that is an Advanced APM.

    Response: We thank the commenters for emphasizing the importance of making Medical Home Models available as Advanced APMs. As stated in section II.F.4.b.(6) of this final rule with comment period, the unique statutory path specified for Medical Home Models to become Advanced APMs explicitly requires expansion under section 1115A(c) of the Act, which has not yet occurred for any Medical Home Model. In the absence of a Medical Home Model that has been expanded under section 1115A(c) of the Act, the Medical Home Model financial risk criterion could allow a Medical Home Model to be an Advanced APM without meeting the expansion pathway set forth in the law.

    In the proposed rule, we noted that the CJR model did not meet the proposed Advanced APM criteria. We solicited comment on how we might change the design of CJR through future rulemaking to make it an Advanced APM, and we solicited comment on how to include eligible clinicians in CJR for purposes of the QP determination as described in section II.F.5. of this final rule with comment period.

    The following is a summary of the comments we received regarding our request for comments on how to redesign the CJR model to make it an Advanced APM.

    Comment: Many commenters urged CMS to modify existing programs, such as the CJR model, Track 1 of the Shared Savings Program, and BPCI, to make them meet the criteria for Advanced APMs and to create an Advanced APM “on-ramp” for interested participants. Specifically, many commenters recommended that CJR and BPCI be modified to require the use of CEHRT, and that steps be taken to enable BPCI to include quality measures that will satisfy the Advanced APM quality criterion. One commenter expressed the view that CJR currently meets the requirements of an Advanced APM. Commenters recommended rewarding clinicians with improvement activities credit for participating in CJR and BPCI programs that satisfy the Advanced APM criteria. Some commenters suggested that CMS either allow all tracks of the CEC model to be an Advanced APM or offer an option for non-Large Dialysis Organization (LDO) participants in the CEC model to assume downside risk in order to be in an Advanced APM. One commenter suggested that CMS consider the Maryland All-Payer Model to be an Advanced APM.

    Response: To be considered an Advanced APM, an APM must meet the three criteria described in this section through the terms of its arrangement with APM Entities. It is not sufficient that an APM Entity, independent of an obligation under the APM, meets the standards.

    We agree with commenters that one way for CMS to encourage more participation in Advanced APMs is to assess and modify existing APMs to meet the criteria for Advanced APMs. We considered this in developing proposed amendments to CJR (81 FR 50793), and we are considering implementing a new voluntary bundled payment APM for CY 2018 that could meet the Advanced APM criteria.

    Comment: One commenter requested that any APM in which CMS takes a direct discount off of FFS payments, such as CJR, qualify for the APM Incentive Payment regardless of whether it meets the Advanced APM criteria outlined in the proposed rule. Another commenter requested that we deem CJR an Advanced APM regardless of modifications to the model.

    Response: We believe we have defined the statutory criteria appropriately, consistent with the terms of the statute. As such, Advanced APMs are limited to those that meet the final criteria.

    Comment: A few commenters suggested that CMS make the following changes to CJR: (1) Restructure CJR by replacing the hospital as the APM Entity with MIPS eligible clinicians; (2) replace CJR's retrospective reimbursement with a prospective payment; and (3) include outpatient services in CJR. Another commenter recommended that CMS use its own data to determine which CJR hospitals meet the Meaningful Use requirements and relay this information to affiliated clinicians, or, in the alternative, add a measure similar to the Shared Savings Program measure that assesses the use of CEHRT by certain eligible clinicians. Another commenter suggested that CMS ask CJR hospitals to voluntarily provide a list of eligible clinicians who treat patients in the hospital for any of the CJR procedures to satisfy the Advanced APM Participation List requirement. Also, to satisfy the CEHRT requirement, this commenter suggested that CMS either use the Advancing Care Information domain data submitted by eligible clinicians in CJR to assess whether the eligible clinicians are meaningful users of CEHRT or count a hospital's participation in the EHR Incentive Program. Another commenter suggested the following changes to CJR: (1) Make physician assumption of risk mandatory, rather than place the risk on hospitals; and (2) include medical device manufacturers in the pool of CJR collaborators.

    Response: We thank the commenters for their ideas. We considered these comments informally in developing proposed amendments to CJR (see 81 FR 50793). We will consider public comments on these proposed amendments in the separate rulemaking process for those proposed amendments.

    5. Qualifying APM Participant (QP) and Partial QP Determination

    The QP determination process is specified under section 1833(z)(2) of the Act, in which QPs are defined as those eligible clinicians who meet the specified threshold(s). We proposed a process for determining which eligible clinicians would be QPs or Partial QPs for a given payment year through their participation in Advanced APMs during a corresponding QP Performance Period. Per sections 1833(z)(2) and 1848(q)(1)(C)(ii)(I) and (II) of the Act, an eligible clinician would become a QP or Partial QP for a payment year if, at the end of the performance period, the eligible clinician is determined to be an eligible clinician in an Advanced APM Entity whose group of eligible clinicians collectively meets the threshold values for participation in an Advanced APM during the corresponding QP Performance Period and, starting in 2021, the threshold values for participation in an Other Payer Advanced APM as proposed here. We proposed to determine each year whether an eligible clinician achieved the threshold level of participation to become a QP or Partial QP during the corresponding QP Performance Period. We would make this assessment independent of QP or Partial QP determinations made in previous years, accounting for Advanced APMs that begin or end on timeframes that do not align precisely with the QP Performance Period. The following would apply to an eligible clinician whom CMS determines to be a QP for a particular year:

    • For payment years 2019-2024, the QP will receive a lump sum payment equal to 5 percent of the estimated aggregate payment amounts for Medicare Part B covered professional services for the prior year, as described in section II.F.8. of this final rule with comment period;

    • The QP will be excluded from MIPS payment adjustments, as described in section II.E.3. of this final rule with comment period; and

    • For payment years 2026 and later, payment rates under the Medicare PFS for services furnished by the eligible clinician will be updated by the 0.75 percent qualifying APM conversion factor as specified in sections 1848(d)(1)(A) and (d)(20) of the Act.

    Through the APM Entity group determination described in section II.F.5.b. of this final rule with comment period, we would identify eligible clinicians who do not meet the QP Threshold but reach the Partial QP Threshold for a year to be Partial QPs. Partial QPs would not be eligible for the 5 percent APM Incentive Payment for years from 2019 through 2024 or, beginning for 2026, the qualifying APM conversion factor. However, Partial QPs would have an opportunity to decide whether they wish to be subject to a MIPS payment adjustment, which could be positive or negative.

    The statute requires that we use two options to determine whether an eligible clinician is a QP or a Partial QP for a payment year—one is the Medicare Option and, beginning in 2021, the other is the All-Payer Combination Option. While these are the terms based on statutory language that we have chosen to use for the purposes of describing the process by which we can calculate an eligible clinician's Threshold Score, we note that the use of the word “option” does not imply that an eligible clinician will have the ability to choose between the two. We further outlined in the proposed rule our proposed process by which we will assess eligible clinicians under both options (beginning in 2021) to the extent that sufficient data is submitted to us.

    The Medicare Option focuses on participation in Advanced APMs, and we would make determinations under this option based on Medicare Part B covered professional services attributable to services furnished through an Advanced APM Entity. The Medicare Option is the only option available for QP determinations during the first 2 years of this program (payment years 2019-2020). The All-Payer Combination Option, described in section II.F.7. of this final rule with comment period, is applicable beginning in the third payment year (2021) and would allow us to make determinations based on participation in both Advanced APMs and Other Payer Advanced APMs. The All-Payer Combination Option would not replace or supersede the Medicare Option; instead, it would allow eligible clinicians to become QPs by meeting a relatively lower threshold based on Medicare Part B covered professional services through Advanced APMs and an overall threshold based on services through both Advanced APMs and Other Payer Advanced APMs. With our QP Threshold Score methodologies finalized in this rule, we generally interpret payments “through” an Advanced APM Entity to mean payments made by us for services furnished to attributed beneficiaries, who are the beneficiaries for whose costs and quality of care an Advanced APM Entity is responsible under the Advanced APM. Under section 1848(q)(1)(C)(iii) of the Act, the calculations used for Partial QP determinations are the same, but the threshold percentages to be a Partial QP for each year are lower than those required to be a QP.

    The QP and Partial QP Thresholds under the Medicare Option are shown in Tables 32 and 34 of this final rule with comment period. The QP and Partial QP Threshold values under the All-Payer Combination Option are shown in Tables 33 and 35 of this final rule with comment period. We will determine an eligible clinician's QP status for a payment year by calculating an eligible clinician's Threshold Score, and comparing the eligible clinician's Threshold Score (either based on payment amounts or patient counts) to the relevant QP Threshold or Partial QP Threshold. In addition, we discussed our proposal to make QP determinations at a group level based on an entire Advanced APM Entity in section II.F.5.b of the proposed rule (81 FR 28319-28321).

    According to section 1833(z)(2)(D) of the Act, the Secretary may base the determination of whether an eligible clinician is a QP or a Partial QP by using counts of patients in lieu of using payment amounts and using the same or similar percentage criteria as those used for the payment amount method, as the Secretary determines is appropriate. For QP and Partial QP determinations using patient count calculations, we proposed to use the percentage values displayed in Tables 34 and 35 of this final rule with comment period. The purpose of the proposed design of the Medicare patient count method is to make QP determinations accessible to entities and individuals who are clearly and significantly engaged in delivering value-based care through participation in Advanced APMs.

    By performing preliminary analyses using our proposed QP determination methodologies with historical APM data, we found that the proposed QP and Partial QP Patient Count Thresholds are similar in magnitude and trajectory to those specified in the statute for the payment-based calculations. Due to varying attribution and organizational characteristics, we anticipate that using our proposed thresholds, the method—payment amount or patient count—that results in the most favorable QP status will likely vary across different Advanced APMs and Advanced APM Entities. We believe that each eligible clinician should have every opportunity to reach the QP threshold for each year, and do not intend to limit this opportunity by preemptively selecting one method over another.

    Table 32—QP Payment Amount Thresholds—Medicare Option

    Medicare Option—Payment Amount Method (percent)

                                              Payment year
                                              2019    2020    2021    2022    2023    2024 and later
    QP Payment Amount Threshold                25      25      50      50      75      75
    Partial QP Payment Amount Threshold        20      20      40      40      50      50

    [ER04NO16.009—graphic: Table 33—QP Payment Amount Thresholds—All-Payer Combination Option]

    Table 34—QP Patient Count Thresholds—Medicare Option

    Medicare Option—Patient Count Method (percent)

                                              Payment year
                                              2019    2020    2021    2022    2023    2024 and later
    QP Patient Count Threshold                 20      20      35      35      50      50
    Partial QP Patient Count Threshold         10      10      25      25      35      35

    [ER04NO16.010—graphic: Table 35—QP Patient Count Thresholds—All-Payer Combination Option]
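    To illustrate how a Threshold Score would be compared against these values under the Medicare Option, the following is a minimal sketch (written in Python solely as an illustration and not as part of the regulatory text). It uses the values from Tables 32 and 34 above and, consistent with the discussion of the two methods earlier in this section, treats a threshold as met if either the payment amount or the patient count Threshold Score reaches it; the function name and inputs are hypothetical.

        # Payment year -> (payment amount %, patient count %); 2024 values apply to later years.
        QP_THRESHOLDS = {2019: (25, 20), 2020: (25, 20), 2021: (50, 35),
                         2022: (50, 35), 2023: (75, 50), 2024: (75, 50)}
        PARTIAL_QP_THRESHOLDS = {2019: (20, 10), 2020: (20, 10), 2021: (40, 25),
                                 2022: (40, 25), 2023: (50, 35), 2024: (50, 35)}

        def medicare_option_status(payment_year, payment_score, patient_score):
            year = min(payment_year, 2024)  # thresholds plateau at the 2024 values
            qp_pay, qp_pat = QP_THRESHOLDS[year]
            partial_pay, partial_pat = PARTIAL_QP_THRESHOLDS[year]
            if payment_score >= qp_pay or patient_score >= qp_pat:
                return "QP"
            if payment_score >= partial_pay or patient_score >= partial_pat:
                return "Partial QP"
            return "Neither"

        # Example: a 2021 group Threshold Score of 42 percent of payments and 30 percent
        # of patients meets the Partial QP thresholds but not the QP thresholds.
        print(medicare_option_status(2021, 42, 30))  # "Partial QP"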

    We solicited comment on the proposed QP Patient Count Threshold and Partial QP Patient Count Threshold percentage values for both the Medicare Option and the All-Payer Combination Option.

    The following is a summary of the comments we received regarding our proposed QP Patient Count Thresholds and Partial QP Patient Count Thresholds.

    Comment: A few commenters did not support the proposed QP Patient Count Thresholds because they are lower than the corresponding QP Payment Amount Thresholds. They stated that this would increase the number of QPs in the absence of a strong connection between performance and reward.

    Response: We believe that what appear to be the lower QP Patient Count Thresholds actually represent a parallel to the QP Payment Amount Thresholds and that both reflect increasing rigor over time. We believe the QP Patient Count and Payment Amount Thresholds represent the same overall level of rigor by taking into account factors that cause the payment amount and patient count Threshold Scores to vary for an Advanced APM Entity group. These factors include, in addition to the obvious patient counts or payment amounts, characteristics of the markets in which APM Entities operate, the APMs' attribution methodologies, and the participation of different types of eligible clinicians such as specialists and non-physician practitioners. In addition to excluding payment amounts and patient counts that are categorically impossible to be in the numerator from the denominator of Threshold Score calculations, we believe that the thresholds (payment amount and patient count) should have the same overall level of rigor in order to effectuate the intent of the law to have thresholds that reward committed participation in Advanced APMs. Regarding the concern that a lower QP Patient Count Threshold would increase the number of eligible clinicians who are QPs without a connection between performance and reward, we believe that the Advanced APMs themselves are the drivers of cost and quality performance through their unique incentive designs. The QP thresholds are not replacements for those performance measurements in Advanced APMs. However, we believe that having a sufficient amount of payments or patients flowing through an Advanced APM contributes to ensuring eligible clinicians have a meaningful incentive to deliver high-value care across their entire practice. We also do not believe that we should aim to produce a particular number of QPs by calibrating the QP Patient Count Threshold. We want the QP thresholds to be meaningful and attainable independent of how many eligible clinicians ultimately become QPs.

    Comment: Many commenters supported the proposed QP Patient Count Thresholds, although some expressed a degree of concern about the difficulty of meeting the higher percentage thresholds we proposed for future performance periods.

    Response: We thank the commenters for their support. We believe that the higher thresholds in future years will be challenging but attainable for eligible clinicians in Advanced APMs. We also believe it is appropriate for increases in the QP Patient Count Threshold over the next several performance periods to parallel those for the QP Payment Amount Threshold.

    Comment: Some commenters stated their belief that in order to become QPs, participants in Advanced APMs should be held to a high performance standard—for instance, demonstrated cost and quality improvements in the Advanced APM—that increases over time. Conversely, other commenters believe that becoming a QP should be based upon participation in Advanced APMs and not on the actual performance within the Advanced APMs.

    Response: We thank commenters for their input on how to achieve QP status. We do not believe that we have the legal authority to tie QP status to performance within the Advanced APMs. The statute specifies that becoming a QP is based on reaching the QP thresholds, which are based on the percentage of payments or patients provided services through an Advanced APM, not on other performance metrics such as cost and quality.

    Comment: Many commenters stated that the QP thresholds—both payment amount and patient count—were too high, especially for certain types of Advanced APM Entities that have high ratios of specialists or act as referral centers, resulting in substantial amounts of care delivered to non-attributed beneficiaries. Some commenters stated that if such Advanced APM Entities cannot meet the QP thresholds, we would essentially be discouraging participation and penalizing them for fulfilling their missions of treating a wide range of beneficiaries and for utilizing their expertise as broadly as possible. Therefore, several commenters suggested that CMS further reduce the QP thresholds, both payment amount and patient count, to ensure participation is appropriately incentivized. Other commenters suggested that the QP thresholds be reduced differentially depending on the Advanced APM in order to tailor the thresholds to the particular context of an Advanced APM. Some commenters requested that we monitor the issue in the early years of implementation so that we can adjust our thresholds or methodologies for the later years if necessary.

    Response: We thank the commenters for their thoughts on the QP thresholds. First, we reiterate that the payment amount thresholds are set by the statute and that we do not have the authority to change them in this final rule. Second, based on our preliminary analyses of historical participation in APMs, we believe that QP thresholds in the first years under both the payment amount and patient count thresholds are highly attainable by Advanced APM participants. We will closely monitor the results and consider whether the finalized patient count thresholds accurately represent participants' level of commitment to Advanced APMs in a manner similar to the payment amount thresholds. We understand that there may be some natural differences in Threshold Scores depending on the characteristics of a particular Advanced APM or its participants, but we believe the statute contemplates a single QP threshold for each performance period, and that it is preferable to have a single, simple set of QP thresholds applicable to all Advanced APM participants. We believe our proposed set of QP Patient Count Thresholds adhere to the statutory directive that we use percentage criteria for the QP Patient Count Thresholds that are similar to those for the QP Payment Amount Threshold.

    After considering the public comments, we are finalizing the QP Patient Count Thresholds and Partial QP Patient Count Thresholds as proposed, and we are finalizing the QP Payment Amount Threshold and Partial QP Payment Amount Thresholds as specified in statute.

    We proposed that, beginning with payment year 2021, we would conduct the QP determination sequentially so that the Medicare Option is applied before the All-Payer Combination Option. We proposed to apply the All-Payer Combination Option only to an Advanced APM Entity group of eligible clinicians or eligible clinicians who do not meet either the QP Payment Amount or Patient Count Threshold under the Medicare Option but who do meet the lower Medicare threshold for the All-Payer Combination Option. This process is illustrated in Figures C and D of this final rule with comment period, which show that the first assessment is whether the Medicare QP Threshold has been met under either the Medicare Option or the All-Payer Combination Option.

    Because in addition to being a standalone path to QP status, the Medicare Option (either based on payment amounts or patient counts) is also a component of the All-Payer Combination Option, and because all eligible clinicians must reach at least a minimum Threshold Score through Advanced APMs to be QPs, we believe that this sequential approach streamlines the analytic and operational requirements to make QP determinations under the All-Payer Combination Option. Figure C illustrates the proposed process for making QP determinations under the Medicare Option for 2019 and 2020. Figure D illustrates the process proposed for making QP determinations under both the Medicare and All-Payer Combination Options for payment years 2021-2024. Figure E provides an example of the proposed process for making QP determinations in payment years 2023-2024. Figures C, D, and E only illustrate the payment amount method, but a similar process would apply for the patient count method.

    [ER04NO16.011—graphic: Figure C—QP determination process under the Medicare Option, payment years 2019 and 2020]

    [ER04NO16.012—graphic: Figure D—QP determination process under the Medicare Option and All-Payer Combination Option, payment years 2021-2024]
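    The sequential process illustrated in Figures C and D can be summarized in the following minimal sketch (written in Python solely as an illustration and not as part of the regulatory text); the threshold parameters are hypothetical inputs rather than the statutory values, and the scores may be computed under either the payment amount or the patient count method.

        def sequential_qp_determination(medicare_score, all_payer_score,
                                        medicare_qp_threshold,
                                        all_payer_qp_threshold,
                                        medicare_minimum_for_all_payer):
            # Step 1: apply the Medicare Option.
            if medicare_score >= medicare_qp_threshold:
                return "QP under the Medicare Option"
            # Step 2: apply the All-Payer Combination Option only when the lower
            # Medicare threshold for that option is met.
            if (medicare_score >= medicare_minimum_for_all_payer
                    and all_payer_score >= all_payer_qp_threshold):
                return "QP under the All-Payer Combination Option"
            return "Not a QP under either option"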

    The following is a summary of the comments we received regarding our proposal to assess Advanced APM Entities sequentially under the Medicare Option and, only if necessary, under the All-Payer Combination Option.

    Comment: A few commenters expressed support for the sequential determination of QPs and Partial QPs in the Medicare Option and then the All-Payer Combination Option.

    Response: We thank the commenters for their support.

    We are finalizing the policy as proposed. Beginning with payment year 2021, we will conduct the QP determination sequentially so that the Medicare Option is applied before the All-Payer Combination Option. We will apply the All-Payer Combination Option only to an Advanced APM Entity group of eligible clinicians who do not meet either the QP Payment Amount or Patient Count Thresholds under the Medicare Option but who do meet the lower Medicare threshold for the All-Payer Combination Option.

    The following is a summary of comments regarding the QP determination process generally.

    Comment: A few commenters expressed that the QP determination is too complex and that clinicians will not understand what is required to attain QP status. They recommended that we establish a more transparent and simple approach. Other commenters suggested that any level of participation in an Advanced APM should suffice for receiving the APM Incentive Payment. One commenter requested that CMS be more flexible in granting QP status.

    Response: We are required by statute to apply payment amount or patient count thresholds in order to identify which eligible clinicians receive the APM Incentive Payment and are excluded from MIPS adjustments. We understand that this is a new process with certain inherent complexities, but we believe that in our proposed policies we have balanced the interests of simplicity and the need to accurately apply standards to an increasingly diverse array of Advanced APMs now and in the future. We will be providing education and technical assistance to help eligible clinicians understand the requirements to attain QP status.

    Comment: Some commenters believe that the QP determination process discourages participation in Advanced APMs due to uncertainty about their Threshold Score results. Similarly, one commenter suggested that those eligible clinicians and entities that have already invested heavily and currently participate in Advanced APMs should have an easier path to QP determination than those who are new participants.

    Response: We take seriously any potential incentives that could work against this program's purpose of increasing Advanced APM participation. Although we do not agree that our proposed and final QP determination policies will discourage participation, we intend to provide information and preliminary assessments based on historical data to help Advanced APM participants understand what their Threshold Scores would likely be in order to mitigate uncertainty about their likely QP status.

    We disagree with the commenter that current APM participants should have an easier path to QP status than APM participants who have never previously participated in APMs. While we greatly appreciate the early adopters of Advanced APMs, we find no policy justification for making it relatively more difficult for those who have never participated in an Advanced APM to achieve QP status because, as stated above, a core purpose of this program is to increase Advanced APM participation.

    a. Group Determination and Lists

    (1) Group Determination

    The statute consistently refers to an eligible clinician throughout section 1833(z) of the Act and clearly identifies that the QP determinations are to be made for an eligible clinician, whom we identify by a unique NPI. Thus, an eligible clinician is a person who may have multiple TIN/NPI combinations but only one NPI. In section 1833(z)(3)(B) of the Act, the definition of an eligible clinician includes a group of such clinicians.

    We proposed, in general, to make the QP determination at a group level. As a result, the QP determination for the group would apply to all the individual eligible clinicians who are identified as part of an Advanced APM Entity. If that eligible clinician group's collective Threshold Score meets the relevant QP threshold, all eligible clinicians in that group would receive the same QP determination, applied to their NPI, for the relevant year. The QP determination calculations described in the proposed rule would be aggregated using data for all eligible clinicians participating in the Advanced APM Entity during the QP Performance Period.

    We believe that this policy promotes administrative simplicity and collaboration among group members rather than creating barriers. Although a beneficiary is often attributed to an APM Entity based on the services rendered by one eligible clinician, many of the eligible clinicians participating in the APM Entity may play a role in the actual diagnosis, treatment, and management of beneficiaries in the APM Entity population. Each of these individual eligible clinicians could view themselves as instrumental in providing quality care to the beneficiary in line with the objectives of the APM, regardless of whether their individual services are counted towards APM-specific attribution methods.

    An Advanced APM Entity faces the risks and rewards of participation in an Advanced APM as a single unit, and is responsible for performance metrics that are aggregated to the level of that entity. This policy is based on the premise that positive change occurs when entire organizations commit to participating in an Advanced APM and focusing on its cost and quality goals as a whole. It also mitigates situations in which individual eligible clinicians who practice together in an Advanced APM Entity receive different QP determinations and thus are treated differently for purposes of APM Incentive Payments, MIPS payment adjustments, and eventually, differential fee schedule updates under the PFS. We believe that such discrepancies could potentially lead to confusion and lack of cohesion among eligible clinicians and Advanced APM Entities and place additional burdens on eligible clinicians and organizations to track these differences. Additionally, we wish to avoid any additional burden, confusion, and operational difficulties for both eligible clinicians and CMS that would result from allowing eligible clinicians or Advanced APM Entities to elect whether to be assessed at the Advanced APM Entity level. We believe that a simple, overarching rule is preferable to adding extra variables to the already complex processes under this program.
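    The collective, group-level determination described above can be summarized in the following minimal sketch (written in Python solely as an illustration and not as part of the regulatory text); the payment fields are simplified stand-ins for the numerator and denominator amounts defined later in this section, and all names are hypothetical.

        def group_threshold_score(clinicians):
            # Aggregate payments across every eligible clinician (TIN/NPI) on the
            # Advanced APM Entity's Participation List, then compute one collective score.
            numerator = sum(c["attributed_beneficiary_payments"] for c in clinicians)
            denominator = sum(c["all_part_b_covered_payments"] for c in clinicians)
            return 100.0 * numerator / denominator if denominator else 0.0

        def apply_group_determination(clinicians, qp_threshold):
            meets_threshold = group_threshold_score(clinicians) >= qp_threshold
            # The same determination is applied to the NPI of every eligible clinician in the group.
            return {c["npi"]: meets_threshold for c in clinicians}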

    The following is a summary of the comments we received regarding our proposal to make the QP determination at the Advanced APM Entity group level.

    Comment: Most commenters expressed support for performing the Threshold Score calculations in this section at a group level defined by the Advanced APM Entity. Commenters stated that this was supportive of care coordination, organization cohesiveness, and the different clinician types supporting an Advanced APM Entity regardless of whether or not their services are tied directly to attribution. Some commenters were supportive but cautioned that this approach might be difficult to apply in certain Advanced APMs with Advanced APM Entities that have partial TINs or span multiple TINs.

    Response: We thank the commenters for their support of this approach and agree that it aligns with the goals of Advanced APMs. We believe that this accommodates the various organizational structures across Advanced APMs because it relies upon the lists maintained under each APM and its particular rules.

    Comment: One commenter stated that the exclusionary criteria under MIPS (first year of Medicare participation and low-volume threshold) should also apply to QP eligibility.

    Response: We disagree with the commenter. Although the statute specified exclusionary criteria for MIPS, we find no statutory basis or policy rationale to exclude such eligible clinicians from QP determinations.

    Comment: A few commenters recommended that the QP determinations be made at the TIN or NPI level instead of the Advanced APM Entity level. One commenter favored TIN level assessment in order to parallel the MIPS group reporting option and enable a greater degree of accuracy in a group's financial estimates.

    Response: We appreciate the potential advantages in certain scenarios for QP determinations to be made at TIN or NPI levels, but we continue to believe that QP determination at the Advanced APM Entity group level aligns with the goals of the Advanced APMs themselves and ultimately is more beneficial for a wider range of eligible clinicians who might not have an opportunity to be QPs individually or in smaller groups. We want to reinforce the collective responsibility of an Advanced APM Entity. However, as outlined below, we finalize two exceptions for situations in which we believe it is more appropriate to make the QP determination at the individual NPI level: (1) For individuals participating in multiple Advanced APM Entities, none of which meet the QP threshold as a group, and (2) for eligible clinicians on an Affiliated Practitioner List when that list is used for the QP determination because there are no eligible clinicians on a Participation List for the Advanced APM Entity. For the former exception, we believe that participation in multiple Advanced APMs demonstrates particular commitment to Advanced APMs. We believe it will be rare that all of an eligible clinician's multiple Advanced APM Entities would fail to meet the QP thresholds, but in such cases, the Threshold Scores of those Advanced APM Entities may not be indicative of the degree to which the eligible clinician has dedicated his or her practice to Advanced APMs. For the latter exception, eligible clinicians on an Affiliated Practitioner List, particularly those in episode payment models, do not necessarily have the same organizational relationship with one another as eligible clinicians who are on a Participation List. Unlike APM Entities that are defined as a group of eligible clinicians, affiliated practitioners may have no common connection to each other aside from their mutual relationship with a facility.

    Comment: One commenter did not support the proposal to apply QP status to an eligible clinician's NPI rather than the TIN/NPI combination associated with an Advanced APM Entity.

    Response: We disagree with the commenter and believe that applying QP status at the TIN/NPI level instead of at the NPI level as proposed would do a disservice to QPs. An eligible clinician identified by an NPI may have reassigned billing to multiple TINs, resulting in multiple TIN/NPI combinations being associated with one eligible clinician (NPI). If QP status were only applied to one of an eligible clinician's multiple TIN/NPI combinations, an eligible clinician who is a QP for only one TIN/NPI combination might still have to report under MIPS for another TIN/NPI combination. Further, under that approach, the APM Incentive Payment would be based on only a fraction of the eligible clinician's covered professional services instead of, as we believe is the most logical reading of the statute, all those services furnished by the individual eligible clinician, as represented by an NPI. Therefore, we do not believe that applying QP status only to a specific TIN/NPI combination is supportive of the program's goals to reward individuals for commitment to Advanced APM participation.

    Except as explained further below, we are finalizing the proposed policy to make QP determinations collectively using the group of eligible clinicians in an Advanced APM Entity. We are finalizing two exceptions to this policy. First, if the eligible clinicians are identified on an Affiliated Practitioner List rather than a Participation List, as described in this section below, we will perform the QP determination individually for each eligible clinician on the Affiliated Practitioner List. We believe that eligible clinicians on Affiliated Practitioner Lists are unlike eligible clinicians on Participation Lists because, although they may have similar relationships with the Advanced APM Entity, they may not have any relationship with one another and do not represent a single organization unified in APM-related goals. Therefore, we believe considering these eligible clinicians individually is the most appropriate approach. We finalize the other exception regarding eligible clinicians participating in multiple Advanced APMs in section II.F.5.a.3. of this final rule with comment period.

    We understand that, as with any group assessment, there will be some situations in which individual Threshold Scores would differ from group Threshold Scores if assessed separately. This could lead to some eligible clinicians becoming QPs when they would not have met the QP Threshold individually (a “free-rider” scenario) or, conversely, some eligible clinicians not becoming QPs within an Advanced APM Entity when they might have qualified individually (a dilution scenario). We believe that through the methodology we are finalizing for QP determinations in this final rule, the magnitude of such discrepancies will be relatively small compared to the value of maintaining Advanced APM Entity cohesion.

    (2) Groups Used for QP Determination

    We proposed that the group of eligible clinicians used for a collective QP determination would consist of all the eligible clinicians participating in an Advanced APM Entity during a QP Performance Period. This would be defined by an Advanced APM Entity's Participation List provided to CMS. We proposed that the Participation List for each Advanced APM Entity would be compiled from CMS-maintained lists that identify each eligible clinician by a unique TIN/NPI combination attached to the identifier of the Advanced APM Entity.

    We proposed two exceptions to this rule. One exception is for Advanced APMs that do not identify eligible clinicians on a Participation List. In certain Advanced APMs, a Participation List may not include eligible clinicians. For example, in an APM where all Advanced APM Entities are hospitals, the Advanced APM Entity may not have eligible clinicians identified by a unique TIN/NPI combination attached to the identifier of the Advanced APM Entity on a Participation List. On the other hand, in certain Advanced APMs, an Advanced APM Entity may have a list (Affiliated Practitioner List) of other entities, including eligible clinicians, who are affiliated with and support the Advanced APM Entity in its participation in the Advanced APM but are not on the Participation List. For example, an Affiliated Practitioner List comprised of gainsharers under an APM might include eligible clinicians whereas a Participation List may only include hospitals.

    Where there is a Participation List that can be used to identify eligible clinicians, we proposed that it be the only list that is considered for the QP determination. We proposed that for Advanced APMs where the Participation List does not identify eligible clinicians, but there is an Affiliated Practitioner List of eligible clinicians who have a contractual relationship with the Advanced APM Entity based at least in part on supporting the Advanced APM Entity's quality or cost goals under the Advanced APM, we would use the eligible clinicians on the Affiliated Practitioner List for purposes of the QP determination. Where there is both a Participation List and an Affiliated Practitioner List that can be used to identify eligible clinicians under an Advanced APM, we proposed to use only the Participation List for purposes of the QP determination.

    This proposed policy was developed to capture the group or groups of eligible clinicians who are the most closely associated with the performance of the Advanced APM Entity under an Advanced APM and to recognize their role in supporting the Advanced APM Entity. We believe this policy provides for flexibility in the design of Advanced APMs while providing the APM Incentive Payment to those eligible clinicians who are the most engaged in the Advanced APM.

    We solicited comment on our proposals to define the eligible clinician group for QP determination based on the Participation List and the exception to use the Affiliated Practitioner List for Advanced APMs in which there are no eligible clinicians on the Participation List. We also solicited comment on whether to limit the proposed policy to the Medicare Option, as Affiliated Practitioners may be less likely to support the Advanced APM Entity in Other Payer Advanced APMs and may be more difficult for us to distinguish based on information submitted to CMS by Advanced APM Entities. Because there may be Advanced APMs in the future that have multiple lists of Affiliated Practitioners, we sought comment on approaches for grouping those separate lists for purposes of the QP determination.

    The following is a summary of the comments we received regarding our proposals pertaining to defining the eligible clinician groups for QP determination.

    Comment: Several commenters requested clarification on which list, the Participation List or Affiliated Practitioner List, would be used when an Advanced APM has both. One commenter requested clarification of the definition of the Participation List and another commenter requested clarification regarding the definition of Affiliated Practitioner, specifically if the definition varies by Advanced APM. Several commenters recommended that when an Advanced APM has both a Participation List and an Affiliated Practitioner List, the lists should be reconciled in order to include a broader group of eligible clinicians for purposes of the QP determination. Some commenters supported the distinction between participants on a Participation List and Affiliated Practitioners on an Affiliated Practitioner List for purposes of the QP determination.

    A few commenters made specific comments on how the proposed policy relates to episode payment models. Some commenters suggested that if BPCI or CJR become Advanced APMs, CMS should accept a hospital's Affiliated Practitioner List for the QP determination. A commenter suggested that CMS create a process for APM Entities in episode payment models to report their Affiliated Practitioners out of concern that ACOs will exclude specialists so that their primary care physicians will as a group be QPs.

    Response: A Participation List is a CMS-maintained list that includes the most central participants in an APM. Affiliated Practitioners are eligible clinicians who are more loosely affiliated with an Advanced APM Entity than those on a Participation List, and have a contractual relationship with the Advanced APM Entity based at least in part on supporting the Advanced APM Entity's quality or cost goals under the Advanced APM. The definitions of Participation List and Affiliated Practitioner List are located at § 414.1305. If the terms of an Advanced APM do not require a Participation List to identify eligible clinicians but do allow for eligible clinicians to be identified on an Affiliated Practitioner List, we would use the Affiliated Practitioner List for purposes of the QP determination. If an Advanced APM has both a Participation List and an Affiliated Practitioner List, we will only look at the Participation List for purposes of the QP determination, with the following exception.

    In response to the comment requesting that we identify Affiliated Practitioners for the QP determination in BPCI, we are finalizing an exception that would allow for the appropriate identification of eligible clinicians in APMs that, like BPCI, have multiple types of participating APM Entities. Under this exception, we will use either the Participation List or the Affiliated Practitioner List depending on the type of APM Entity. This exception applies to Advanced APMs, such as some episode payment models, in which different types of APM Entities participate and some Advanced APM Entities may identify eligible clinicians on a Participation List, and others may have only an Affiliated Practitioner List. For these models, we will identify the eligible clinicians for QP determinations based on the composition of the Advanced APM Entity instead of at the Advanced APM level. Specifically, for these episode payment model Advanced APMs, we will determine which eligible clinicians will be included in the QP determination as follows: (1) For Advanced APM Entities that include and identify eligible clinicians on a Participation List, that Participation List will be used to define the Advanced APM Entity group, regardless of whether or not there is also an Affiliated Practitioner List or other list of eligible clinicians, and we will make QP determinations at the APM Entity group level; (2) for Advanced APM Entities that do not include and identify eligible clinicians on a Participation List and there is an Affiliated Practitioner List that identifies eligible clinicians, that Affiliated Practitioner List will be used to identify the eligible clinicians for purposes of QP determination, and those eligible clinicians will be assessed individually. The structure of BPCI serves as a useful example to show how we would apply this policy. In a model like BPCI, when the APM Entity is a physician group practice that identifies eligible clinicians on a Participation List, we would use that list for purposes of the QP determination, even if there is also an Affiliated Practitioner List. When the APM Entity is a hospital that does not identify eligible clinicians on a Participation List, but it identifies eligible clinicians on an Affiliated Practitioner List, we would use that list for purposes of identifying eligible clinicians for the QP determination, and those eligible clinicians would be evaluated individually. While this policy is responsive to comments about APMs like BPCI, this policy does not change the design of the models within BPCI. We are also considering implementing a new voluntary APM that is an episode payment model for CY 2018 that could meet the criteria for this exception (81 FR 50793).
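    For illustration only, the following minimal sketch (written in Python, using hypothetical data structures and function names that are not part of CMS systems or this rule) outlines the list-selection logic described above: where a Participation List identifies eligible clinicians, it defines the group for a collective QP determination; otherwise, eligible clinicians identified on an Affiliated Practitioner List are assessed individually.

def select_clinicians_for_qp_determination(participation_list, affiliated_practitioner_list):
    """Hypothetical sketch: return the eligible clinicians considered for the
    QP determination for one Advanced APM Entity and the assessment level.

    Each argument is a list of eligible clinician identifiers (for example,
    TIN/NPI combinations); either list may be empty.
    """
    if participation_list:
        # The Participation List identifies eligible clinicians: assess the
        # Advanced APM Entity group collectively, regardless of whether an
        # Affiliated Practitioner List also exists.
        return participation_list, "group"
    if affiliated_practitioner_list:
        # No eligible clinicians on a Participation List: assess the eligible
        # clinicians on the Affiliated Practitioner List individually.
        return affiliated_practitioner_list, "individual"
    # No eligible clinicians identified on either list for this entity.
    return [], None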

    We believe this exception to making QP determinations at a group level appropriately identifies the eligible clinicians with the closest supporting role to the Advanced APM Entity in episode payment models. We would assess affiliated practitioners individually because affiliated practitioners do not necessarily have the same organizational relationship with one another as eligible clinicians on a Participation List have with one another. Unlike APM Entities that are defined as a group of eligible clinicians, affiliated practitioners may have no common connection to each other aside from their mutual relationship with a facility. Therefore, we believe that the rationale for group assessments does not apply to these individual eligible clinicians.

    Comment: Several commenters requested that CMS develop a way to identify individual eligible clinicians who are employed by an Advanced APM Entity in an episode payment model and involved in episodes of care or require that the Advanced APM Entity provide CMS with a list of such clinicians. One commenter recommended that in addition to using Participation Lists and Affiliated Practitioner Lists for purposes of QP determination, CMS should also use all eligible clinicians under a single TIN as a group of eligible clinicians, regardless of inclusion on one of these lists. Another commenter suggested that it may be preferable to only count eligible clinicians who can be used for beneficiary attribution in the Advanced APM. Another commenter suggested that eligible clinicians working with a partner teaching hospital that is an Advanced APM Entity should receive credit for participation in the Advanced APM, even if they do not have a formal arrangement with the Advanced APM Entity.

    Response: We believe that the policy to use Participation Lists and Affiliated Practitioner Lists, when applicable, for purposes of the QP determination, captures the eligible clinicians who are most closely associated with the performance of the Advanced APM Entity under the Advanced APM. We do not believe that including clinicians for whom we have no other record of participation in an Advanced APM would be an accurate and equitable representation of the eligible clinicians that could become QPs through an Advanced APM. CMS defines an eligible clinician's role in an APM through his or her inclusion on specific CMS-maintained lists defined in each APM's terms and conditions or regulation or law, and we cannot verify relationships for which we do not maintain records. Further, we do not believe it would be useful to merge the Participation Lists with Affiliated Practitioner Lists to identify eligible clinicians for QP determinations because the clinicians on these lists have different relationships with an Advanced APM Entity.

    The policy we are finalizing addresses which eligible clinicians, those on a Participation List or those on an Affiliated Practitioner List, will be considered for the QP determination. We believe that we should only capture those eligible clinicians affirmatively identified as the most central participants supporting an Advanced APM Entity for purposes of the QP determination. Many APMs have multiple “tiers” of eligible clinicians who may play different roles for an APM Entity, and the policy we are finalizing reflects those tiers so that only the eligible clinicians most responsible for the requirements of the Advanced APM relative to other tiers of eligible clinicians will be considered the central participants of an Advanced APM Entity. Where Advanced APM Entities have eligible clinicians identified on a Participation List, those eligible clinicians are the most central participants. Where Advanced APM Entities do not have eligible clinicians identified on a Participation List, but they do have eligible clinicians on an Affiliated Practitioner List, those eligible clinicians are the most central participants.

    Comment: One commenter suggested that CMS provide some protections, flexibility, or an appeals process for those eligible clinicians who find themselves to be on what they believe to be the wrong list, especially during the first few years of adjusting to the Quality Payment Program.

    Response: We understand that ensuring list accuracy can be a difficult process for organizations and clinicians to manage. List management takes place with the APMs themselves, and we are not developing any universal standards for how each APM collects, updates, and maintains its lists. However, because of the important implications in list management, we will closely monitor this issue in the first years of the Quality Payment Program, and on an ongoing basis.

    We are finalizing the proposed policy with certain modifications, as follows:

    • For Advanced APMs for which there is a Participation List that identifies eligible clinicians, that Participation List will be used to define the Advanced APM Entity group, regardless of whether there is also an Affiliated Practitioner List or other list of eligible clinicians associated with the Advanced APM. QP determinations will be made at the Advanced APM Entity group level.

    • For Advanced APMs for which there is not a Participation List that identifies eligible clinicians and there is an Affiliated Practitioner List that identifies eligible clinicians, that Affiliated Practitioner List will be used to identify the eligible clinicians for purposes of QP determinations. Eligible clinicians on an Affiliated Practitioner List will be assessed individually, unlike eligible clinicians on a Participation List who are assessed as a group.

    • For Advanced APMs, such as episode payment models, in which there are some Advanced APM Entities that include eligible clinicians on a Participation List and other Advanced APM Entities that identify eligible clinicians only on an Affiliated Practitioner List, we will identify eligible clinicians for QP determinations based on the composition of the Advanced APM Entity: (1) For Advanced APM Entities that include and identify eligible clinicians on a Participation List, that Participation List will be used to define the Advanced APM Entity group, regardless of whether or not there is also an Affiliated Practitioner List or other list of eligible clinicians, and those eligible clinicians will be assessed as a group; (2) for Advanced APM Entities that do not include and identify eligible clinicians on a Participation List and there is an Affiliated Practitioner List that identifies eligible clinicians, that Affiliated Practitioner List will be used to identify the eligible clinicians for purposes of QP determinations, and those eligible clinicians will be assessed individually.

    As discussed in our response to comments above, we believe the relationship between eligible clinicians and APM Entities in APMs such as episode payment models can vary and that eligible clinicians on an Affiliated Practitioner List can be engaged in the goals of the Advanced APM in a similar manner as eligible clinicians on a Participation List depending on the characteristics of the Advanced APM Entity.

    We are finalizing these policies on the identification of eligible clinicians for purposes of QP determinations only for the Medicare Option. We did not receive public comment on whether to extend this policy to the All-Payer Combination Option and believe it is prudent to first apply this policy in the Medicare Option before considering whether to apply it in the All-Payer Combination Option through future rulemaking.

    (3) Exception for Participation in Multiple Advanced APMs

    We proposed an exception to making QP determinations at the group level. Some eligible clinicians may participate in multiple Advanced APMs. For instance, an eligible clinician could participate in an ACO under the Shared Saving Program and an episode payment model with another entity, both of which have been determined to be Advanced APM Entities. In such a case, we proposed the following (81 FR 28320):

    • Consistent with the general policy proposed above, if one or more of the Advanced APM Entities in which the eligible clinician participates meets the QP threshold, the eligible clinician becomes a QP.

    • If none of the Advanced APM Entities in which the eligible clinician participates meet the QP threshold, we proposed to assess the eligible clinician individually, using combined information for services associated with that individual's NPI and furnished through all of the eligible clinician's Advanced APM Entities during the QP Performance Period. We would adjust to ensure that services are not double-counted (for example, a surgeon participating in an episode payment model, in which some of the procedures are performed on patients affiliated with an ACO that the surgeon is also a part of, would only have payments or patients from those procedures count once toward the QP determination).

    We believe that this policy maintains the general simplicity of the Advanced APM Entity-level QP determination while acknowledging individual eligible clinicians who are participating in multiple advanced initiatives that support CMS goals. This also complements the policy described under the All-Payer Combination Option for QP determinations in which an eligible clinician may submit information on participation in Other Payer Advanced APMs to be assessed as an individual under that option in the event that the APM Entity or Entities in which the eligible clinician participates do not submit sufficient information.

    We solicited comment on the proposal for exceptions to making QP determinations at the Advanced APM Entity level. In particular, we solicited comment on the merits of making all determinations at the individual eligible clinician level versus through some alternative grouping methodology. We also solicited comment on our proposal to assess an eligible clinician who participates in multiple Advanced APM Entities, and on any other potential exceptions to the proposed general policy to make QP determinations at the Advanced APM Entity level.

    The following is a summary of the comments we received regarding our proposal to assess an eligible clinician individually for purposes of a QP determination in the event that the eligible clinician participates in multiple Advanced APM Entities, none of which meet the QP thresholds as a group.

    Comment: Many commenters supported our proposal to evaluate the individual eligible clinician participating in multiple Advanced APMs if the individual is not determined to be a QP based on participation in any single Advanced APM. Some commenters suggested that for at least the first year, we allow any individuals or TINs within an Advanced APM Entity to be QPs if they reach the QP threshold independent of their Advanced APM Entities in order to ensure that as many eligible clinicians become QPs as possible.

    Response: With respect to the alternative of allowing individual TINs or NPIs within an Advanced APM Entity to be assessed separately and to apply the most favorable result, we do not believe that approach would best reflect the collective participation toward shared goals that is fostered under APMs, and in particular, Advanced APMs. As with the APM Entity's performance under the APM, we believe that group-level determinations in the APM context, including QP determinations, involve collective and consistent responsibility for results. We will have instances in which eligible clinicians are assessed as a group and instances in which they are assessed as individuals, but we believe individual evaluation should be used only to address exceptional circumstances. The approach suggested by the commenters would effectively apply an individual assessment with a floor determined by the group performance. Such an alternative would erode the cooperative purpose of a group determination, and we continue to believe, as stated in the proposed rule, that APM participation is focused on collective responsibility for the cost and quality of care for Medicare beneficiaries.

    Comment: One commenter requested clarification on how we would average or weight participation across multiple Advanced APMs.

    Response: We appreciate the questions regarding how individuals would be assessed in the case of an eligible clinician participating in multiple Advanced APMs. Because we will make QP determinations using claims analyses, which enable us to connect services for beneficiaries to an eligible clinician's NPI, we would only need to add the numerator and denominator values together and adjust for any duplication in the numerator or denominator. The formulas would be the same as if calculated for the group but based on the individual eligible clinician's activity at the NPI level.
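    As an illustration only, the following sketch (Python, with hypothetical inputs and names that are not part of CMS systems or this rule) shows the arithmetic described in this response for an eligible clinician participating in multiple Advanced APM Entities: numerator and denominator values are combined across the entities, any item that would otherwise be counted more than once is counted only once, and the Threshold Score formula is then applied at the NPI level.

def individual_threshold_score(entity_data):
    """Hypothetical sketch. entity_data is a list with one dict per Advanced
    APM Entity; each dict maps a claim or patient identifier to the value
    (payment amount, or a patient count of 1) associated with the clinician's NPI.
    """
    numerator_items = {}    # items attributable through the Advanced APM(s)
    denominator_items = {}  # all items for the NPI during the period
    for entity in entity_data:
        # Updating a dict keyed by item identifier de-duplicates: an item that
        # appears for more than one entity (for example, a procedure in an
        # episode furnished to a beneficiary also attributed to the
        # clinician's ACO) is counted only once for the individual.
        numerator_items.update(entity["numerator"])
        denominator_items.update(entity["denominator"])
    numerator = sum(numerator_items.values())
    denominator = sum(denominator_items.values())
    return numerator / denominator if denominator else 0.0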

    We are finalizing the policy as proposed. If an eligible clinician participates in multiple Advanced APM Entities during a QP Performance Period, and is not determined to be a QP based on participation in any of those Advanced APM Entities, then we will assess the eligible clinician individually using combined information for services associated with that individual's NPI and furnished through all the eligible clinician's Advanced APM Entities during the QP Performance Period. This includes all Advanced APM Entities for which the eligible clinician is represented on either a Participation List or Affiliated Practitioner List that CMS uses for QP determinations in accordance with the identification policies described in this section of the final rule with comment period. We will make adjustments to ensure that patients and payments for services that may be counted in the QP calculations for multiple Advanced APM Entities (for example, payments for services furnished to a beneficiary attributed to an ACO that are also part of an episode in an episode payment model) are not double-counted for the individual.

    We believe that this policy maintains the general principles behind Advanced APM Entity-level QP determinations while acknowledging the broader commitment of individual eligible clinicians who are participating in multiple Advanced APMs. We believe considering these eligible clinicians individually is the most reasonable approach to capturing the multiple potential permutations of participation in Advanced APMs and providing eligible clinicians an equitable opportunity to become a QP.

    (4) Timing of Group Identification for Eligible Clinicians

    We proposed that we would identify the eligible clinician group for each Advanced APM Entity at a specified point in time for each QP Performance Period. We proposed that this point-in-time assessment would occur on December 31 of each QP Performance Period.

    We solicited comments on our proposal to define the Advanced APM Entity group based on the Participation List for each Advanced APM Entity at a specified point in time during the QP Performance Period. We also solicited comment on the proposed date of the Participation List assessment, and whether this date should be earlier in the QP Performance Period or should instead be a range of time (81 FR 28320).

    The following is a summary of the comments we received regarding our proposal to define the Advanced APM Entity group for purposes of the QP determination by taking a point-in-time snapshot of eligible clinicians in an Advanced APM Entity according to Participation Lists on December 31 of the QP Performance Period.

    Comment: Many commenters expressed concerns with the proposed policy of a December 31 snapshot of Participation Lists in order to determine the Advanced APM Entity group for QP determinations and, if applicable, MIPS reporting and scoring under the APM scoring standard. Some commenters stated that a December 31 snapshot captures APMs that start or allow additions to Participation Lists during the calendar year, but for APMs such as the Next Generation ACO Model, in which Participation Lists are set at the beginning of the year and can only be reduced during the year, December 31 does not necessarily capture the entity as it operates throughout the year. Similarly, commenters noted that the proposed policy does not incentivize participation during the early part of a year and is not fair to eligible clinicians who may have been part of the Advanced APM Entity for large portions of the QP Performance Period but not on the Participation List on the last day of the year. Finally, many commenters stressed that not being on an Advanced APM Entity's Participation List means the eligible clinician must make arrangements for MIPS reporting for the services furnished under the TIN/NPI combination associated with the Advanced APM Entity, and that learning this late in the year would leave very little time to make the preparations necessary to perform well under MIPS.

    Response: We agree with commenters that a single snapshot on December 31 or the last day of the QP Performance Period may create some potentially inequitable or burdensome situations. Therefore, as described in this section, we are finalizing a policy that moves away from a single, end of year snapshot to instead use several snapshots through the year that we believe better represent eligible clinician participation in Advanced APMs over the course of a year for purposes of QP determinations.

    Comment: A few commenters suggested that we use a range of time during which presence on a Participation List would be sufficient to be included in the group. Similarly, some commenters suggested that presence on the list for a certain number of consecutive days (for example, 60 days on the Participation List) should result in inclusion in the group.

    Response: We thank the commenters for the suggestion. These ideas both have merit, and we considered them carefully. We are finalizing a different policy because, under these suggested options, we believe APM list management would be more difficult. We want the list used for purposes of an APM and for purposes of the Quality Payment Program to be as consistent as possible so that APM Entities may easily understand how list changes have impacts across programs. Under the commenters' proposals, at any single point in time, there would likely be inconsistencies between the APM Entity's Participation List and the list we would use for QP determinations, which would be very challenging to explain and to manage for both CMS and Advanced APM Entities. Specifically with regard to the proposal to have a range of time during which anyone on the Participation List would be included in the APM Entity group, many APM Entities make changes to their Participation Lists at certain times of the year, especially during the first quarter, and we do not want to include eligible clinicians in the APM Entity group if they were only on a list fleetingly during a period of administrative transitions. We believe that the minimum-length-of-time proposal would ensure that eligible clinicians participate sufficiently before being included in an APM Entity group, but APM Entities would not have the ability on a given date to know which eligible clinicians would be included in the group because some may leave at staggered points in time prior to participating for the necessary number of days.

    Comment: Several commenters suggested that we allow each Advanced APM Entity to submit a list, which may vary from the one used under the Advanced APM, for purposes of the QP determination in order to accurately reflect what the entity believes is the most recent and salient representation of its group of eligible clinicians participating in the Advanced APM Entity. Other commenters suggested that each Advanced APM select its own snapshot based on its particular operations or change its Participation List rules to allow adjustments in preparation for this snapshot date. On the other hand, several commenters expressed a desire for as much automation as possible, such as through claims analyses and use of PECOS data, to avoid administratively burdensome list submission and avoid potential list inaccuracies and inconsistencies.

    Response: We thank the commenters for their suggestions. We understand that allowing a distinct list submission solely for QP determinations purposes or selecting a snapshot date would maximize the control an Advanced APM Entity has over the group's Threshold Score. However, we believe these options would be ripe for potential gaming and could result in inconsistencies between the group of eligible clinicians actually responsible for performance under the Advanced APM according to CMS records and the list the Advanced APM might identify for the QP determination. We believe it is most appropriate to align QP determinations with records of participation under the Advanced APMs themselves. Although changing how any particular APM manages participation by eligible clinicians and APM Entities is beyond the scope of this final rule with comment period, we understand how certain changes could be made in Advanced APMs to help Advanced APM Entities sync up with the Quality Payment Program goals. Finally, we agree with commenters in principle that automation of the identification process for eligible clinicians in Advanced APMs is a valuable goal. We do not want to create additional administrative tasks, such as maintaining and submitting a unique Participation List, when we can use available information in CMS systems.

    Comment: Many commenters expressed support for a December 31 snapshot date because the fluid nature of participation during a year may result in eligible clinicians joining and leaving an Advanced APM Entity in relatively short periods of time during the year.

    Response: We thank the commenters for their support of the proposed policy, but because of the issues raised by other commenters, we are finalizing a policy that we believe retains many of the benefits of an end-of-year or end-of-QP Performance Period snapshot while providing greater certainty earlier in the year.

    To address the comments we received, especially the concerns regarding eligible clinicians not knowing whether they are part of an Advanced APM Entity for purposes of QP determinations until the end of the calendar year, we are finalizing a modified process for identifying the APM Entity group (individual eligible clinicians in the case of an Affiliated Practitioner List) that uses a series of three “snapshots” of an APM Entity's Participation List (or Affiliated Practitioner List) during the QP Performance Period. Each snapshot may add eligible clinicians to the APM Entity group or capture new affiliated practitioners who were not previously identified as part of the group or as individuals in the Advanced APM. Once an eligible clinician is determined to be a participant in an APM Entity for the QP Performance Period at any of the three snapshots, CMS will consider that eligible clinician for QP determinations as part of an APM Entity group, or as an individual, as appropriate, regardless of whether he or she is included on a Participation List or Affiliated Practitioner List in later snapshots. The first snapshot will be on March 31 of the QP Performance Period, the second snapshot will be on June 30 of the QP Performance Period, and the third snapshot will be on August 31, which will be the last day of the QP Performance Period.

    Each of these snapshots will establish and then add to the APM Entity group used for purposes of the QP determinations made for the QP Performance Period described in this section. In the event that the APM Entity participates in a MIPS APM and is not excluded from MIPS, the final APM Entity group after the third snapshot will also be the APM Entity group used for purposes of MIPS group reporting and scoring under the APM scoring standard described in section II.E.3.h. of this final rule with comment period.

    We believe that this final policy accommodates the variety of policies in different models regarding the adding or dropping of APM participants so that we capture the eligible clinicians who have meaningfully participated in an Advanced APM Entity during a QP Performance Period. Most importantly, we believe that, in combination with the final policy on the QP Performance Period, this policy allows for substantially greater certainty at an earlier point in time of an eligible clinician's status, first as a participant or affiliated practitioner in an Advanced APM or MIPS APM, and then as a QP or MIPS eligible clinician, as compared to the proposed policy. Figure F illustrates the three additive snapshots we will use to identify the participants in an APM Entity.
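    For illustration only, the additive nature of the three snapshots could be sketched as follows (Python, with a hypothetical list-lookup function that is not part of CMS systems or this rule): each snapshot can only add eligible clinicians to the APM Entity group for the QP Performance Period, and clinicians captured at an earlier snapshot remain in the group even if they do not appear on a later list.

SNAPSHOT_DATES = ["March 31", "June 30", "August 31"]

def build_apm_entity_group(list_as_of):
    """Hypothetical sketch. list_as_of(snapshot_date) returns the set of
    TIN/NPI identifiers on the entity's Participation List (or Affiliated
    Practitioner List) as of that snapshot date.
    """
    group = set()
    for snapshot_date in SNAPSHOT_DATES:
        # Set union is additive: newly listed clinicians are added, and
        # clinicians captured at an earlier snapshot are never removed.
        group |= list_as_of(snapshot_date)
    return group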

    We acknowledge that use of point-in-time snapshots may result in some eligible clinicians being captured on Participation Lists or Affiliated Practitioner Lists when they have only been on such a list for a short period of time. Although we believe that most APMs have list management rules to inhibit potential manipulation and that large numbers of additions to a Participation List may reduce an Advanced APM Entity's Threshold Score, we will monitor whether APM Entities systematically construct their lists in a manner that inappropriately affects the assessment of participation in Advanced APMs. In response, we may modify our policy through future rulemaking to address any such issues.

[Figure F: Three additive snapshots used to identify the participants in an APM Entity]

b. QP Performance Period

    According to section 1833(z)(2) of the Act, we are required to determine QP and Partial QP status based on payment amounts or patient counts during the most recent period for which data are available, which may be less than a year. We proposed that the QP Performance Period would be the full calendar year that aligns with the MIPS performance period (for instance, 2017 would be the QP Performance Period for the 2019 payment year). We believe that having a QP Performance Period that concludes 1 year and 1 day before the payment year would enable us to provide all eligible clinicians participating in Advanced APMs the best opportunity to monitor their performance through the Advanced APM and to make the most informed decision regarding whether or not to be subject to MIPS in the event that they become Partial QPs. We solicited comment on this proposal and any alternative QP Performance Period timeframes that would both enable meaningful QP assessment and ensure operational alignment with MIPS.

    The following is a summary of the comments we received regarding our proposal that the QP Performance Period would be the full calendar year 2 years prior to the payment year.

    Comment: Many commenters expressed concern that under the proposed QP Performance Period, participants in Advanced APMs would not know their QP status until after the end of the MIPS submission period. As a result, prudent Advanced APM participants would proactively report to MIPS in order to ensure that, in the event they do not reach the QP thresholds, they have an opportunity to fare well under MIPS. Most of these commenters suggested that QP determinations be made earlier so that QPs know their status in sufficient time to avoid unnecessary MIPS reporting. Whereas most of these commenters agreed generally that the determinations should be completed during the QP Performance Period in order to avoid the administrative task of reporting to MIPS, some commenters suggested that QP determinations be made as early as the spring of the QP Performance Period in order to prevent as much unnecessary MIPS-related activity as possible, such as the performance necessary for the improvement activities and advancing care information performance categories. Other commenters went further and stated that QP determinations should be completed prior to or at the very beginning of the QP Performance Period. To enable these very early determinations, commenters recognized that the calculations would have to be made using historical data from 2016 or by issuing presumptive determinations with MIPS adjustment accommodations in the event that actual results differed from those predicted. Several commenters requested that we at least mitigate the issue by providing as much preliminary data as possible so that Advanced APM participants may clearly understand their possible and likely outcomes.

    Response: Although we designed the APM scoring standard in section II.E.3.h. of this final rule with comment period to reduce the reporting burden, we agree with commenters that it is only part of the solution. We disagree with commenters who recommended using 2016 as the QP Performance Period or implied that we should use 2016 data by suggesting that we make QP determinations for 2019 at the beginning of 2017. First, such a proposal would further remove the performance timeframe from the corresponding reward; second, the purpose of a performance period is to base a determination on actual participation in Advanced APMs during that period, and the applicable “performance” to attain QP status is participation in Advanced APMs; third, a performance period of 2016 would severely restrict access to QP status because there were fewer opportunities to participate in APMs that could be considered Advanced APMs in 2016 than in 2017; and fourth, it is very important to base these determinations on robust, reliable data instead of historical abstractions or future predictions.

    That said, we agree in principle that earlier notification of QP status is optimal and would prevent wasted time and resources. Our analyses indicate that one calendar quarter's data is sufficiently reliable and consistent with full year results to make early final QP determinations using those data. Thus, we are modifying our proposed policy in this final rule, as explained more fully below, to incorporate QP determinations during the calendar year based on data from less than the full QP Performance Period. Any such QP determination made during the QP Performance Period will be considered final. QP determinations may be rescinded in the event that an Advanced APM Entity is terminated from an Advanced APM, voluntarily or involuntarily, prior to August 31 of the QP Performance Period, or in the event of eligible clinician or Advanced APM Entity program integrity violations, as described in section II.F.9. of this final rule with comment period. We also intend to provide preliminary information to eligible clinicians participating in an Advanced APM early in the QP Performance Period in order to help participants assess their likelihood of becoming a QP for a year. For the first performance year, we will calculate hypothetical Threshold Scores based on historical claims data and current attribution data that represent an approximation of QP status as if the eligible clinicians had participated in an Advanced APM in 2016.

    Comment: Many commenters suggested that CMS include a later time period for the QP Performance Period so that there is a smaller gap in time between the QP Performance Period and the payment year (for example, 2018 Advanced APM participation would determine QP status for the 2019 payment year). Commenters expressed a desire to have an opportunity following the publication of this final rule with comment period to join an Advanced APM and receive the first APM Incentive Payment. Some commenters suggested keeping the QP Performance Period as proposed but adjusting the Participation List snapshot date to January 1 of the year after the QP Performance Period in order to capture new participants in Advanced APMs for inclusion in the QP determination. Other commenters generally urged us to use 2018 Advanced APM participation to make QP determinations for the 2019 payment year.

    Response: In isolation, we agree that a QP Performance Period during the calendar year immediately prior to the payment year would provide certain advantages over one that is during the calendar year 2 years prior to the payment year. However, using 2018 Advanced APM participation information to make QP determinations for the 2019 payment year would raise several significant complications. First, we do not believe we should “double-count” performance for two different payment years because we believe the APM Incentive Payment is intended to reflect and reward Advanced APM participation for a specific, delineated period that should logically align with common APM operational functions and timelines. Crossing calendar years would lead to highly unreliable and disjointed data because Participation Lists often change substantially between calendar years, and we cannot assume that the performance of a previous year's group of participants would be replicated in a new year with different participants and different attributed beneficiaries. Second, as stated in the previous response regarding a 2016 QP Performance Period for the 2019 payment year, we believe it is paramount that we use actual Advanced APM participation information rather than proxies for participation, which would be the case for new entrants into an Advanced APM on January 1 under the commenters' proposal to move the snapshot date to January 1. We also believe that we need data from at least one calendar quarter of activity in order to consider the data reliable, so we do not believe that one day of Advanced APM participation is sufficient to reliably calculate a Threshold Score for eligible clinicians; moreover, any 2017 data for such new entrants would be derived from performance furnished before they became Advanced APM participants. Finally, the relationship between QP determinations and MIPS reporting drives the need to base determinations on an earlier timeframe and to make QP determinations earlier rather than later. At the very latest, we need to ensure that all QPs for a year are removed from the MIPS eligible clinician cohort in sufficient time for us to make the requisite budget neutrality calculations, which in turn drives the calculation of MIPS payment adjustments for a year. We are also modifying our proposals to be responsive to many commenters who want to know as early as possible whether or not they will need to report under MIPS, and if so, which groups and reporting mechanisms they will use for reporting.

    Comment: One commenter requested that the first QP Performance Period start on July 1, 2017 instead of January 1, 2017, because of the close proximity of January 1 to the publication of this final rule with comment period.

    Response: We disagree with the commenter and note that the QP determination described in this section is different in nature from MIPS. Unlike with MIPS, the QP determination requires no reporting or directed activity by Advanced APM participants beyond what is required in the Advanced APMs themselves. We believe that starting the QP Performance Period later in 2017 would actually do a disservice to Advanced APM participants because they potentially would not be able to receive due credit for their participation early in the year. We also do not believe a later start for the QP Performance Period would provide a meaningfully greater opportunity to eligible clinicians to join an Advanced APM in the event that they were not part of one at the beginning of 2017.

    We are modifying our proposals for the QP Performance Period and the timing of QP determinations. Instead of the proposed policy, we are finalizing a QP Performance Period that runs from January 1 through August 31 of the calendar year that is 2 years prior to the payment year. During that QP Performance Period, we will make QP determinations at three separate times, each of which would be a final determination for the eligible clinicians who are determined to be QPs. The QP Performance Period and the three separate QP determinations apply similarly for both the group of eligible clinicians on a Participation List and the individual eligible clinicians on an Affiliated Practitioner List.

    The first QP determination of the QP Performance Period will be made for all eligible clinicians who are identified as Advanced APM participants eligible to be QPs, either through a Participation List or Affiliated Practitioner List as described above, as of March 31 using data for that Advanced APM Entity group from January 1 through March 31 of that year. If the APM Entity group meets the QP threshold under this first assessment, then all eligible clinicians in the Advanced APM Entity group will be QPs for purposes of the respective payment year unless the Advanced APM Entity's participation in the Advanced APM is voluntarily or involuntarily terminated prior to the end of the QP Performance Period, or in the event of eligible clinician or Advanced APM Entity program integrity violations, as described in section II.F.9. of this final rule with comment period. These same procedures apply to the first QP determination made for individual eligible clinicians on an Advanced APM Entity's Affiliated Practitioner List or individual eligible clinicians in multiple Advanced APMs whose Advanced APM Entity groups did not meet the QP threshold.

    In the event that the Advanced APM Entity group did not meet the QP threshold at the first QP determination, or if the Advanced APM Entity group includes eligible clinicians who were not part of the Advanced APM Entity group at the first QP determination, we will make a second QP determination for all eligible clinicians in the Advanced APM Entity at the first QP determination plus any additional eligible clinicians who are on the Participation List as of June 30 using data for that Advanced APM Entity group from January 1 through June 30 of that QP Performance Period. If the Advanced APM Entity group meets the QP threshold, then all eligible clinicians in the Advanced APM Entity group will be QPs for the payment year unless the Advanced APM Entity's participation in the Advanced APM is voluntarily or involuntarily terminated prior to the end of the QP Performance Period, or in the event of eligible clinician or Advanced APM Entity program integrity violations, as described in section II.F.9. of this final rule with comment period. If the Advanced APM Entity group does not meet the QP threshold at the second determination but did meet the QP threshold at the first determination, CMS would not revise the QP status of the eligible clinicians who were previously determined to be QPs, but any additional eligible clinicians who were in the Advanced APM Entity group at the second determination would not be QPs for the payment year through the group determination. If an Advanced APM Entity group meets the threshold in both the first and second determination, but some eligible clinicians no longer remain on the Participation List for the second determination, those eligible clinicians will still be considered QPs. These same procedures apply to the second QP determination made for individual eligible clinicians on the Advanced APM Entity's Affiliated Practitioner List or individual eligible clinicians in multiple Advanced APMs whose Advanced APM Entity groups did not meet the QP threshold.

    In the event that the Advanced APM Entity group did not meet the QP threshold under the first or second QP determination or if the Advanced APM Entity group includes eligible clinicians who were not part of the Advanced APM Entity group at the second QP determination, we will make a third QP determination for all eligible clinicians on the Advanced APM Entity's Participation List from the first and second QP determinations plus any additional eligible clinicians who are on the Participation List as of August 31 using data for that Advanced APM Entity group from January 1 through August 31 of that QP Performance Period. If the Advanced APM Entity group meets the QP thresholds, then all eligible clinicians in the Advanced APM Entity group will be QPs for the payment year unless the Advanced APM Entity's participation in the Advanced APM is voluntarily or involuntarily terminated prior to the end of the QP Performance Period. If the Advanced APM Entity group does not meet the QP threshold at the third determination but did meet the QP threshold at a previous determination, CMS would not revise the QP status of the eligible clinicians who were previously determined to be QPs, but any additional eligible clinicians who were only in the Advanced APM Entity group at the third QP determination would not be QPs for the payment year. If an Advanced APM Entity group meets the QP threshold in both the third determination and a previous determination, but some eligible clinicians no longer remain in the Advanced APM Entity at the third determination, those eligible clinicians will still be considered QPs. These same procedures apply to the third QP determination made for individual eligible clinicians on the Advanced APM Entity's Affiliated Practitioner List or individual eligible clinicians in multiple Advanced APMs whose Advanced APM Entity groups did not meet the QP threshold.
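    For illustration only, the cumulative effect of the three QP determinations described above could be sketched as follows (Python, with hypothetical inputs that are not part of CMS systems or this rule): QP status earned at an earlier determination is retained, while eligible clinicians first captured at a later snapshot become QPs through the group only if the group meets the QP threshold at a determination that includes them.

def run_qp_determinations(determinations, qp_threshold):
    """Hypothetical sketch. determinations is an ordered list of tuples
    (group_members, threshold_score) for the March 31, June 30, and
    August 31 determinations, where group_members is the cumulative APM
    Entity group as of that snapshot.
    """
    qps = set()
    for group_members, threshold_score in determinations:
        if threshold_score >= qp_threshold:
            # Everyone in the cumulative group at this determination is a QP.
            qps |= set(group_members)
        # If the threshold is not met, previously determined QPs keep their
        # status, but clinicians first captured at this snapshot do not
        # become QPs through this group determination.
    return qps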

    For each of the three QP determinations, we will allow for 3 months' claims run-out before calculating the Threshold Scores so that the three QP determinations will be made approximately 4 months after the end of that determination time period. Therefore, the last of these three QP determinations will take place on or around January 1 of the subsequent calendar year, which is the year immediately prior to the payment year. This way, eligible clinicians will know of their QP status prior to or near the beginning of the MIPS data submission period and know whether they should report to MIPS for the applicable year. Additionally, for purposes of this policy, we do not consider the ending of an Advanced APM's operations to be the voluntary or involuntary termination of an Advanced APM Entity. We consider an Advanced APM's end to be the natural and scheduled close rather than a “dropping out” of participants from the Advanced APM.
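    The approximate timing described above can be laid out as follows; the completion dates shown are illustrative estimates derived from the stated 3 months of claims run-out, not operational commitments.

from datetime import date

# (end of determination data period, approximate completion of the
#  determination, roughly 4 months later)
QP_DETERMINATION_SCHEDULE_2017 = [
    (date(2017, 3, 31), date(2017, 7, 31)),   # first determination
    (date(2017, 6, 30), date(2017, 10, 31)),  # second determination
    (date(2017, 8, 31), date(2018, 1, 1)),    # third determination, on or
                                              # around January 1, before the
                                              # MIPS data submission period
]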

    Figure G illustrates the three QP determinations during a QP Performance Period and the associated period of claims data used for QP determination (A), the Participation List or Affiliated Practitioner List snapshot date (B), the claims run-out period (C), and the estimated completion date of the QP determination (D).

[Figure G: Timeline of the three QP determinations during a QP Performance Period, showing the claims data period, snapshot date, claims run-out, and estimated completion date]

c. Partial QP Election To Report to MIPS

    Section 1848(q)(1)(C)(ii)(II) of the Act excludes from the definition of MIPS eligible clinician an eligible clinician who is a Partial QP for a year. However, under section 1848(q)(1)(C)(vii) of the Act, an eligible clinician who is a Partial QP for a year and reports on applicable measures and activities as required under the MIPS is considered to be a MIPS eligible clinician for the year. To carry out these provisions, we proposed to require that each Advanced APM Entity must make an election each year on behalf of all of its identified participating eligible clinicians on whether to report under MIPS in the event that the eligible clinicians participating in the Advanced APM Entity are determined as a group to be Partial QPs for a year. We proposed that the Advanced APM Entity could change its election for a year at any time during the QP Performance Period, but the election would become permanent at the close of the QP Performance Period. We believe that this is consistent with our proposed general policy to make QP determinations at the Advanced APM Entity level, and with related MIPS policies described in section II.E.3.h. of this final rule with comment period, under which we proposed that each APM Entity would be considered a group for purposes of MIPS reporting. Therefore, we believe that the decision of whether to report and subsequently be subject to MIPS adjustments should also be made at the group level. We solicited comment on whether the Advanced APM Entity or each individual eligible clinician should make the Partial QP MIPS reporting election.

    As discussed in section II.E.3.h. of this final rule with comment period, we recognize that Shared Savings Program eligible clinicians participate as a complete TIN, such that all of the eligible clinicians in the participant billing TIN participate in the Shared Savings Program. Therefore, we also solicited comment on an alternative approach for Shared Savings Program APM Entities in which each individual billing TIN participating in the APM Entity would make the Partial QP election on behalf of its individual eligible clinicians and that election would be applied to all eligible clinicians in that individual billing TIN, as opposed to having the APM Entity (ACO) make the Partial QP election. We stated that we would undertake this alternative only if paired with determining a MIPS final score for each TIN within an APM Entity (ACO) at the TIN level, an alternative discussed under the APM scoring standard in the proposed rule.

    Our proposal that Partial QPs may choose whether to report to MIPS has two additional interactions with other proposed policies. First, because we proposed unique MIPS scoring policies for MIPS eligible clinicians participating in certain APMs, the election by the APM Entity not to report under MIPS is in effect a decision to tell us not to score the information submitted by the APM Entity under MIPS. Under our proposal, that decision would be made at the APM Entity level. APM Entities and eligible clinicians would continue to report to their respective APMs as required under the terms of their participation agreements with us.

    Second, given the proposed timeframe for QP determinations under section II.F.5.a. of the proposed rule (81 FR 28313), our proposed treatment of claims run-out, claims adjustments, supplemental service payments, and alternative payment methods for purposes of QP determination (further detailed in section II.F.8. of the proposed rule (81 FR 28339)), and the subsequent notification of QP determinations proposed under section II.F.5.d. of the proposed rule (81 FR 28322), eligible clinicians who become Partial QPs would not receive notification of this status until after the proposed timeframe for the MIPS reporting period has closed. Although the information necessary for MIPS reporting would already be prepared in our systems by the time the Partial QP determination is made, a prospective election by the Advanced APM Entity not to be scored under MIPS or receive a MIPS payment adjustment would signal us not to transfer information from our reporting system to the MIPS scoring system in the event of a Partial QP determination, and that any submitted information is not to be used for purposes of a MIPS assessment or payment adjustment. Thus, by choosing not to report under MIPS, those Advanced APM Entities and eligible clinicians determined to be Partial QPs would be exempted from the MIPS payment adjustment for that year. We solicited comment on the timing and process for Advanced APM Entities to elect whether to be subject to MIPS in the event of a Partial QP determination.
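    For illustration only, the interaction between the Partial QP election and MIPS described above could be sketched as follows (Python, with hypothetical flags that are not part of CMS systems or this rule): a QP is excluded from MIPS, a Partial QP is scored under MIPS only if the election to report was made on the clinician's behalf, and other eligible clinicians remain subject to MIPS as usual.

def mips_scoring_applies(qp_status, elected_to_report):
    """Hypothetical sketch. qp_status is 'QP', 'Partial QP', or None for the
    eligible clinician; elected_to_report reflects the prospective election
    made by the Advanced APM Entity (or individual, where applicable).
    """
    if qp_status == "QP":
        return False  # QPs are excluded from MIPS for the year.
    if qp_status == "Partial QP":
        # Partial QPs are scored and subject to MIPS payment adjustments only
        # if the election to report under MIPS was made.
        return elected_to_report
    return True  # Otherwise MIPS applies, assuming MIPS eligibility.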

    The following is a summary of the comments we received regarding our proposal for Advanced APM Entities to determine whether or not to be subject to MIPS in the event that their eligible clinicians are determined to be Partial QPs.

    Comment: Several commenters expressed opinions about the level at which the Partial QP decision of whether or not to report to MIPS should be made. Most of these commenters stated strong support for the proposed policy that the decision be made at the APM Entity level in order to reinforce the collective nature of APM participation and, in the event the group is subject to MIPS through a Partial QP election or through missing the Partial QP and QP thresholds, the use of the APM Entity as the defining group for MIPS scoring. Other commenters stated support for making the decision at the TIN level in order to align with billing, other activities outside the APM context, and the TIN-based structure of the Shared Savings Program. A few commenters expressed that the decision should be made individually by each eligible clinician.

    Response: We agree that although there could be some advantages to TIN-level Partial QP decisions, it is most appropriate to retain consistency within the Quality Payment Program in which eligible clinicians in APMs are primarily assessed at the APM Entity level. In the cases for which we make the Partial QP determination at the individual eligible clinician level, those individual eligible clinicians would accordingly make the Partial QP election of whether to be subject to MIPS payment adjustments at an individual level.

    Comment: Some commenters expressed general support for the Partial QP election policy because it provides a degree of flexibility and choice that recognizes those who participate in Advanced APMs to some extent even though the eligible clinicians did not reach the QP threshold.

    Response: We thank the commenters for their support.

    Comment: Several commenters expressed concern about the timing of the Partial QP determinations. They stated that, as proposed, Partial QPs would not be able to make fully informed decisions because they would make their decision of whether or not to be subject to MIPS prior to knowing their ultimate QP status; therefore, Partial QP determinations should be made earlier to avoid prospective decisions.

    Response: We agree with commenters that Partial QP decisions regarding MIPS should not be made without any information regarding a group's QP status. We believe that the finalized QP Performance Period policy resolves this issue: all Advanced APM participants will know whether they are Partial QPs by the beginning of the MIPS submission period and will not need to make MIPS decisions as Partial QPs prior to that point in time.

    Comment: Some commenters were concerned that Partial QPs would not have enough information to make decisions about whether to report to MIPS.

    Response: We note that no eligible clinicians, regardless of whether they are Partial QPs, will be able to know their MIPS payment adjustments until they are actually announced just before the payment year, so a Partial QP decision to report to MIPS does carry with it some unavoidable uncertainty. Each Advanced APM Entity will need to weigh the burden of reporting and the prospect of a MIPS payment adjustment, which could be upward, neutral, or downward for the payment year, against the certainty of choosing exclusion from MIPS payment adjustments.

    Comment: Some commenters suggested alternatives to the consequences of Partial QP status. One commenter recommended that Partial QPs receive a partial bonus payment. Another commenter recommended that Partial QPs who report MIPS data should receive the higher of their MIPS adjustment or a neutral payment adjustment. In other words, the commenter suggested that MIPS adjustments should apply if positive but not if negative.

    Response: We appreciate the ideas for Partial QP policies, but we do not believe the statute provides for the kinds of Partial QP incentives suggested by the commenters.

    In consideration of the comments and the modifications we are making to the proposed QP Performance Period policies, we are not finalizing the proposed policy that Advanced APM Entities would make prospective elections regarding whether or not to score their MIPS data in the event that they are determined to be Partial QPs. Instead, we are finalizing a modified policy such that, following a final determination that eligible clinicians in an Advanced APM Entity group are Partial QPs for a year, the Advanced APM Entity will make an election whether to report to MIPS, thus making all eligible clinicians in the Advanced APM Entity group subject to MIPS reporting requirements and payment adjustments for the year; if the Advanced APM Entity elects not to report, all eligible clinicians in the APM Entity group will be excluded from MIPS adjustments. In the cases where the QP determination is made at the individual eligible clinician level, if the eligible clinician is determined to be a Partial QP, the eligible clinician will make the election whether to report to MIPS and then be subject to MIPS reporting requirements and payment adjustments.

    A Partial QP who elects not to report to MIPS, whether that election is made by the APM Entity or by the individual eligible clinician, is, like a QP, excluded from MIPS across all TINs associated with that Partial QP's NPI. Partial QPs do not under any circumstance receive the APM Incentive Payment.

    Under this finalized policy, only Partial QPs must make this election after the Partial QP determination is made. The finalized QP Performance Period and QP determination policies apply to Partial QP determinations and enable Partial QP determinations to be made in a timeframe that makes the proposed prospective elections unnecessary. This means that Advanced APM Entities do not make a Partial QP decision on behalf of their constituent eligible clinicians unless and until that group actually is determined to be a Partial QP. Similarly, eligible clinicians for whom we make individual QP determinations do not elect whether to report to MIPS unless and until the eligible clinician is determined to be a Partial QP for the year.

    We also clarify how we consider the absence of an explicit election. For situations in which the APM Entity is responsible for making the election on behalf of all eligible clinicians in the APM Entity group, the group of Partial QPs will not participate in MIPS unless the APM Entity opts the group into MIPS participation; that is, no action other than the APM Entity's election for the group to participate in MIPS would result in MIPS participation. We believe that this default minimizes the possibility of unexpected participation in MIPS and also recognizes that most APM Entity groups in this situation would be scored collectively under the APM scoring standard in MIPS, thus necessitating group decision-making.

    For situations in which an eligible clinician is determined to be a Partial QP individually, we will use the eligible clinician's actual reporting activity to determine whether to exclude the Partial QP from MIPS in the absence of an explicit election. Therefore, if an eligible clinician determined as an individual to be a Partial QP submits information to MIPS (which does not include information automatically populated or calculated by CMS on the Partial QP's behalf), we will consider the Partial QP to have reported and thus to be participating in MIPS. Likewise, if an eligible clinician determined as an individual to be a Partial QP does not take any action to submit information to MIPS, then we will consider the Partial QP to have elected to be excluded from MIPS.
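
    For illustration only, the default logic described above can be sketched as follows. This is not regulatory text or an operational CMS specification; the function and parameter names are hypothetical, and the sketch simply restates the two defaults (entity-level groups are excluded unless the APM Entity opts the group in, and individually determined Partial QPs are included only if they actually submit data).

        # Illustrative sketch only; names are hypothetical and the logic simply
        # restates the defaults described in the preamble text above.
        def partial_qp_participates_in_mips(determination_level: str,
                                            entity_elected_to_report: bool,
                                            clinician_submitted_data: bool) -> bool:
            if determination_level == "entity":
                # Default is exclusion unless the APM Entity opts the group in.
                return entity_elected_to_report
            # Individual determination: actual reporting activity controls.
            return clinician_submitted_data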

    We anticipate that there will be relatively few Partial QPs, especially in the first few years of the Quality Payment Program; therefore, we believe that this finalized policy will reduce administrative burden on Advanced APM participants and operate most smoothly with our finalized policies for QP determinations.

    d. Notification of QP Determination

    We proposed to notify both Advanced APM Entities and their participating eligible clinicians of their QP or Partial QP status as soon as we have made the determination and performed all necessary validation of the results. We proposed that this notification would be made directly to the Advanced APM Entity and eligible clinician, and made in combination with a general public notice on the CMS Web site that such determinations have been completed for the applicable QP Performance Period. We proposed that this notification would also contain other necessary and useful information, such as what actions, if any, an Advanced APM Entity or eligible clinician may or should take with respect to MIPS.

    We solicited comment on our proposals to make the QP and Partial QP status notifications. We also solicited comment on other methods and media for the notification of QP and Partial QP status, as well as on the content of such notifications so that they may be as clear and useful as possible.

    The following is a summary of the comments we received regarding our proposal to make notifications regarding the results of QP and Partial QP determinations.

    Comment: Many commenters suggested that CMS notify Advanced APM participants of their QP status as soon as possible so that they can know whether or not they should report to MIPS. Several commenters specifically stated that notification should be made during the QP Performance Period or by February 1 or March 1 of the year following the QP Performance Period.

    Response: We agree that timely notification is important, and we understand that much of the concern for receiving timely notifications is related primarily to the QP Performance Period timeframe. We can only notify Advanced APM participants of their QP status as soon as such status is finalized, and as proposed, that notification could not have occurred prior to April or May of the year following the QP Performance Period. However, under the finalized QP determination timeframe, we will be able to complete QP determinations at three separate times during the QP Performance Period, thus enabling significantly earlier notifications than proposed.

    Comment: Several commenters stated the need for clear communication and offered suggestions on the types of content that should be contained in the notifications. Some commenters recommended that we provide Advanced APM participants with comprehensive information on their Threshold Scores using both the payment amount and patient count methods so that they can see precisely where they stand in relation to the QP thresholds. Other commenters stated that we should send preliminary information to Advanced APM participants before the actual QP determinations so that they can predict their QP status. In addition, some commenters requested that we send reports with data sufficient for Advanced APM Entities to replicate and verify the Threshold Score calculations. Finally, one commenter requested that we include an appeals process following notification of the QP determinations.

    Response: We agree that supplying useful information about Threshold Scores under the different methods in concert with the QP determination is a valuable goal. We will take all of these comments into account as we develop the notification format and content. We also plan to supply Advanced APM participants with preliminary analyses based on their historical claims to help them understand their likelihood of meeting the QP thresholds were they to practice in the Advanced APM similarly to how they have done previously. Finally, section 1833(z)(4) of the Act explicitly excludes administrative or judicial review of the QP determinations, but we will ensure that the calculations undergo a rigorous quality assurance process prior to finalization.

    Comment: Some commenters provided suggestions as to which parties should receive notifications of QP status. One commenter suggested that we send notifications to the Advanced APM Entity and the eligible clinicians in writing or via email and that we publicly post the determinations on the CMS Web site. One commenter stated that the Advanced APM Entity should receive the notification instead of TINs that may be part of the Advanced APM Entity.

    Response: We thank the commenters for their suggestions, and we agree that it is important to ensure that the appropriate parties are properly notified of their status. We will take these comments into consideration when developing our notification processes.

    We are finalizing the proposal to notify Advanced APM Entities and eligible clinicians of their QP or Partial QP status as soon as we have made the determination and performed all necessary validation of the results. This notification process will occur for each of the three QP determinations that we will perform during a QP Performance Period. We will provide additional information on the format of such notifications and the data we will include as part of our public communications following this final rule with comment period.

    6. Qualifying APM Participant Determination: Medicare Option

    a. In General

    Under the Medicare Option, we proposed to calculate a Threshold Score for an Advanced APM Entity—or eligible clinician in the case of an exception described in section II.F.5.b. of the proposed rule (81 FR 28319)—based on participation in an Advanced APM by analyzing claims for Medicare Part B covered professional services. Under the alternative calculation using patient counts in lieu of payments (patient count method), we proposed to similarly calculate a Threshold Score for the Advanced APM Entity based on patient attribution as described in the proposed rule. Under either the payment amount or patient count method, only Medicare Part B covered professional services under the PFS will count toward the numerator and denominator of the Threshold Score calculation.

    Section 1833(z)(2)(A), (B)(i) and (C)(i) of the Act describes the QP determination using the Medicare payment method as follows: a QP is an eligible clinician for whom at least the applicable threshold percentage of payments under Medicare Part B for covered professional services furnished by such clinician during the most recent period for which data are available (which may be less than a year) is attributable to such services furnished through an Advanced APM Entity. Section 1833(z)(2)(D) of the Act describes the basis for the patient count method.

    (1) Attributed Beneficiaries

    In section II.F.3. of the proposed rule (81 FR 28295), we proposed two definitions that would apply specifically for the purposes of QP determination: Attributed beneficiary and attribution-eligible beneficiary. Each term describes a particular relationship between an Advanced APM Entity and the beneficiaries for whose cost and quality of care the participating eligible clinicians are held accountable. These terms are the foundation for how we proposed to count services furnished through an Advanced APM Entity.

    We proposed that “attributed beneficiary” be defined as a beneficiary attributed to the Advanced APM Entity on the latest available list of attributed beneficiaries during the QP Performance Period based on each APM's respective attribution rules. There are some natural advantages to using this term for the purposes of QP determination because it is consistent with how many APMs—including the Shared Savings Program (assigned beneficiaries), Next Generation ACO Model (aligned beneficiaries), and BPCI Model (attributed beneficiaries)—identify the beneficiaries whose outcomes and costs are included in an APM Entity's assessment. We believe that using the same construct also coordinates the incentives under the Advanced APM with the incentives under the MACRA by addressing the same beneficiary population.

    In most episode payment models, such as the CJR Model, attribution is defined by the beneficiaries who trigger the defined episode of care under the model, often by presenting with a specific condition at the location of a participating APM Entity. In many attribution-based APMs, such as ACO initiatives or the Comprehensive Primary Care Initiative, CMS attributes beneficiaries to APM Entities through claims-based algorithms that identify the APM Entity with the plurality of evaluation and management visits for a beneficiary. In addition, most APMs do not allow beneficiaries to be attributed to more than one APM Entity. This means that the greater the APM Entity density in a market, the lower the attributed population for a given APM Entity will be as a percent of its total beneficiaries. We solicited comment on the proposed methodology for defining the attributed beneficiary population, including comment on alternative methods for capturing the most meaningful cohort of attributed beneficiaries.

    Under these plurality-based approaches, typically only 30-50 percent of the beneficiaries for whom an Advanced APM Entity's eligible clinicians furnish services are actually attributed to the Advanced APM Entity for a performance period. These percentages reflect a combination of CMS' design decisions, beneficiaries' underlying care patterns, and the fact that beneficiaries in Medicare FFS retain freedom of choice to select clinicians; in other words, they reflect conditions that are not entirely under the control of the APM Entity or its eligible clinicians. Thus, we recognize that because Advanced APMs have different attribution methodologies, using each Advanced APM's specific attributed beneficiary definition may create a standard that advantages or disadvantages participation in certain Advanced APMs relative to others simply based on their specific attribution policies.

    We proposed to use the attributed beneficiaries on Advanced APM attribution lists generated by each Advanced APM in making QP determinations. We also proposed that the attributed beneficiary list would be taken from the Advanced APM's latest available list at the end of the QP Performance Period prior to making the QP determinations. For episode payment models, attributed beneficiaries would be those beneficiaries who trigger episodes of care under the terms of the APM.

    We believe that this approach to attribution lists maintains consistency with the panel of beneficiaries for whom Advanced APM Entities are responsible under their respective Advanced APMs during the QP Performance Period. Therefore, we believe that such lists would be appropriate for use in QP determinations. Advanced APM Entities are already accustomed to providing care for the panel of beneficiaries represented by their APM Entity-specific list. We believe that our proposal to link attribution for QP determination to Advanced APM attribution lists further strengthens the goals of the Advanced APMs in which these Advanced APM Entities participate. By using the same beneficiary population for QP determination purposes, Advanced APM Entities may continue focusing on the care they furnish to the same panel of attributed beneficiaries, instead of shifting focus and changing practice patterns to reach a QP threshold. As stated in our principles in section II.F.1. of the proposed rule (81 FR 28293), we intend for the QP determination process to seamlessly reward participation in Advanced APMs, not to create a new set of performance standards distinct from the goals of APMs.

    We solicited comment on our proposal for determining which beneficiaries are considered attributed to an Advanced APM Entity.

    The following is a summary of the comments we received regarding our proposal to define the attributed beneficiary population based on actual Advanced APM attribution and to use the latest available attribution list at the end of the QP Performance Period for QP determinations.

    Comment: We received several comments related to attribution more generally, such as how to improve attribution in APMs, enable attribution across multiple entities or APMs, allow for review and modification of attribution lists, and requests for clarification of how attribution is performed in APMs.

    Response: We thank the commenters for their input. However, these issues are beyond the scope of this final rule with comment period.

    We are finalizing the definition of attributed beneficiary as proposed. We are finalizing that we would identify the attributed beneficiaries for an Advanced APM Entity based on the latest available attribution list at the time of a QP determination. This differs slightly from the proposed policy in order to align with the finalized QP Performance Period policies in this section and enable QP determinations to be made earlier in the QP Performance Period.

    (2) Attribution-Eligible Beneficiaries

    Consistent with our proposed definition of attributed beneficiary, our proposed definition of attribution-eligible beneficiary would allow us to treat Advanced APMs uniformly in how we consider the population of beneficiaries served by an Advanced APM Entity for the purposes of QP determination. To be attributed to an Advanced APM Entity in an Advanced APM, a beneficiary is first required to meet certain eligibility criteria. Specifically, for purposes of QP determinations, we proposed that an attribution-eligible beneficiary would be one who:

    (1) Is not enrolled in Medicare Advantage or a Medicare cost plan.

    (2) Does not have Medicare as a secondary payer.

    (3) Is enrolled in both Medicare Parts A and B.

    (4) Is at least 18 years of age.

    (5) Is a United States resident.

    (6) Has a minimum of one claim for evaluation and management services by an eligible clinician or group of eligible clinicians within an APM Entity for any period during the QP Performance Period.

    An attribution-eligible beneficiary may or may not be an attributed beneficiary; attributed beneficiaries are a subset of attribution-eligible beneficiaries. Much like the term “attributed beneficiary,” the term “attribution-eligible beneficiary” is generally consistent with the attribution methodologies used in most current APMs to identify the beneficiaries who could potentially be attributed to an APM Entity. Although the factors we proposed for the definition of an attribution-eligible beneficiary in this context would apply only for the purposes of QP determinations, and would not change APM-specific methodologies, we believe that the factors in the proposed definition are representative of the methodologies most current APMs use to perform attribution. Therefore, we believe the definition would serve as a practical common set of criteria to apply in QP threshold calculations.
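
    For illustration only, the six criteria can be expressed as a simple screening check such as the sketch below. The data elements shown (for example, enrolled_in_ma_or_cost_plan and em_claims_from_apm_entity) are hypothetical and do not correspond to any actual CMS data field; the sketch simply restates the proposed definition.

        # Illustrative sketch only; field names are hypothetical and do not
        # reflect any actual CMS data element or system.
        from dataclasses import dataclass

        @dataclass
        class Beneficiary:
            enrolled_in_ma_or_cost_plan: bool  # criterion 1
            medicare_secondary_payer: bool     # criterion 2
            enrolled_in_part_a: bool           # criterion 3
            enrolled_in_part_b: bool           # criterion 3
            age: int                           # criterion 4
            us_resident: bool                  # criterion 5
            em_claims_from_apm_entity: int     # criterion 6

        def is_attribution_eligible(b: Beneficiary) -> bool:
            # Apply the six attribution-eligibility criteria described above.
            return (not b.enrolled_in_ma_or_cost_plan
                    and not b.medicare_secondary_payer
                    and b.enrolled_in_part_a and b.enrolled_in_part_b
                    and b.age >= 18
                    and b.us_resident
                    and b.em_claims_from_apm_entity >= 1)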

    The purpose of using the attribution-eligible construct is to ensure that the denominator of QP determination calculations described in this section only includes payments for services furnished to patients who could potentially be attributed to an Advanced APM Entity under the Advanced APM, and thus could also appear in the numerator of the QP determination calculations. We believe that including amounts in the denominator that could not possibly be included in the numerator would be arbitrarily punitive toward certain Advanced APM Entities that furnish services to a substantial population of non-attribution-eligible beneficiaries.

    We note that specialty-focused or disease-specific APMs may have attribution methodologies that are not based on evaluation and management services. Therefore, we anticipate needing targeted exceptions, especially related to the sixth factor of the definition of attribution-eligible beneficiary, for such APMs so that the attributed beneficiary population is truly a subset of the attribution-eligible population. Such exceptions would be made either through rulemaking or using available waiver authority and would be announced when the APM is announced.

    For example, under the CEC Model, one criterion, among others, to be an aligned beneficiary requires that the beneficiary receive maintenance dialysis services. In the event that the CEC Model were determined to be an Advanced APM, we would consider attribution-eligible beneficiaries for the APM Entities participating in the CEC Model to be beneficiaries that meet the first five criteria outlined above and that have had at least one maintenance dialysis service billed through the Advanced APM Entity during the QP Performance Period. We would make this exception for the CEC Model to ensure that the denominator of QP determination calculations described in this section only includes payments for services furnished to patients who could potentially be attributed to an Advanced APM Entity under the Advanced APM.

    Although the availability of such exceptions, as outlined above, would create multiple standards for the beneficiaries that are attribution-eligible, we believe this slightly more complex approach is more appropriate and equitable because it is consistent with the design of APMs. An alternative approach could be to have a simple standard that includes in the denominator all beneficiaries who are furnished any Medicare Part B covered professional service by eligible clinicians participating in the Advanced APM Entity.

    We solicited comment on the proposed general definition of attribution-eligible beneficiary and on our proposal to use APM-specific standards as necessary to fulfill our expressed goals for specialty- or disease-focused APMs that may use alternative attribution methodologies.

    The following is a summary of the comments we received regarding our proposal to define attribution-eligible beneficiaries.

    Comment: Many commenters stated support for altering the definition of attribution-eligible in circumstances when an Advanced APM does not base attribution on evaluation and management services in order to support disease- and specialty-focused APMs, such as BPCI, CJR, OCM, and CEC, with the assumption that these APMs would be Advanced APMs. Some commenters requested that we explain how such exceptions will be made and that stakeholders have input in defining the rules.

    Response: We do not believe that there should be a formal application process for these exceptions because we operate both the Quality Payment Program and Advanced APMs. Therefore, we will make assessments appropriate to the interactions between programs. As we explained with respect to the CEC Model, we would consider whether the evaluation and management service basis for the definition of attribution-eligible beneficiary is appropriate for assessing eligible clinicians' participation in an Advanced APM. If evaluation and management services are significantly at odds with actual attribution and the evaluation of performance on the cost and quality of beneficiary care under an Advanced APM, we may consider a different basis for the attribution-eligible beneficiary definition that takes into consideration attribution within the Advanced APM. In that case, we would make an exception either through rulemaking or by using available waiver authority, and we would announce the exception in connection with notifications for the APM.

    Comment: In direct response to our solicitation on defining attribution-eligible beneficiaries in the context of the CEC initiative, several commenters suggested that only patients on dialysis be included in the attribution-eligible definition, which would exclude those patients with CKD or a kidney transplant unless and until the CEC initiative expands to include responsibility for CKD or kidney transplant patients.

    Response: We appreciate the detailed input commenters offered in response to our solicitation regarding the CEC initiative and agree that these are important components to appropriately defining the attribution-eligible population. We will take these responses into account as needed to develop the basis for attribution-eligible beneficiaries for CEC.

    We are finalizing the definition of attribution-eligible to mean a beneficiary who:

    • Is not enrolled in Medicare Advantage or a Medicare cost plan.

    • Does not have Medicare as a secondary payer.

    • Is enrolled in both Medicare Parts A and B.

    • Is at least 18 years of age.

    • Is a United States resident.

    • Has a minimum of one claim for evaluation and management services by an eligible clinician or group of eligible clinicians within an APM Entity for any period during the QP Performance Period.

    We are also finalizing that, for Advanced APMs that do not base attribution on evaluation and management services and for which attributed beneficiaries are not, in fact, a subset of the attribution-eligible beneficiary population based on the requirement to have at least one claim for evaluation and management services furnished by an eligible clinician who is in the APM Entity for any period during the QP Performance Period, we will modify the definition of attribution-eligible beneficiary for that Advanced APM only in order to identify the appropriate attribution-eligible population based upon the attribution methodology of the Advanced APM (for example, a combination of evaluation and management services and/or other Part B covered professional services). We will announce these exceptions to the extent applicable in a manner consistent with the Advanced APM notification process under section II.F.4. of this final rule with comment period.

    For example, we would develop such an exception for the CEC Model to the extent it is determined to be an Advanced APM to ensure that the denominator of QP determination calculations described in this section only includes payments for services furnished to patients who could potentially be attributed to ESRD Seamless Care Organizations (ESCOs).

    b. Payment Amount Method

    This section describes how we will calculate a Threshold Score for the eligible clinician group in an Advanced APM Entity—or individual eligible clinician in the exception situations under section II.F.6. of this final rule with comment period—using the payment amount method, which would then be compared to the relevant QP Payment Amount Threshold and Partial QP Payment Amount Threshold to determine the eligible clinician's QP status for a payment year.

    (1) Claims Methodology and Adjustments

    For the payment amount method, sections 1833(z)(2)(A), (B)(i) and (C)(i) of the Act require that we use payments for Medicare Part B covered professional services to make QP determinations. Covered professional services are defined under section 1848(k)(3)(A) of the Act as services for which payment is made under, or based on, the PFS. The payment amounts discussed in this proposal include only payments for Medicare Part B services under, or based on, the PFS, even if an Advanced APM bases attribution and/or financial risk on payments other than or in addition to Medicare Part B payments.

    We proposed to use all available Medicare Part B claims information generated during the QP Performance Period. Additionally, we proposed that CMS would treat claims run-out, claims adjustments, supplemental service payments, and alternative payment methods in the same manner for purposes of calculating both the Threshold Score and the APM Incentive Payment amount. We further detailed our proposals to account for claims run-out, claims adjustments, non-claims-based payments, and alternative payment methods in section II.F.8. of the proposed rule (81 FR 28339).

    We believe it is appropriate to maintain consistency across the QP determination and the incentive payment calculation in order to support internal CMS operational consistencies. It also ensures that any unique payment mechanisms within an Advanced APM do not affect the opportunity for an eligible clinician to reach the QP threshold.

    We solicited comment on whether the claims methodology we use under the Medicare payment method should align with the proposed claims methodology for purposes of calculating the estimated aggregate payment amount for the APM Incentive Payment.

    The following is a summary of the comments we received regarding our proposal for the QP payment amount method to use all available claims information for Medicare Part B covered professional services during the QP Performance Period and to treat claims run-out, claims adjustments, supplemental service payments, and alternative payment methods in the same manner as that used for the APM Incentive Payment calculation.

    Comment: Several commenters expressed general support for the QP determination methodologies in this section, including our interpretation of which payments and patients are considered “through” an Advanced APM and that we will use the same treatment of claims for calculating the Threshold Scores in this section and the APM Incentive Payment in section II.F.8. of this final rule with comment period.

    Response: We thank the commenters for their support of our approach to QP determination methodologies.

    Comment: Some commenters were uncertain about how “incident to” items and services would be considered in QP calculations.

    Response: We would consider “incident to” billing for covered professional services to be covered professional services when calculating the Threshold Scores, as long as the NPI billing for the “incident to” claims is identified as a participant in the Advanced APM Entity. We further clarify that this would exclude “incident to” payment for drugs, biologics, and devices covered under Medicare Part B because those are not covered professional services.

    Comment: One commenter requested that we use the Medicare paid amount instead of the allowed amount when calculating the Threshold Score.

    Response: We do not believe it would be appropriate to use the Medicare paid amount instead of the allowed amount when calculating Threshold Scores. The Medicare paid amount reflects any reductions from the Medicare PFS amount for beneficiary co-payments or coinsurance requirements, and also reflects any payment adjustments that are applied to fee schedule payments, such as positive or negative payment adjustments from the PQRS, MU, VM, or MIPS programs. Including these adjustments is inconsistent with our proposal to exclude payment adjustments from these programs that we finalized in section II.F.8. of this final rule with comment period.

    We are finalizing that for the QP payment amount method we will use all available claims information for Medicare Part B covered professional services during the applicable QP determination period as described in this section of the final rule with comment period.

    (2) Threshold Score Calculation

    In general, our method for deriving a Threshold Score for an Advanced APM Entity is to divide the value described under paragraph (a) of this section by the value described under paragraph (b) of this section. This calculation will result in a percent value that CMS will compare to the QP Payment Amount Threshold and the Partial QP Payment Amount Threshold to determine the QP status for all eligible clinicians in the Advanced APM Entity for the payment year.
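
    Stated as an equation, the calculation described in this section can be illustrated as follows (this restates the method; it is not additional regulatory text):

        \[
        \text{Threshold Score} = \frac{P_{\text{attributed}}}{P_{\text{attribution-eligible}}} \times 100
        \]

    where P_attributed is the value described under paragraph (a) of this section (aggregate payments for Medicare Part B covered professional services furnished to attributed beneficiaries) and P_attribution-eligible is the value described under paragraph (b) of this section (aggregate payments for Medicare Part B covered professional services furnished to attribution-eligible beneficiaries).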

    (a) Numerator

    We proposed that the numerator for this calculation would be the aggregate of all payments for Medicare Part B covered professional services furnished by the eligible clinicians in the Advanced APM Entity to attributed beneficiaries during the QP Performance Period.

    We believe that this method is the most logical reading of the statute and is reflective of the population of beneficiaries for whom an Advanced APM Entity is responsible for cost and quality. Therefore, we believe that counting payments for covered professional services furnished to attributed beneficiaries is the most suitable metric for payments that are attributable to services furnished “through” an Advanced APM Entity. In episode payment models, because a beneficiary is considered attributed during the course of an episode, the payments included in the numerator for this calculation are those for Medicare Part B covered professional services furnished to an attributed beneficiary by eligible clinicians in the Advanced APM Entity during the course of an episode.

    One program integrity concern is that an Advanced APM Entity might meet the higher QP Payment Amount Threshold in later years by providing substantially disproportionate amounts of care for attributed beneficiaries relative to all others. However, because of the financial risk an Advanced APM Entity bears, which is usually based on expenditures, we believe that the relatively large potential loss under the Advanced APM would outweigh the advantage of any overutilization geared toward abusing Threshold Score calculations. We solicited comment on any alternative numerators we could use for purposes of the Medicare payment method that meaningfully meet statutory requirements, are understandable, and operationally feasible.

    The following is a summary of the comments we received regarding our proposal to calculate the numerator of the Threshold Score under the QP payment amount method using the aggregate of all payments for Medicare Part B covered professional services furnished by the eligible clinicians in the Advanced APM Entity to attributed beneficiaries during the QP Performance Period.

    Comment: We received few comments regarding the numerators for QP determinations, but most commenters generally supported basing the numerator on services furnished to an Advanced APM Entity's attributed beneficiary population. One commenter suggested that we include in the numerator essentially all physician payments for which a physician is listed as an attending physician because the commenter stated that hospitalists are ultimately responsible for all spending for a patient.

    Response: We thank the commenters for their general support of our approach. We do not believe it is meaningful or consistent with the statute to design numerators that are mathematically the same as, or potentially larger than, denominators. Although we understand that Advanced APM participation may have valuable spillover effects into other aspects of clinical practice, we must base the calculations in terms of direct Advanced APM participation, not any activity that appears similar in nature to Advanced APM activity.

    We are finalizing the numerator of the Threshold Score under the QP payment amount method to be the aggregate of all payments for Medicare Part B covered professional services furnished by the eligible clinicians in the Advanced APM Entity to attributed beneficiaries during the timeframe used for QP determination.

    This is identical to the proposed policy except that the applicable range of service dates will vary depending on which of the three QP determinations during a QP Performance Period is being performed in accordance with the finalized QP Performance Period policy in this section and illustrated in Figure G.

    (b) Denominator

    We proposed that the denominator in the Medicare payment method would be the aggregate of all payments for Medicare Part B covered professional services furnished by the eligible clinicians in the Advanced APM Entity to attribution-eligible beneficiaries during the QP Performance Period. We proposed that when the QP determination is made at the eligible clinician level as described in section II.F.5. of the proposed rule (81 FR 28313), the denominator would be the total of all payments for Medicare Part B covered professional services furnished to attribution-eligible beneficiaries by the eligible clinician. In episode payment models, the payments included in the denominator for this calculation as proposed would be those for Medicare Part B covered professional services furnished to any attribution-eligible beneficiary by eligible clinicians in the Advanced APM Entity. This would include all such services to all attribution-eligible beneficiaries whether or not such services occur during the course of an episode under the Advanced APM.

    Including payment for services furnished only to attribution-eligible beneficiaries standardizes the denominator to ensure fairness across types of eligible clinicians and geographic regions. By using the attribution-eligible population, the denominator will not penalize entities for furnishing services to beneficiaries who could not possibly be in the numerator through attribution under an Advanced APM. For example, an ACO's eligible clinicians may furnish services to a large population of beneficiaries with Medicare as a secondary payer. Those beneficiaries may not be eligible for attribution to the ACO, and could never be included in the numerator. Therefore, we believe that this methodology focuses on factors for which Advanced APM Entities have some control rather than those for which they may have no control or that disadvantage certain organizational structures or types of APMs. We solicited comment on alternative methods that are consistent with the statutory language.
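
    As a minimal illustration of how the numerator and denominator described above fit together under the payment amount method, consider the following sketch. It is not a CMS implementation; the claim fields (beneficiary_id, allowed_amount) and input structures are hypothetical, and the sketch simply sums allowed amounts for Part B covered professional services billed by the Advanced APM Entity's eligible clinicians, split by whether the beneficiary is attributed or only attribution-eligible.

        # Illustrative sketch only; claim fields and inputs are hypothetical.
        def payment_amount_threshold_score(claims, attributed_ids, attribution_eligible_ids):
            # claims: iterable of dicts with 'beneficiary_id' and 'allowed_amount'
            #   for Part B covered professional services furnished by the Advanced
            #   APM Entity's eligible clinicians during the QP determination period.
            # attributed_ids: set of beneficiary IDs on the APM's attribution list
            #   (a subset of attribution_eligible_ids).
            numerator = sum(c['allowed_amount'] for c in claims
                            if c['beneficiary_id'] in attributed_ids)
            denominator = sum(c['allowed_amount'] for c in claims
                              if c['beneficiary_id'] in attribution_eligible_ids)
            if denominator == 0:
                return None  # no attribution-eligible payments in the period
            return 100.0 * numerator / denominator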

    The following is a summary of the comments we received regarding our proposal to calculate the denominator of the Threshold Score for the QP payment amount method using the aggregate of all payments for Medicare Part B covered professional services furnished by the eligible clinicians in the Advanced APM Entity to attribution-eligible beneficiaries during the QP Performance Period.

    Comment: Several commenters expressed support for the basis of the denominator being tied to attribution-eligible beneficiaries because it meaningfully limits the denominator to those beneficiaries that could potentially be in the numerator. Some commenters recommended that we consider adjusting the denominator to ensure that episodes are treated appropriately in the numerator and the denominator. For instance, one commenter suggested that the last element of the attribution-eligible definition be tied to all beneficiaries with the specific disease, condition, or episode to whom the Advanced APM Entity's eligible clinicians furnished Medicare Part B covered professional services.

    Response: We thank the commenters for their support and agree that the attribution-eligible construct is a meaningful way to define the denominator. We also believe that evaluation and management services remain a consistent standard that identifies the population of beneficiaries whom eligible clinicians can consider their patients, even though some APMs base attribution on services other than evaluation and management services. The narrow focus of some APMs, primarily episode payment models, may make it relatively difficult for participants to reach the QP thresholds in comparison to APMs that are based upon a more comprehensive assessment of beneficiary care. Nonetheless, we believe that the QP thresholds will still be attainable by episode payment model participants who have a significant portion of their practice focused on the services upon which the APM is based. Customizing the denominator to the particular services upon which an APM is focused could, in many cases, reduce the denominator so much that it would not be meaningfully representative of an eligible clinician's business under Medicare Part B. Under such an approach, the 5 percent APM Incentive Payment, which is based on an eligible clinician's payments for Part B covered professional services, could exceed the entire denominator value in many cases. Therefore, we continue to believe that the proposed policy as applied to episode payment models is appropriate and representative of an eligible clinician's degree of Part B-related participation in an Advanced APM.

    Comment: A few commenters expressed concern that the denominator will be difficult for APM participants to estimate, thus causing uncertainty about their likely QP status. Most APMs currently provide APM Entities only with an attributed beneficiary list, not an attribution-eligible beneficiary list.

    Response: We appreciate the feedback on the inability to precisely predict an Advanced APM Entity's Threshold Score because this is a new calculation without historical scores or readily available information on beneficiaries considered attribution-eligible. Each APM manages the beneficiary attribution rules and lists provided to participants. However, we believe that the preliminary Threshold Score information we plan to send to Advanced APM participants near the beginning of a QP Performance Period will mitigate these concerns. We welcome input for the future on particular types of data that Advanced APM participants would find helpful in their analyses.

    Comment: Several commenters expressed concern about the potential difficulty of attaining high Threshold Scores based on the proposed denominator. In particular, commenters cited specialists, who in most cases may participate in only one ACO but often see a broad range of patients across many networks, most of whom are not attributed to the specialist's particular ACO. Commenters stated that this could dilute the denominator and be detrimental to the entire APM Entity's ability to meet the higher QP thresholds, creating an inadvertent incentive to remove specialists from Participation Lists in the future. Some commenters requested that we find a way other than attribution to define the denominator or that we separately evaluate non-primary care practitioners so that the relative breadth of their practices is not a detriment to the Threshold Score. One commenter suggested that we delay the QP threshold increases so that the higher thresholds take effect further in the future. One commenter stated that it is unfair to have a metric for which attaining a 100 percent score is often not possible, particularly for specialty practice groups. Another commenter suggested that we simply use attributed beneficiaries in the denominator instead of attribution-eligible beneficiaries, acknowledging that this would essentially give a 100 percent score to all Advanced APM Entities. Some commenters suggested that we exclude from the attribution-eligible category any beneficiaries who are actually attributed to other Advanced APM Entities in order to address the issue of attribution competition over beneficiaries in regions with a high density of Advanced APM Entities. Finally, some commenters recommended that we include service area adjustments to account for mobile beneficiary populations that may only reside in an area for a portion of a year.

    Response: We understand the commenters' concerns about the denominator being a key factor in reaching the higher QP thresholds. We agree and have developed the attribution-eligible definition in response to otherwise unrealistic thresholds for many organizations. That said, we do not believe it is necessary to make a 100 percent Threshold Score attainable to all Advanced APM participants because this is, as the name implies, a metric based on reaching a threshold. Once the threshold is met, no additional benefit accrues to those with higher Threshold Scores. With respect to the suggestion of removing beneficiaries attributed to other Advanced APM Entities from the denominator, we appreciate the idea and agree that it would have the effect of shrinking the denominator. However, we believe it is important to provide an incentive for Advanced APM Entities to strive to expand their attributed population. It is consistent with our goals that, within their capabilities, Advanced APM Entities are responsible for the cost and quality of as many beneficiaries as possible. With respect to service area adjustments, we believe that this would add a high degree of complexity to this calculation with very minimal positive impact on Threshold Scores.

    We will closely monitor Threshold Scores, particularly with respect to the impact of specialist participation, the fragmentation of where beneficiaries seek their care, or circumstances such as high rates of “snowbird” patients affecting the denominator. We will monitor Threshold Scores at both the APM Entity and individual levels to understand how group Threshold Scores may vary based on characteristics of attributed beneficiary populations and the eligible clinicians in an Advanced APM Entity.

    We believe that ACOs, which on average have specialist representation on their Participation Lists approximately representative of the specialist distribution nationally, will generally be able to meet the QP thresholds. We also believe that participation in any episode payment models that are Advanced APMs could be an opportunity, consistent with the relatively narrow scope of many episode payment models, for eligible clinicians to become a QP.

    Comment: A few commenters believe that the attribution-eligible construct for the denominator could inadvertently create an incentive to not provide necessary services or to select patients for purposes of meeting the QP thresholds.

    Response: We appreciate the consideration for program integrity concerns. We take these concerns seriously, and, as described in section II.F.9. of this final rule with comment period, we will be monitoring the program for patterns of behavior with unintended negative consequences.

    We are finalizing the denominator of the Threshold Score under the QP payment amount method to be the aggregate of all payments for Medicare Part B covered professional services furnished by the eligible clinicians in the Advanced APM Entity to attribution-eligible beneficiaries during the timeframe used for QP determination. This is identical to the proposed policy except that the applicable range of service dates will vary depending on which of the three QP determinations during a QP Performance Period is being performed in accordance with the finalized QP Performance Period policy in this section.

    c. Patient Count Method

    Similar to the Medicare payment amount method, this section describes our proposal for calculating a Threshold Score for the eligible clinicians participating in an Advanced APM Entity—or eligible clinician in situations under section II.F.6. of this final rule with comment period—using the Medicare patient count method, which would then be compared against the relevant QP Patient Count Threshold and Partial QP Patient Count Threshold to determine the QP status of an eligible clinician for the year. Given our authority under section 1833(z)(2)(D) of the Act to use patient counts in lieu of payments “as the Secretary determines appropriate,” we are interpreting the patient count method to offer a more flexible alternative to the payment method. As previously mentioned, the purpose of the proposed design of the Medicare patient count method is to make QP status determinations accessible to entities and individuals who are clearly and significantly engaged in delivering value-based care through participation in Advanced APMs.

    (1) Unique Beneficiaries

    We proposed that when counting the number of beneficiaries under this method, we may count a given beneficiary in the numerator and denominator for multiple different Advanced APM Entities or eligible clinicians.

    We proposed that we would not count any beneficiary more than once for any single Advanced APM Entity or eligible clinician. In other words, for each Advanced APM Entity or eligible clinician, we would count each unique beneficiary no more than one time in the numerator and one time in the denominator.

    We believe that counting beneficiaries this way retains the integrity of the Threshold Scores by preventing double counting of beneficiaries within an Advanced APM Entity while recognizing the reality that beneficiaries often have relationships with multiple different organizations.

    To be consistent with the Medicare payment method, we proposed that beneficiary counts would be based on any beneficiary for whom the eligible clinicians within an Advanced APM Entity receive payments for Part B covered professional services, or professional services furnished at an RHC or FQHC as described in this section, even if an Advanced APM bases its attribution and/or financial risk on both Parts A and B. We proposed that for this Threshold Score calculation, we would use any and all available Part B claims information generated during the QP Performance Period. We received no specific comments regarding our proposals to enable a beneficiary to be counted for multiple APM Entities but to count a beneficiary no more than once per APM Entity.

    We are finalizing the policy for the Medicare patient count method to enable a beneficiary to be counted in the numerator and denominator for multiple APM Entities or eligible clinicians but to count a beneficiary no more than once in the numerator and once in the denominator per APM Entity or eligible clinician. We are also finalizing the policy to base patient counts on any beneficiary for whom the eligible clinicians within an Advanced APM Entity receive payments for Part B covered professional services, or professional services furnished at an RHC or FQHC as described in this section, and to use any and all available Part B claims information generated during the QP Performance Period.
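
    For illustration only, the counting rule finalized above can be sketched as follows. The input structures are hypothetical and not drawn from any CMS system; the point of the sketch is simply that beneficiaries are deduplicated within each APM Entity but may be counted for more than one APM Entity.

        # Illustrative sketch only; input structures are hypothetical.
        def unique_patient_counts(claim_bene_ids_by_entity, attributed_ids_by_entity,
                                  attribution_eligible_ids_by_entity):
            # claim_bene_ids_by_entity: dict mapping an APM Entity ID to the list of
            #   beneficiary IDs on Part B claims billed by that entity's eligible
            #   clinicians (duplicates allowed).
            # attributed_ids_by_entity and attribution_eligible_ids_by_entity:
            #   dicts mapping an APM Entity ID to sets of beneficiary IDs.
            # Returns a dict mapping each APM Entity ID to a
            #   (numerator_count, denominator_count) pair.
            results = {}
            for entity, bene_ids in claim_bene_ids_by_entity.items():
                seen = set(bene_ids)  # count each beneficiary once per entity
                numerator = len(seen & attributed_ids_by_entity[entity])
                denominator = len(seen & attribution_eligible_ids_by_entity[entity])
                results[entity] = (numerator, denominator)
            return results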

    (2) Threshold Score Calculation

    We proposed that the Threshold Score would be calculated under the Medicare patient count method as a percent by dividing the value described under paragraph (a) of this section by the value described under paragraph (b) of this section. We include the formula and examples in the summary equation below.
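
    In equation form, the calculation described above can be illustrated as follows (a restatement of the method, not additional regulatory text):

        \[
        \text{Threshold Score} = \frac{N_{\text{attributed}}}{N_{\text{attribution-eligible}}} \times 100
        \]

    where N_attributed is the number of unique attributed beneficiaries described under paragraph (a) of this section and N_attribution-eligible is the number of attribution-eligible beneficiaries described under paragraph (b) of this section.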

    (a) Numerator

    We proposed that the numerator would be the number of unique attributed beneficiaries to whom eligible clinicians in the Advanced APM Entity furnish Medicare Part B covered professional services during the QP Performance Period. For episode payment models, this would include the number of attributed beneficiaries furnished Medicare Part B covered professional services, or professional services at an RHC or FQHC as described in this section, by eligible clinicians in the Advanced APM Entity during the course of an episode under the Advanced APM.

    We did not receive any comments uniquely responding to our proposal for the numerator in the patient count method that were not also applicable to the payment amount method. Therefore, relevant comments were addressed in the payment amount numerator section.

    We are finalizing the policy that the numerator of the Threshold Score for the QP patient count method will be the number of unique attributed beneficiaries to whom eligible clinicians in the Advanced APM Entity furnish Medicare Part B covered professional services, or professional services at an RHC or FQHC as described in this section, during the QP determination timeframe. For episode payment models, the numerator will be the number of attributed beneficiaries furnished Medicare Part B covered professional services by eligible clinicians in the Advanced APM Entity during the course of an episode under the Advanced APM. This policy is identical to the proposed policy except that the applicable range of service dates will vary depending on which of the three QP determinations during a QP Performance Period is being performed in accordance with the finalized QP Performance Period policy in this section.

    (b) Denominator

    We proposed that the denominator would be the number of attribution-eligible beneficiaries to whom eligible clinicians in the Advanced APM Entity furnish covered professional services during the QP Performance Period. For episode payment models, this would include the number of attribution-eligible beneficiaries furnished Medicare Part B covered professional services by eligible clinicians in the Advanced APM Entity group at any point during the QP Performance Period, irrespective of whether such services occur during the course of an episode.

    We solicited comment on alternative approaches to the patient count method that would achieve our goal of a simple and meaningful Threshold Score calculation.

    We did not receive any comments uniquely responding to our proposal for the denominator in the patient count method that were not also applicable to the payment amount method. Therefore, relevant comments were addressed in the payment amount denominator section.

    We are finalizing the denominator of the Threshold Score under the QP patient count method to be the number of attribution-eligible beneficiaries to whom eligible clinicians in the Advanced APM Entity furnish covered professional services during the timeframe used for QP determination.

    This is identical to the proposed policy except that the applicable range of service dates will vary depending on which of the three QP determinations during a QP Performance Period is being performed in accordance with the finalized QP Performance Period policy in this section. In general, we believe that through consistency with the payment amount method, this approach balances our interests of relative simplicity and having a meaningful standard that recognizes the common aspects of attribution and accountability under Advanced APMs. Similar to the payment amount method, the patient count method represents a proportion of the patients for whom an Advanced APM Entity is accountable under the Advanced APM with respect to all patients who could potentially be attributed to the Advanced APM Entity under the Advanced APM.

    (3) APM Entity Participation in Multiple Advanced APMs

    We proposed that if the same Advanced APM Entity participates in multiple Advanced APMs and if at least one of those Advanced APMs is an episode payment model, we would add the number of unique beneficiaries in the numerator of the episode payment model Advanced APM Entity to the numerator(s) for non-episode payment models in which the Advanced APM Entity participates. For example, if an Advanced APM Entity is an ACO in Track 3 of the Shared Savings Program and also in the OCM (assuming these are both Advanced APMs for purposes of this example), we would add the entity's unique attributed beneficiaries in OCM to the numerator for its Shared Savings Program Track 3 Threshold Score calculation. We proposed that for purposes of the APM incentive, Advanced APM Entities would be considered the same if we determine that the eligible clinician Participation Lists are the same or substantially similar, or if the Advanced APM Entity participating in one Advanced APM is the same as, or is a subset of, the other.

    The purpose of this proposal was to allow the logical combination of activities under multiple Advanced APMs where appropriate. We believe that the purpose of the incentives for Advanced APM participation is to capture the degree of Advanced APM participation generally, not simply the degree of participation within a single Advanced APM. Where relevant and operationally feasible, we want this program to encourage participation in multiple Advanced APMs. The counterfactual where we would not account for a single Advanced APM Entity's participation in multiple Advanced APMs could be seen as punitive. For instance, an Advanced APM Entity could serve the vast majority of its beneficiaries through several Advanced APMs, but unless that participation is aggregated, the entity could end up with several lower Threshold Scores that are below the QP Patient Count Threshold and not indicative of its broader participation.

    We understand the difficulty associated with determining whether two Advanced APM Entities are in fact the same organization. It is highly unlikely that their Participation Lists would be exactly the same. Therefore, we solicited comment on how best to make a determination of substantial similarity, which includes, for example, matching organizational information, aligning TINs, and comparing Participation Lists. We also solicited comment on percentages of Participation List or TIN similarity that would be sufficient for APM Entities to be considered the same under this policy.

    The following is a summary of the comments we received regarding our proposal that if the same Advanced APM Entity participates in multiple Advanced APMs and if at least one of those Advanced APMs is an episode payment model, we would add the number of unique beneficiaries in the numerator of the episode payment model Advanced APM Entity to the numerator(s) for non-episode payment models in which the Advanced APM Entity participates.

    Comment: Several commenters supported this proposed policy, and one commenter suggested that we define “substantially similar” so that either of the Advanced APM Entity Participation Lists must have at least a specified percentage of the eligible clinicians participating in the other Advanced APM Entity.

    Response: We agree that it would be optimal to have a clear percentage of similarity standard that could apply across all APMs and APM Entities. However, many episode payment models construct Participation Lists or Affiliated Practitioner Lists differently than those in other APMs, so the degree to which the individual eligible clinicians overlap varies widely depending upon the entities.

    After considering the comments and the difficulty of implementing this policy as proposed, we are not finalizing the proposed policy to combine the numerators of Advanced APM Entities that participate in multiple Advanced APMs with substantially similar Participation Lists. We do not believe that we have a reliable, precise mechanism for determining substantial similarity of Participation Lists. Further, we believe that the policy we finalized earlier in this section regarding how we would assess eligible clinicians who are in multiple Advanced APMs serves the intended purpose of this proposed policy because it gives an eligible clinician the opportunity to become a QP in the event that the eligible clinician does not become a QP through any one of the multiple Advanced APMs in which the eligible clinician participates. Therefore, we do not believe that our reconsideration of this policy will limit the opportunity of eligible clinicians to become QPs, and we believe that not finalizing this policy will promote operational and conceptual simplicity.

    d. Use of Methods

    We proposed that we would calculate Threshold Scores for eligible clinicians in an Advanced APM Entity under both the payment amount and patient count methods for each QP Performance Period. We also proposed that we would assign QP status using the more advantageous of the Advanced APM Entity's two scores.

    We believe that both the payment amount and patient count methods produce valid Threshold Scores, even as there may be cases in which Threshold Scores vary enough that different QP determinations could result depending on which is used. In such an event, we do not believe that prioritizing the Threshold Score using one calculation over the other would yield an appropriate, non-arbitrary result. By using the more advantageous of the Threshold Scores achieved, we hope to promote simplicity in QP determinations and to maximize the number of eligible clinicians that attain QP status each year. We solicited comment on the use of the payment and patient count methods for the Medicare Option.

    The following is a summary of the comments we received regarding our proposal to calculate the Threshold Score for each Advanced APM Entity or eligible clinician using both the payment amount and patient count methods and to use the more advantageous of the two scores.

    Comment: Many commenters expressed support for the use of the patient count method, for not double-counting beneficiaries, and for using the more favorable of the payment amount or patient count methods for each Advanced APM Entity because it reduces potential variations in Threshold Scores across practice patterns, certain specialty types, and the costs of services. Some supportive commenters recommended that we monitor results to potentially take action to ensure year-to-year stability in Threshold Scores. One commenter was concerned that the patient count methodology might be easier to game to meet the QP thresholds and encouraged CMS to consider whether this might create problematic incentives.

    Response: As commenters suggest, we will monitor the results of QP determinations to see if there are cases of large disparities between the two methods that may indicate gaming.

    We are finalizing the policy as proposed. We will calculate Threshold Scores for each Advanced APM Entity or eligible clinician using both the payment amount and patient count methods and apply the more advantageous QP result.

    To clarify the meaning of “more advantageous,” we mean that a QP determination takes precedence over a Partial QP determination, which takes precedence over not meeting either threshold. Therefore, if one method results in a QP determination and the other results in a Partial QP determination, we would apply the QP determination and disregard the Partial QP determination. We note that this is distinct from the numerical score, which is not directly comparable across the payment amount and patient count methods due to the different percentage thresholds for the respective methods. A lower numerical patient count Threshold Score may actually result in a more advantageous QP result than a relatively higher numerical payment amount Threshold Score.
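    For illustration, the precedence described above (a QP determination over a Partial QP determination over meeting neither threshold) can be sketched as follows; the labels and comparison logic are a simplified, hypothetical rendering of the rule.

```python
# Simplified illustration of the "more advantageous" rule: the determination
# (QP > Partial QP > neither), not the numerical Threshold Score, is compared
# across the payment amount and patient count methods.

PRECEDENCE = {"QP": 2, "Partial QP": 1, "Neither": 0}

def more_advantageous(payment_method_result: str, patient_method_result: str) -> str:
    """Return whichever QP determination ranks higher; ties return that result."""
    return max(payment_method_result, patient_method_result,
               key=lambda result: PRECEDENCE[result])

# A lower numerical patient count score can still yield the better determination:
# e.g., payment amount method -> "Partial QP", patient count method -> "QP".
print(more_advantageous("Partial QP", "QP"))  # QP
```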

    e. Services Furnished Through CAHs, FQHCs, and RHCs

    (1) Critical Access Hospitals (CAHs)

    We proposed that professional services billed by CAHs under section 1834(g)(2)(B) of the Act (Method II CAH professional services) would count towards the QP determination threshold calculations for both the Medicare payment amount and patient count methods in both the numerator and the denominator, as applicable. These services would constitute “covered professional services” under section 1848(k)(3) of the Act because they are furnished by an eligible clinician and payment is based on the Medicare PFS. This policy is consistent with our treatment of payments for Method II CAH professional services for purposes of the EHR Incentive Program and PQRS adjustments under sections 1848(a)(7) and (8) of the Act, respectively. Under section 1848(a)(7) and (8) of the Act, the PQRS and EHR Incentive Program adjustments are applied to payments for covered professional services furnished by an eligible clinician in a Method II CAH.

    CAHs were established under the Balanced Budget Act (BBA) of 1997 as a separate provider type with a distinct set of Medicare Conditions of Participation and their own payment methodology. CAHs are not subject to the Medicare Inpatient Prospective Payment System (IPPS) or the Medicare Outpatient Prospective Payment System (OPPS). Instead, CAHs are generally paid based on 101 percent of reasonable costs for inpatient services and are paid for outpatient services under one of two methods: the Standard Payment method outlined in section 1834(g)(1) of the Act (Method I), or the Optional Payment Method outlined in section 1834(g)(2) of the Act (Method II). A CAH is paid under Method I unless it elects to be paid under Method II.

    Under Method I, for cost reporting periods beginning on or after January 1, 2004, payments to CAHs are made for outpatient CAH facility services at 101 percent of reasonable costs. Physicians and practitioners receive payment for professional services under the Medicare PFS. A CAH may elect Method II billing, under which the CAH bills Medicare for both facility services and professional services furnished to its outpatients by a physician or practitioner who has reassigned his or her billing rights to the CAH. Even if a CAH makes this election, each physician or practitioner who furnishes professional services to CAH outpatients can choose to either: (1) reassign his or her billing rights to the CAH, agree to be included under the Method II billing, attest in writing that he or she will not bill Medicare for professional services furnished in the CAH outpatient department, and receive payment from the CAH for the professional services; or (2) elect to file claims for his or her professional services with Medicare for standard payment under the Medicare PFS.

    As of January 1, 2004, payment for a physician's professional services provided at a CAH billing under Method II is 115 percent of the allowed amount, after applicable deductions, under the Medicare PFS. For a non-physician practitioner's professional services, the payment amount is 115 percent of the amount that otherwise would be paid for the practitioner's professional services, after applicable deductions, under the Medicare PFS.
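    For illustration, the Method II professional payment amount described above reduces to a simple calculation; the figures below are hypothetical and the treatment of applicable deductions is simplified.

```python
# Simplified illustration of Method II CAH professional service payment:
# 115 percent of the Medicare PFS amount after applicable deductions.
# Figures are hypothetical and claims-level adjustments are omitted.

def method_ii_professional_payment(pfs_allowed_amount: float,
                                   applicable_deductions: float = 0.0) -> float:
    return 1.15 * (pfs_allowed_amount - applicable_deductions)

print(round(method_ii_professional_payment(100.00), 2))  # 115.0
```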

    (2) Rural Health Clinics (RHCs) and Federally Qualified Health Centers (FQHCs)

    RHCs and FQHCs are facilities that furnish services that are typically furnished in an outpatient clinic setting. They are located in areas that have been designated as underserved or health professional shortage areas (HPSAs), and meet other requirements.

    Under section 1833(a)(3) of the Act, RHCs are paid an all-inclusive rate (AIR) based on reasonable costs, subject under section 1833(f) of the Act to a maximum payment per visit that is established by Congress and updated annually based on the percentage change in the Medicare Economic Index (MEI) and subject to annual reconciliation. The per-visit limit does not apply to RHCs determined to be an integral and subordinate part of a hospital with fewer than 50 beds. Laboratory tests (excluding venipuncture) and technical components of RHC services are paid separately. The RHC payment limit per visit for CY 2016 is $81.32, effective January 1, 2016, through December 31, 2016.

    The FQHC Medicare benefit was added when section 1861(aa) of the Act was amended by section 4161 of the Omnibus Budget Reconciliation Act of 1990. FQHCs are paid according to the FQHC PPS set out under section 1834(o) of the Act, in which Medicare pays a national encounter-based rate per beneficiary per day, with some adjustments. The unadjusted 2016 PPS rate is $160.60.
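    For illustration, the RHC and FQHC payment structures described in this subsection can be summarized in a simplified sketch using the CY 2016 figures quoted above; annual reconciliation, the exception for certain hospital-based RHCs, and FQHC PPS adjustments are omitted.

```python
# Simplified sketch of the two payment structures described above (CY 2016 figures).
# Omits annual reconciliation, the exception for RHCs that are integral and
# subordinate parts of hospitals with fewer than 50 beds, and FQHC PPS adjustments.

RHC_PER_VISIT_LIMIT_2016 = 81.32    # RHC per-visit payment limit
FQHC_PPS_BASE_RATE_2016 = 160.60    # unadjusted FQHC PPS encounter rate

def rhc_payment_per_visit(cost_based_air: float) -> float:
    """RHC all-inclusive rate based on reasonable costs, capped at the per-visit limit."""
    return min(cost_based_air, RHC_PER_VISIT_LIMIT_2016)

def fqhc_payment_per_encounter(adjustment_factor: float = 1.0) -> float:
    """FQHC PPS national encounter rate per beneficiary per day, with adjustments."""
    return FQHC_PPS_BASE_RATE_2016 * adjustment_factor

print(rhc_payment_per_visit(95.00))        # 81.32
print(fqhc_payment_per_encounter())        # 160.6
```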

    We proposed that professional services furnished at RHCs and FQHCs that participate in ACOs, and are reimbursed under the RHC AIR or FQHC PPS (respectively), be counted towards the QP determination calculations under the patient count method but not under the payment amount method.

    In certain Medicare ACO APMs, RHC and FQHC services can be counted for purposes of attributing beneficiaries to an ACO. Therefore, we proposed to include beneficiaries attributed to an Advanced APM Entity in full or in part because of services furnished by RHCs or FQHCs in the patient counts used for QP determination calculations.

    As previously stated, section 1833(z)(2)(D) of the Act permits us to use patient counts in lieu of payments when determining whether an eligible clinician is a QP “as the Secretary determines appropriate.” Our proposal to include the professional services furnished by eligible clinicians at RHCs and FQHCs in the QP threshold calculations for the patient count method is essential to assure consistency with this program and existing APM attribution methodologies. An Advanced APM Entity is responsible for the cost and quality of care for all beneficiaries attributed to an APM Entity, including all professional services furnished to such beneficiaries, regardless of whether or not attribution was based on services furnished by an eligible clinician or by an RHC or FQHC. We believe such beneficiaries are clearly served through the Advanced APM Entity, and it would be potentially confusing to eligible clinicians and Advanced APM Entities to track this distinction strictly for purposes of QP determination. We also believe that it would be unduly burdensome and impractical for us to develop and maintain a separate list of beneficiaries aligned to each Advanced APM Entity from the full list of beneficiaries for whom an Advanced APM Entity is responsible under an Advanced APM.

    Because professional services furnished by eligible clinicians at RHCs and FQHCs are not reimbursed under, or based on, the Medicare PFS, professional services furnished in these settings do not constitute “covered professional services” under section 1848(k)(3)(A) of the Act. In the Medicare Payment Amount Method, where payments for specified covered professional services are summed, only payments for covered professional services can be included.

    We believe that our proposal will continue to encourage the development of APMs that span rural and/or underserved areas. We solicited comment on this proposal.

    The following is a summary of the comments we received regarding our proposal to (1) include payments for Method II CAH professional services furnished by eligible clinicians in an Advanced APM Entity in the numerator of the Threshold Score for the payment amount method and (2) to allow Method II CAH professional services furnished by eligible clinicians in an Advanced APM Entity and professional services furnished by eligible clinicians in an Advanced APM Entity at RHCs and FQHCs to place a beneficiary in the numerator of the Threshold Score for the patient count method.

    Comment: A few commenters expressed support for the treatment of CAH, RHC, and FQHC services that enables them to be a factor in the patient count method. One commenter stated that FQHC clinicians should be eligible for the APM Incentive Payment.

    Response: If clinicians in RHCs or FQHCs meet the definition of eligible clinician set forth in section II.F.3. of this final rule with comment period and participate in an Advanced APM, then they will be considered for QP determination as part of the Advanced APM Entity along with all the other eligible clinicians in the group. The calculation of the APM Incentive Payment amount for an eligible clinician that practices at an RHC or FQHC will be subject to the specific criteria, which are based on Part B covered professional services, for calculating the APM Incentive Payment amounts outlined in section II.F.8.c. of this final rule with comment period.

    We are finalizing the policy as proposed. We will include payments for Method II CAH professional services furnished by eligible clinicians in an Advanced APM Entity in the numerator of the Threshold Score for the payment amount method. We will also count a beneficiary in the numerator of the Threshold Score for the patient count method if the beneficiary receives Method II CAH professional services furnished by eligible clinicians in an Advanced APM Entity or professional services furnished by eligible clinicians in an Advanced APM Entity at an RHC or FQHC.

    7. Combination All-Payer and Medicare Payment Threshold Option

    a. Overview

    Beginning in 2021, in addition to the Medicare Option, eligible clinicians may alternatively become QPs through the All-Payer Combination Option, described under section 1833(z)(2)(B)(ii) and (C)(ii) of the Act as the Combination All-Payer and Medicare Payment Threshold Option. Thus, there will be two avenues for eligible clinicians to become QPs—the Medicare Option and the All-Payer Combination Option. An eligible clinician need only meet the QP threshold under one of the two options to be a QP for the payment year. The All-Payer Combination Option provides an incentive for eligible clinicians to participate in payment arrangements with payers other than Medicare Part B that have payment designs similar to Advanced APMs. The All-Payer Combination Option uses both the methods described in the Medicare Option and methods that calculate payments for all services from all payers, with certain exceptions, that are attributable to participation in both Advanced APMs and Other Payer Advanced APMs.

    Although the statutory QP threshold for an eligible clinician to be a QP (the QP Payment Amount Threshold) under the Medicare Option increases from 25 percent in 2019 and 2020 under section 1833(z)(2)(A) of the Act, to 50 percent in 2021 and 2022 under section 1833(z)(2)(B)(i) of the Act, to 75 percent beginning in 2023 under section 1833(z)(2)(C)(i) of the Act, the All-Payer Combination Option allows eligible clinicians with lower levels of participation in Advanced APMs to become QPs through sufficient participation in Other Payer Advanced APMs with payers such as State Medicaid programs and commercial payers, including Medicare Advantage plans. Section 1833(z)(2)(D) of the Act also allows the QP determination to be based on payment amount or on counts of patients in lieu of payments using the same or similar percentage criteria. These QP thresholds are presented in Tables 36 and 37 of this final rule with comment, and the process for the payment amount method is shown in Figures H and I of this final rule with comment. We may reassess the QP Patient Count Thresholds in future years based on the experience gained during the first years of operations.
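    For illustration, the statutory QP Payment Amount Thresholds recited above (25 percent for payment years 2019 and 2020, 50 percent for 2021 and 2022, and 75 percent beginning in 2023) can be expressed as a simple lookup; the separately established patient count thresholds in Table 37 are not repeated here.

```python
# Statutory QP Payment Amount Thresholds under the Medicare Option, as recited
# above: 25% for 2019-2020, 50% for 2021-2022, 75% for 2023 and later years.

def qp_payment_amount_threshold(payment_year: int) -> int:
    if payment_year in (2019, 2020):
        return 25
    if payment_year in (2021, 2022):
        return 50
    if payment_year >= 2023:
        return 75
    raise ValueError("QP determinations begin with payment year 2019")

print([qp_payment_amount_threshold(year) for year in range(2019, 2025)])
# [25, 25, 50, 50, 75, 75]
```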

    In summary, in addition to becoming QPs through the Medicare Option, eligible clinicians may alternatively become QPs through the All-Payer Combination Option if the following steps occur as described in the associated sections of the proposed rule: (1) the eligible clinician submits to CMS sufficient information on all relevant payment arrangements with other payers; (2) based upon that information CMS determines that at least one of those payment arrangements is an Other Payer Advanced APM; (3) the eligible clinician meets the relevant QP thresholds by having sufficient payments or patients attributed to a combination of participation in Other Payer Advanced APMs and Advanced APMs.

    [Graphics ER04NO16.015 and ER04NO16.016 (tables/figures referenced in the preceding paragraphs)]

    Sections 1833(z)(2)(B)(ii) and (C)(ii) of the Act describe the payment amount method for making the QP determination under the All-Payer Combination Option. For purposes of making a QP determination under this option, a QP is an eligible clinician for whom the specified percent of the sum of combined Medicare payments and payments from all other payers for items and services furnished by the eligible clinician during the most recent period for which data are available (which may be less than a year) is attributable to Advanced APMs and Other Payer Advanced APMs that meet the criteria set forth in this section.
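    For illustration, the all-payer payment amount calculation described in the preceding paragraph can be sketched as follows; the aggregation of payments and the treatment of excluded payers are simplified, and all names and figures are hypothetical.

```python
# Illustrative sketch of the All-Payer Combination Option payment amount method:
# the specified percent of the sum of combined Medicare and all other payments
# (with certain exclusions) must be attributable to Advanced APMs and
# Other Payer Advanced APMs. Names and figures are hypothetical.

def all_payer_threshold_score(medicare_advanced_apm_payments: float,
                              other_payer_advanced_apm_payments: float,
                              total_medicare_payments: float,
                              total_other_payer_payments: float) -> float:
    numerator = medicare_advanced_apm_payments + other_payer_advanced_apm_payments
    denominator = total_medicare_payments + total_other_payer_payments
    return 100.0 * numerator / denominator

# Example: $200k of $400k Medicare payments through Advanced APMs, plus
# $150k of $300k other-payer payments through Other Payer Advanced APMs.
print(f"{all_payer_threshold_score(200_000, 150_000, 400_000, 300_000):.0f}%")  # 50%
```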

    The following is a summary of the comments we received regarding our overall approach to the All-Payer Combination Option.

    Comment: Several commenters supported aligning policies for the All-Payer Combination Option with those for the Medicare Option and emphasized the value of consistent models, measures, and reporting mechanisms across payers. One commenter appreciated our recognition of eligible clinicians engaging in APMs with payers other than Medicare and recommended that CMS minimize administrative burdens associated with such eligible clinicians demonstrating their participation in Other Payer Advanced APMs. Another commenter supported the proposal to implement the All-Payer Combination Option beginning in 2021.

    By contrast, one commenter expressed concern about extending Advanced APMs to other payer arrangements by identifying Other Payer Advanced APMs. Another commenter noted that we have historically emphasized the importance of engaging multiple payers in payment reform, but have never suggested that commercial payers must offer arrangements identical to those CMS offers or replicate CMS payment models. One commenter opposed using the same criteria to determine both Advanced APMs and Other Payer Advanced APMs, noting that these criteria would require large-scale renegotiation of payer contracts, which may not be within an organization's control. One commenter recommended CMS abandon the All-Payer Combination Option. Two commenters suggested that the All-Payer Combination Option be made effective earlier than payment year 2021, preferably payment year 2019. Another commenter encouraged CMS to accept risk-based non-Medicare contracts as Other Payer Advanced APMs beginning in payment year 2019. The commenter stated that clinicians who have invested in the transition to value-based care with many of their payers should not have to wait until 2021 to be rewarded.

    Response: We appreciate the support for our proposed approach. We recognize that the All-Payer Combination Option will require adjustments and transitions. However, the All-Payer Combination Option is required by the statute, and we believe that it represents a promising opportunity for those participating in certain other payer arrangements to participate in the Advanced APM framework. The statute specifies the criteria for Other Payer Advanced APMs, and those criteria generally mirror the Advanced APM criteria. However, the Other Payer Advanced APM criteria only address certain aspects of payment arrangements, leaving substantial room for flexibility. Just as they do for Advanced APMs, the criteria allow for exploration and testing of alternative payment arrangements that can improve quality and reduce cost. Finally, the statute does not allow us to make the All-Payer Combination Option effective prior to payment year 2021.

    Comment: A commenter suggested CMS clearly define the process for determining whether a payment arrangement is an Other Payer Advanced APM. One commenter encouraged CMS to be flexible in the application of Other Payer Advanced APM criteria in order to encourage other payers to innovate. One commenter noted that, among the 24 APMs reviewed by CMS, only six met all of the proposed criteria to be an Advanced APM, and stated that, given these limitations, CMS does not have the flexibility to bring as many physicians as possible into Advanced APMs. As a result, this commenter believes Other Payer Advanced APMs and PFPMs might become more important to CMS goals, and recommended offering flexibility in the Other Payer Advanced APM criteria. One commenter recommended CMS provide additional flexibility to recognize as Other Payer Advanced APMs private payer reimbursement arrangements that accomplish high quality and efficient care but may not meet the Other Payer Advanced APM criteria. Another commenter recommended that the approach to assessing whether a payment arrangement will qualify as an Other Payer Advanced APM should be phased in so that initially, participation in any payment arrangement that meets some of the criteria could be considered an Other Payer Advanced APM, and then all three criteria would apply at a later time.

    Response: We appreciate the comments and the widespread desire to make as many Other Payer Advanced APMs available as possible. The statute requires us to use the three criteria discussed in section II.F.7.b. of this final rule with comment period as the basis for determining whether a payment arrangement is an Other Payer Advanced APM. We believe our proposed and final policies adhere to the statute as well as to our principles and reflect the commenters' suggestions to allow substantial flexibility in the design of payment arrangements when implementing the Quality Payment Program.

    Comment: Several commenters stated that CMS should include a thorough proposal of the criteria for Other Payer Advanced APMs in the CY 2018 PFS and/or issue subregulatory guidance.

    Response: We appreciate the comments. We have not yet determined the exact vehicles through which we will establish policies and publish more information in the future, but we intend to inform the public regarding developments in the All-Payer Combination Option and Other Payer Advanced APM criteria through future rulemaking and subregulatory guidance.

    Comment: One commenter believes that the inclusion of all-payer data in APMs was the most important provision of the proposed rule and stated that the number of different measures and incentives across all payers created unnecessary burden and made comparison across APMs and payers difficult. The same commenter recommended a standardized attribution model to ensure equitable treatment of models across payers. Some commenters requested additional guidance on the data collection requirements and the determination of Other Payer Advanced APMs as soon as practicable. Another commenter expressed concern that the proposed All-Payer Combination Option extends the CMS collection of, and access to, data beyond those of Medicare patients. One commenter supported multi-payer engagement but was concerned about the willingness of commercial payers to support value-based arrangements. This commenter implored CMS to mandate that commercial payers share full claims data sets to allow clinicians to manage risk and patient populations.

    Response: We do not believe that we are creating an unnecessary burden, but rather that we are proposing an approach to implementing the statute that is clear and flexible enough to be applicable to the diversity of payment arrangements. We do not believe a standardized attribution model is appropriate at this stage given the breadth of payment arrangements across payers. We have not yet determined the exact vehicles through which we will establish policies and publish more information in the future, but we intend to inform the public regarding developments in the All-Payer Combination Option and Other Payer Advanced APM criteria through future rulemaking and subregulatory guidance.

    Comment: Several commenters requested that some flexibility be provided to states in assessing their models. One commenter said that the three Other Payer Advanced APM criteria are broadly reflective of the direction states are seeking to move. One commenter noted that states and clinicians are at different points along a continuum towards their ability to meet the criteria, and states are implementing changes in a manner that reflects local health care markets and the Medicaid populations they serve. The commenter stated that recognition of the variation among states in the development and implementation of APMs is essential to accommodate Medicaid APMs, given the unique needs of Medicaid beneficiaries, different health care provider risk-bearing capacity, and health care provider infrastructure issues that states may confront. Specifically, the commenter recommended that there should be a clear optional pathway for states to contact CMS in order to have Other Payer Advanced APMs be identified or deemed as such. Some commenters suggested that Medicaid APMs developed under the CPC+ model or the State Innovation Models (SIM) should be considered Other Payer Advanced APMs.

    Another commenter recommended that CMS establish a process to approve state-operated APMs so that clinicians can be aware of which payment arrangements will be Other Payer Advanced APMs. One commenter believes CMS needs to clearly articulate a process for how it will determine Other Payer Advanced APMs in states that are engaged with CMS in the development of Other Payer Advanced APMs.

    Response: We acknowledge that there is variation among Medicaid programs in the development and implementation of alternative payment models, which is in part due to varying state circumstances. We seek comment and input on the potential creation of a separate pathway to determine whether Medicaid APMs are Other Payer Advanced APMs prior to a QP Performance Period for the All-Payer Combination Option.

    Comment: One commenter supported the proposed definition for Medicaid APMs stating that it provides some flexibility for states to implement new payment models and align core requirements for Medicaid APMs with the broader Advanced APM and Other Payer Advanced APM criteria. One commenter requested additional flexibility and consideration for state models, such as population-based payment models.

    One commenter supported the proposal to assess Medicaid APMs under the Other Payer Advanced APM criteria and to include the Medicaid APM as part of the All-Payer Combination Option. This commenter agreed that CMS should generally defer to states in their design of these payment arrangements. The commenter also agreed with the proposal that if a state does not offer a Medicaid APM that meets the Other Payer Advanced APM criteria, then Medicaid payments and patients would be excluded from the All-Payer Combination Option calculations. Another commenter supported the criteria for Other Payer Advanced APMs and recommended including Medicare Advantage and state programs created through the Medicaid Health Home State Plan Option in the All-Payer Combination Option calculations.

    Response: We appreciate the comments and support for our proposals. We believe that the Other Payer Advanced APM criteria allow for flexibility in the design of Medicaid APMs that can be considered Other Payer Advanced APMs. However, as discussed in this section, we are interested in conducting further analysis and seeking further comment on the appropriate criteria for certain payers.

    Comment: One commenter encouraged CMS to work with stakeholders in creating and streamlining a process for assessing Other Payer Advanced APMs given the inherent complexity of these arrangements. Some commenters also encouraged CMS to work with state Medicaid agencies on a parallel process to approve state-supported models, whether through their FFS program or managed care arrangements. Another commenter believes that clinicians will have difficulty determining which of their contracts count as Other Payer Advanced APMs in time for them to know if they should try to alter a contract to make it an Other Payer Advanced APM. To resolve this, the commenter suggested that CMS have a process whereby payers submit models to CMS for basic approval of the specifications as Other Payer Advanced APMs in advance of parties finalizing contracts. Two commenters suggested CMS work with the Health Care Payment Learning and Action Network (LAN) to support this process.

    Response: We appreciate the comments. We seek public comment on the possibility of establishing a process to prospectively engage in design and review of payment arrangements to determine if they meet the criteria for being Other Payer Advanced APMs, particularly regarding the assessment of Medicaid APMs. In addition, we will continue to communicate our work to the LAN.

    After considering public comments, we are finalizing our overall approach to the All-Payer Combination Option as proposed. We are seeking additional comments on the creation of an optional pathway for states to seek a determination from CMS on whether a Medicaid payment arrangement is an Other Payer Advanced APM. We are also seeking additional comments on the overall process for reviewing payment arrangements in order to determine whether they are Other Payer Advanced APMs.

    b. Other Payer Advanced APM Criteria

    (1) In General

    According to section 1833(z)(2)(B)(iii) of the Act, a payment arrangement is an Other Payer Advanced APM if it meets three criteria:

    • CEHRT is used;

    • Quality measures comparable to measures under the MIPS quality performance category apply; and

    • The payment arrangement either: (1) requires APM Entities to bear more than nominal financial risk if actual aggregate expenditures exceed expected aggregate expenditures; or (2) for beneficiaries under title XIX, is a Medicaid Medical Home Model that meets criteria comparable to Medical Home Models expanded under section 1115A(c) of the Act.

    Payment arrangements under any payer other than traditional Medicare, including Medicare Advantage and other Medicare-funded private plans, will be Other Payer Advanced APMs if they meet all three criteria.
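    For illustration, the three statutory criteria listed above amount to a structural check on the payment arrangement; the following sketch uses hypothetical field names and does not capture the detailed CEHRT, quality measure, and financial risk standards finalized later in this section.

```python
# Illustrative check of the three Other Payer Advanced APM criteria listed above.
# Field names are hypothetical; the detailed standards for each criterion are
# specified elsewhere in this section.

def is_other_payer_advanced_apm(requires_cehrt_use: bool,
                                applies_mips_comparable_quality_measures: bool,
                                bears_more_than_nominal_risk: bool,
                                is_qualifying_medicaid_medical_home: bool) -> bool:
    risk_or_medical_home = (bears_more_than_nominal_risk
                            or is_qualifying_medicaid_medical_home)
    return (requires_cehrt_use
            and applies_mips_comparable_quality_measures
            and risk_or_medical_home)

print(is_other_payer_advanced_apm(True, True, False, True))   # True
print(is_other_payer_advanced_apm(True, False, True, False))  # False
```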

    (2) Medicaid APMs

    We proposed to define a Medicaid APM as a payment arrangement under title XIX that meets the criteria to be an Other Payer Advanced APM as proposed. States can choose from different authorities in title XIX when implementing new payment models. We believe this proposal would provide some flexibility for States but align the core requirements for Medicaid APMs with the broader Advanced APM and Other Payer Advanced APM criteria. Otherwise, we intend to generally defer to states in their design of payment arrangements.

    (3) Medicaid Medical Home Models

    We proposed that a Medicaid Medical Home Model is a Medical Home Model that is operated under title XIX instead of under section 1115A of the Act. We specifically identified Medicaid Medical Home Models because section 1833(z) of the Act mentions both medical homes generally and medical homes for beneficiaries under title XIX several times, but does not define the terms. Medicaid Medical Home is also not defined in title XIX or in Medicaid laws or regulations. Therefore, we needed to define the terms because of their importance in the Quality Payment Program. This definition of Medicaid Medical Home Model applies only for the purposes of the Quality Payment Program. We proposed that a Medicaid Medical Home Model must have the following elements at a minimum:

    • Model participants include primary care practices or multispecialty practices that include primary care physicians and practitioners and offer primary care services, and

    • Empanelment of each patient to a primary clinician.

    In addition to these elements, we proposed that a Medicaid Medical Home Model must have at least four of the following elements:

    • Planned chronic and preventive care.

    • Patient access and continuity.

    • Risk-stratified care management.

    • Coordination of care across the medical neighborhood.

    • Patient and caregiver engagement.

    • Shared decision-making.

    • Payment arrangements in addition to, or substituting for, FFS payments (for example, shared savings, population-based payments).

    This definition of Medicaid Medical Home Model applies only for the purposes of the Quality Payment Program, and could be defined differently for other purposes. To define these terms, we reviewed existing and past Medical Home Models CMS developed under section 1115A of the Act, including the Comprehensive Primary Care Initiative (CPC). In addition, we reviewed a variety of other sources including several from the National Committee for Quality Assurance, the Joint Principles of the Patient-Centered Medical Home (a joint statement by the American Academy of Family Physicians, the American Academy of Pediatrics, the American College of Physicians, and the American Osteopathic Association), and the Agency for Healthcare Research and Quality. Our proposed definition of Medicaid Medical Home Model uses common elements from these sources. We believe that using a common set of elements ensures general comparability between Medical Home Models and Medicaid Medical Home Models while maintaining flexibility for the states under title XIX. We did not propose adhering to any particular organization's accreditation process for Medical Home Models or Medicaid Medical Home Models. We believe that such a policy would provide limited additional benefit while unnecessarily restricting state innovation. However, it is possible that accredited models, such as those certified by the National Committee for Quality Assurance, may also meet the definition of a Medicaid Medical Home Model. Medicaid Medical Home Models can be Other Payer Advanced APMs if they meet the criteria.
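    For illustration, the element test in the proposed definition (both required elements plus at least four of the seven additional elements listed above) can be sketched as follows; the element labels are shorthand and the sketch is not a determination tool.

```python
# Illustrative check of the proposed Medicaid Medical Home Model definition:
# both required elements plus at least four of the seven additional elements.
# Labels are shorthand for the elements listed above.

REQUIRED = {"primary_care_participants", "empanelment_to_primary_clinician"}
ADDITIONAL = {"planned_chronic_preventive_care", "patient_access_continuity",
              "risk_stratified_care_management", "care_coordination",
              "patient_caregiver_engagement", "shared_decision_making",
              "non_ffs_payment_arrangements"}

def meets_medicaid_medical_home_elements(model_elements: set) -> bool:
    has_required = REQUIRED <= model_elements
    additional_count = len(model_elements & ADDITIONAL)
    return has_required and additional_count >= 4

example = REQUIRED | {"planned_chronic_preventive_care", "patient_access_continuity",
                      "risk_stratified_care_management", "shared_decision_making"}
print(meets_medicaid_medical_home_elements(example))  # True
```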

    We solicited comment on the proposed definitions of Medicaid APMs and Medicaid Medical Homes Models.

    The following is a summary of the comments we received regarding our definitions of Medicaid APMs and Medicaid Medical Home Models. Additional comments on the definition of Medicaid Medical Home Models were considered as part of the response to the definition of Medical Home Models in section II.F.3. of this final rule with comment period.

    Comment: One commenter applauded CMS for developing an appropriate, physician-friendly, and patient-centered framework for Medicaid Medical Home Models, and agreed with CMS' proposal not to mandate a specific method or accreditation process for recognizing Medicaid Medical Home Models. Another commenter supported the proposal regarding Medicaid Medical Home Models.

    Several commenters believed CMS should provide states flexibility in designing and implementing their Medicaid Medical Home Models. One commenter suggested the rule should enable states to deem and define their own patient-centered medical home programs and determine if it is a Medicaid Medical Home Model. Another commenter recommended that Other Payer Advanced APMs should include state-sponsored patient-centered medical home models that have demonstrated improvements in cost, quality, and patient experience through an evaluative process. Another commenter recommended CMS permit greater flexibility in enabling Medicaid Medical Home Models to qualify as Other Payer Advanced APMs. One commenter recommended state-based medical homes models should be considered Other Payer Advanced APMs.

    One commenter recommended CMS recognize robust regional programs when assigning credit for nationally recognized medical home models. Another commenter recommended that the proposed Medicaid Medical Home Model definition be expanded to include partnerships across sectors designed to improve population health and achieve health equity.

    One commenter recommended that CMS require Other Payer Advanced APMs to meet all of the proposed primary care practice criteria and characteristics required of Medical Home Models.

    Response: We appreciate these comments. We believe the proposed definition of Medicaid Medical Home Model provides states with significant flexibility for implementation. Nothing in the definition precludes states from deeming and defining their own medical home programs. As proposed, the rule does not endorse any specific certification process for Medicaid Medical Home Models. However, we retain the authority to determine whether any payment model under title XIX meets our criteria to be a Medicaid Medical Home Model or an Other Payer Advanced APM. We do not believe at this time that it is appropriate to create additional criteria for Other Payer Advanced APMs beyond those set forth in the statute. We are adopting a definition for Medicaid Medical Home Model because it is necessary to interpret an undefined term used in the statute and because the term identifies a subset of payment arrangements that are treated slightly differently under the Other Payer Advanced APM criteria.

    Comment: Some commenters requested CMS consider options for categorizing IHS, Tribal, and Urban Indian health programs as Other Payer Advanced APMs. One commenter questioned how the financial risk requirement will impact IHS, Tribal, and Urban Indian facilities.

    Response: We support the pursuit of developing Other Payer Advanced APMs under a variety of health care payment programs. Payment arrangements not included under Medicare Part B could potentially qualify as Other Payer Advanced APMs for QP Performance Periods in 2019 and later.

    (4) Use of Certified Electronic Health Record Technology

    To be an Other Payer Advanced APM, as described under section 1833(z)(2)(B)(iii)(II)(bb) and (z)(2)(C)(iii)(II)(bb) of the Act, payments must be made under arrangements in which CEHRT is used. This requirement is slightly different than the requirement for Advanced APMs that “requires participants in such model to use certified EHR technology (as defined in section 1848(o)(4) of the Act),” as specified in section 1833(z)(3)(D)(i)(I) of the Act. Although the statutory requirements are phrased slightly differently, we believe that there is value in keeping the two standards—for Advanced APMs and Other Payer Advanced APMs—as similar as possible.

    We proposed that payment arrangements would meet this Other Payer Advanced APM criterion under sections 1833(z)(2)(B)(iii)(II)(bb) and (z)(2)(C)(iii)(II)(bb) of the Act by requiring participants to use CEHRT as defined for MIPS and APMs under § 414.1305. This approach is consistent with the approach for Advanced APMs as described in section II.F.4.b.(1) of this final rule with comment period. In the 2015 EHR Incentive Programs final rule (80 FR 62872 through 62873), we established the definition of CEHRT for EHR technology that must be used by eligible clinicians to meet the meaningful use objectives and measures in specific years. In the proposed rule, we proposed to adopt the specifications from within the current definition of CEHRT in our regulation at § 414.1305 for eligible clinicians participating in MIPS or in APMs. This definition is identical to the definition for use by eligible hospitals and CAHs and Medicaid eligible clinicians in the EHR Incentive Programs.

    In accordance with section 1833(z)(2)(C)(iii)(II) of the Act, we proposed that an Other Payer Advanced APM must require at least 75 percent of eligible clinicians in each participating APM Entity (or each hospital if hospitals are the APM participants) to use the certified health IT functions outlined in the proposed definition of CEHRT to document and communicate clinical care with patients and other health care professionals.

    We solicited comment on the proposed definition of CEHRT for Advanced APMs and Other Payer Advanced APMs and whether they should be the same for both. We solicited comment on the proposed method for Other Payer Advanced APMs to meet the CEHRT use criterion.

    The following is a summary of the comments we received regarding our proposal to require a payment arrangement to use CEHRT in order to become an Other Payer Advanced APM.

    Comment: One commenter agreed with the importance of leveraging EHRs and clinical data to improve the coordination of care and improved outcomes through APMs. The same commenter encouraged CMS to use the full extent of its regulatory authority to build on existing efforts to support adoption and use of HIT among behavioral health and Long Term Support Service (LTSS) providers. The commenter appreciated the steps CMS has already taken to encourage investment in the HIT infrastructure for key Medicaid providers and suppliers, including behavioral health and LTSS providers.

    An additional commenter stated CMS should not dictate which edition of CEHRT must be included in a third-party contract. Likewise, the commenter stated that CMS should not lock in a level of participation at this time, but instead monitor performance and make a determination in a later rule. Another commenter expressed concern about an increased EHR burden on clinicians because of the cost of implementing the technology. One commenter recommended requiring that Other Payer Advanced APMs meet all measures that currently exist in Meaningful Use standards, including providing access to discrete records, referencing disease registries, receiving care alerts, providing decision support, providing access to lab results, and supporting a patient portal.

    Response: We appreciate the comments. Sections 1833(z)(2)(B)(iii)(II)(bb) and (z)(2)(C)(iii)(II)(bb) of the Act specify that to be an Other Payer Advanced APM, the arrangement must be one in which CEHRT is used. By aligning this requirement with the CEHRT requirements in the advancing care information and Advanced APM sections of this rule, this criterion avoids adding different EHR-related requirements that Other Payer Advanced APMs must place on their participants. Under this CEHRT criterion, we believe there is significant flexibility for other payers to tailor HIT requirements to their particular populations and goals. We do not believe any additional requirements are warranted.

    Comment: One commenter agreed the definitions for CEHRT should be the same for both Advanced APMs and Other Payer Advanced APMs. One commenter supported CMS' proposal to align the definition of CEHRT for purposes of MIPS, Advanced APMs, and other payer arrangements so as not to place undue burden on eligible clinicians participating in Other Payer Advanced APMs. The commenter requested CMS clarify the proposed method for Other Payer Advanced APMs to meet the CEHRT use criterion. The commenter also requested confirmation that, as with Advanced APMs, the requirements relate to the terms of the payment arrangement, not directly to the performance of each APM Entity or eligible clinician. One commenter suggested that other payers should be able to require that clinicians in any APM Entity using CEHRT use the functionality of the CEHRT so that they can report on applicable objectives and measures specified for the advancing care information performance category under MIPS. Another commenter expressed concern that EHR systems generally do not communicate well between physicians, laboratories, and hospitals, and believes that eligible clinicians should not be penalized for these system problems.

    Response: We appreciate the comments and support for alignment of criteria between Advanced APMs and Other Payer Advanced APMs. Regarding the method for meeting this criterion, we confirm for commenters that the CEHRT requirement in this final rule, like all Other Payer Advanced APM criteria, is a requirement of the payment arrangement. Payment arrangements, not clinicians or entities, are determined to be Other Payer Advanced APMs. Therefore, a payer retains the flexibility to specify the use of CEHRT in a variety of ways that may be more stringent than this criterion requires, and the payment arrangement would still meet this criterion to be an Other Payer Advanced APM so long as the payer ascertains that the required percentage of eligible clinicians in each APM Entity use CEHRT. Accordingly, we do not penalize individual clinicians for performance under this criterion. These requirements exist only to determine whether the structure of a payment arrangement meets the Other Payer Advanced APM criteria. These topics are discussed in more depth in section II.F.4.b.(1) of this final rule with comment.

    Comment: One commenter objected to our proposal to require a threshold for CEHRT use as is required for Advanced APMs, and thought that such a threshold was less clearly supported by the statute in the case of Other Payer Advanced APMs. Another commenter recommended that CMS relax the requirement that 75 percent of the clinicians use CEHRT to instead allow for glide paths that are tailored to each Other Payer Advanced APM's particular needs and capabilities. For example, the commenter suggested that payers should be required to reach 75 percent within the first 3 to 6 years of implementation. One commenter requested that states have the ability to set the CEHRT use percent criterion that defines participation in Other Payer Advanced APMs. The commenter believed that a 75 percent threshold is too high given the lack of CEHRT uptake among key Medicaid clinicians.

    Response: We appreciate the comments. As part of the alignment with CEHRT requirements across the Quality Payment Program, we are reducing the level of CEHRT use that an Other Payer Advanced APM must require of eligible clinicians in each APM Entity from 75 percent to 50 percent.

    After considering public comments, we are modifying our proposal and finalizing that to be an Other Payer Advanced APM, a payment arrangement must require at least 50 percent of participating eligible clinicians in each APM Entity to use CEHRT to document and communicate clinical care.
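    For illustration, the finalized CEHRT use requirement is a 50 percent test applied to the terms of the payment arrangement for each APM Entity; the following sketch uses hypothetical names and figures.

```python
# Illustrative check of the finalized CEHRT criterion: the payment arrangement
# must require at least 50 percent of participating eligible clinicians in each
# APM Entity to use CEHRT. Names and figures are hypothetical.

CEHRT_USE_THRESHOLD = 0.50  # finalized level (reduced from the proposed 75 percent)

def entity_meets_cehrt_requirement(clinicians_using_cehrt: int,
                                   participating_eligible_clinicians: int) -> bool:
    if participating_eligible_clinicians == 0:
        return False
    return (clinicians_using_cehrt / participating_eligible_clinicians) >= CEHRT_USE_THRESHOLD

print(entity_meets_cehrt_requirement(6, 10))  # True (60 percent)
print(entity_meets_cehrt_requirement(4, 10))  # False (40 percent)
```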

    (5) Application of Quality Measures Comparable to Those Under the MIPS Quality Performance Category

    Sections 1833(z)(2)(B)(iii)(II)(aa) and (C)(iii)(II)(aa) of the Act specify that, to be an Other Payer Advanced APM, a payment arrangement must apply quality measures comparable to those under the MIPS quality performance category. We proposed that the quality measures on which the Other Payer Advanced APM bases payment must include at least one of the following types of measures, provided that they have an evidence-based focus and are reliable and valid:

    (1) Any of the quality measures included on the proposed annual list of MIPS quality measures;

    (2) Quality measures that are endorsed by a consensus-based entity;

    (3) Quality measures developed under section 1848(s) of the Act;

    (4) Quality measures submitted in response to the MIPS Call for Quality Measures under section 1848(q)(2)(D)(ii) of the Act; or

    (5) Any other quality measures that CMS determines to have an evidence-based focus and are reliable and valid.

    We proposed that not all quality measures in an APM are required to be “MIPS comparable” and not all payments under the APM must be based on comparable measures. This approach is similar to the requirement for Advanced APMs as described in section II.F.4.b.(2) of this final rule with comment period. We believe that under the proposed policy, Other Payer Advanced APMs would retain sufficient freedom to innovate in paying for services and measuring quality. In other words, this criterion only sets standards for payments tied to quality measurement, not other methods of payment. Conversely, a payment arrangement may test new quality measures that do not fall into the MIPS-comparable standard. So long as the payment arrangement meets the requirements set forth in this criterion, there is no additional prescription for how the payment arrangement tests additional measures that may or may not meet the standards under this criterion.

    We want to encourage the use of outcome measures for quality performance assessment in Other Payer Advanced APMs, so we also proposed that an Other Payer Advanced APM must include at least one outcome measure if an appropriate measure (that is, the measure addresses the specific patient population and is specified for the participants' clinical setting) is available on the MIPS list of measures for that specific QP Performance Period.

    We believe that this framework will provide other payers the flexibility needed to ensure that their quality performance metrics meet their unique goals. We solicited comment on this proposed criterion.

    The following is a summary of the comments we received regarding our proposal that, to be an Other Payer Advanced APM, a payment arrangement must apply quality measures comparable to those under the MIPS quality performance category.

    Comment: One commenter agreed with the proposed flexibility in selecting quality measures that are evidence-based, reliable, and valid. One commenter supported the proposal for Other Payer Advanced APMs to require eligible clinicians to report at least one quality measure comparable to measures included in the MIPS measures list. Another commenter stated CMS should consider Medicaid Core Measures to be MIPS-comparable and incorporate a review of private payer measures. The same commenter stated CMS should require an outcome measure, regardless of whether it is a measure included in the MIPS measure list. Another commenter also stated that an outcome measure should be required regardless of whether an appropriate measure is included in the MIPS measure list. A different commenter opposed an approach that would require physicians to report on a complex set of measures that do not impact or influence the quality of care provided to patients. The commenter believes all measures used in MIPS and APMs must be clinically relevant, harmonized among all public and private payers, and minimally burdensome to report. In addition, commenters recommended CMS use the core measure sets developed by the multi-stakeholder Core Quality Measures Collaborative.

    Response: We believe that the proposal balances the flexibility that payers need to implement payment arrangements with the statutory requirement for MIPS-comparable quality measures. For example, based on our review, we believe the proposed criteria for an Other Payer Advanced APM allow for the use of Medicaid Core Measures because they are comparable to MIPS quality measures. We also agree with the commenters that the Core Quality Measures Collaborative may be a valuable source of measures for inclusion in Other Payer Advanced APMs. We continue to believe that the requirement for an outcome measure is appropriate. Given the dearth of appropriate outcome measures for some specialties, we believe it is reasonable at this time to maintain the policy as proposed, which only requires the use of an outcome measure if there is an applicable one available on the MIPS list of quality measures. In addition, we believe that when quality measures are tied to payments, they do have an impact on the quality of care patients receive. Further discussion of quality measures and their comparability to MIPS can be found in section II.F.4.b.(2) of this final rule with comment period.

    After considering public comments, we are finalizing our proposal without changes. To be an Other Payer Advanced APM, a payment arrangement must base payment on quality measures that are evidence-based, reliable, and valid. At least one such measure must be an outcome measure unless there is not an applicable outcome measure on the MIPS quality measure list for the QP Performance Period. The outcome measure used does not have to be one of those on the MIPS quality measure list.
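    For illustration, the finalized quality measure criterion can be sketched as follows; measure attributes are reduced to booleans, and the determination of whether an applicable outcome measure exists on the MIPS list is treated as a given input.

```python
# Illustrative check of the finalized quality measure criterion: payment must be
# based on evidence-based, reliable, and valid measures, and at least one must be
# an outcome measure unless no applicable outcome measure is on the MIPS list for
# the QP Performance Period. Measure attributes are simplified to booleans.

from dataclasses import dataclass

@dataclass
class Measure:
    evidence_based: bool
    reliable: bool
    valid: bool
    is_outcome: bool

def meets_quality_measure_criterion(measures: list,
                                    applicable_mips_outcome_measure_exists: bool) -> bool:
    qualifying = [m for m in measures if m.evidence_based and m.reliable and m.valid]
    if not qualifying:
        return False
    if applicable_mips_outcome_measure_exists:
        # The outcome measure need not itself be on the MIPS list.
        return any(m.is_outcome for m in qualifying)
    return True

measures = [Measure(True, True, True, False), Measure(True, True, True, True)]
print(meets_quality_measure_criterion(measures, applicable_mips_outcome_measure_exists=True))  # True
```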

    (6) Financial Risk for Monetary Losses

    As described in sections 1833(z)(2)(B)(iii)(II)(cc) and (C)(iii)(II)(cc) of the Act, the third criterion that a payment arrangement must meet to be an Other Payer Advanced APM is that under the arrangement, the APM Entity must either bear more than nominal financial risk if actual aggregate expenditures exceed expected aggregate expenditures or the arrangement is a Medicaid Medical Home Model that meets criteria comparable to Medical Home Models expanded under section 1115A(c) of the Act.

    The financial risk standard under this criterion is similar to the criterion we are finalizing for Advanced APMs. For purposes of determining whether the payment arrangement is an Other Payer Advanced APM, this proposal does not impose any additional performance criteria, such as actual achievement of savings, on APM Entities in other payer arrangements. As with all of the Advanced APM criteria, this requirement pertains to the payment arrangement structure, not to the performance of the participants within the payment arrangement.

    This section is divided into two main parts: (1) What it means for an APM Entity to bear financial risk if actual aggregate expenditures exceed expected aggregate expenditures under a payment arrangement; and (2) what amounts of risk are considered to be more than nominal.

    (a) Bearing Financial Risk for Monetary Losses

    We proposed a generally applicable standard for Other Payer Advanced APMs and a slightly different standard for Medicaid Medical Home Models. Within the limits of the statute, we want these standards to be consistent with and comparable to the Advanced APM financial risk standard.

    (i) Generally Applicable Other Payer Advanced APM Financial Risk Standard

    We proposed that the generally applicable financial risk standard for Other Payer Advanced APMs would be that, if the APM Entity's actual aggregate expenditures exceed expected aggregate expenditures during a specified performance period, the payment arrangement must:

    • Withhold payment for services to the APM Entity and/or the APM Entity's eligible clinicians;

    • Reduce payment rates to the APM Entity and/or the APM Entity's eligible clinicians; or

    • Require direct payments by the APM Entity to the payer.

    We believe this financial risk criterion best distinguishes most payment arrangements from those that are focused on challenging physicians and practitioners to assume risk and provide high value care. We expect that an increasing proportion of other payer arrangements will meet that bar over time. This proposal is based on the statutory requirement that the APM Entity bear risk if aggregate actual expenditures exceed aggregate expected expenditures under the model, and is consistent with our proposal for the corresponding criterion proposed for Advanced APMs. We understand that many stakeholders believe that business risk should be sufficient to meet this Advanced APM criterion. We do not intend to minimize the substantial time and financial commitments that APM Entities invest to become successful APM participants. We note that there is also difficulty in creating an objective and enforceable standard for determining whether an entity's business risk exceeds a nominal amount, and that the statutory framework for the APM Incentive Payment recognizes that not all alternative payment arrangements will meet the criteria to be considered for purposes of the QP determination. We solicited comments regarding the proposed standard and whether there are other types of arrangements that should be incorporated into the standard.

    The following is a summary of the comments we received regarding our proposal to set a generally applicable Other Payer Advanced APM financial risk standard.

    Comment: One commenter supported using criteria similar to those proposed for Advanced APMs to assess the financial risk in the payment arrangement. A few commenters recommended CMS consider broad financial risk requirements so that clinicians can meet the Other Payer Advanced APM criteria. One commenter noted that it may be difficult to design Other Payer Advanced APMs that both meet the proposed financial risk standards and are attractive to eligible clinicians, and requested that CMS consider adding flexible arrangements that meet the spirit of the statute while not necessarily meeting the exact requirement that eligible clinicians share risk. Another commenter recommended that CMS give Other Payer Advanced APMs more flexibility in defining risk standards and not require complete alignment of risk definitions, as payers need a period of flexibility in tailoring risk arrangements depending on the type or maturity of the APM, its population characteristics, and unique market conditions. Specifically, the commenter recommended that CMS give other payers the same flexibility to align, not match perfectly, their risk models under Other Payer Advanced APMs as under the Comprehensive Primary Care Plus (CPC+) model. Another commenter opposed the proposed financial risk standard for Other Payer Advanced APMs because the commenter stated that it places an arbitrary imposition of financial risk upon clinicians and violates the intent of the law.

    Response: We appreciate the comments. We believe that, in order to implement the statute, it is important to have a meaningful financial risk standard. We believe the proposed elements are well established, and while they are intended to be challenging, they also provide for flexibility in the design of Other Payer Advanced APMs. We also believe that the financial risk standard provides flexibility to states and private payers in the design of their payment arrangements.

    Comment: Another commenter said there are opportunities and challenges with a federally set benchmark for risk that would be applied to Medicaid APMs, and that further understanding of these issues is needed before the rule is finalized. This commenter opined that states are working to incorporate shared accountability for quality and outcomes with eligible clinicians through both FFS and capitated managed care models. The commenter noted, however, that there are both merits and challenges in setting a federal benchmark for the level of risk that Medicaid APMs must assume to be considered Other Payer Advanced APMs.

    Response: We appreciate the comments. We agree that further research and analysis on the level of nominal risk would be appropriate, particularly for Medicaid APMs, which is why we are seeking additional comments on setting specific levels for nominal risk as discussed in section II.F.7.(b) of this final rule with comment period.

    After considering public comments, we are finalizing our proposal to set a generally applicable Other Payer Advanced APM financial risk standard, as proposed, without changes. The generally applicable financial risk standard for Other Payer Advanced APMs is that, if the APM Entity's actual aggregate expenditures exceed expected aggregate expenditures during a specified performance period, the payer will:

    • Withhold payment for services to the APM Entity and/or the APM Entity's eligible clinicians;

    • Reduce payment rates to the APM Entity and/or the APM Entity's eligible clinicians; or

    • Require direct payments by the APM Entity to the payer.

    (ii) Medicaid Medical Home Model Financial Risk Standard

    We proposed that, for a Medicaid Medical Home Model to be an Other Payer Advanced APM, the Medicaid Medical Home Model must, if the APM Entity's actual aggregate expenditures exceed expected aggregate expenditures:

    • Withhold payment for services to the APM Entity and/or the APM Entity's eligible clinicians;

    • Reduce payment rates to the APM Entity and/or the APM Entity's eligible clinicians;

    • Require direct payment by the APM Entity to the Medicaid program; or

    • Require the APM Entity to lose the right to all or part of an otherwise guaranteed payment or payments.

    For instance, a Medicaid Medical Home Model would meet our proposed financial risk criterion if it conditions the payment of some or all of a regular care management fee to medical home APM Entities upon expenditure performance in relation to a benchmark. Because the arrangement would require no direct payment as a consequence for failure to meet expenditure standards, such a medical home would not necessarily be worse off than it had been prior to the decreased payment. However, it would be worse off in the future than it otherwise would have been had it met expenditure standards. Similarly, a Medicaid Medical Home Model that offers expenditure and quality performance payments in addition to payment withholds that can be earned back for meeting minimum requirements would also meet this criterion. Consistent with the treatment of Medical Home Models under the statute, this proposal acknowledges the unique challenges of medical homes in bearing risk for losses while maintaining a more rigorous standard than mere business risk.

    We believe that because Medicaid Medical Home Models are unique types of Medicaid APMs and because they are identified and treated differently under the statute, it is appropriate to establish a unique standard for bearing financial risk that reflects these differences and remains consistent with the statutory scheme, which is to provide incentives for participation by eligible clinicians in advanced APMs.

    Similar to the Medical Home Model standards for Advanced APMs, which are discussed in section II.F.4.b.(3) of this final rule with comment period, we believe that it would be appropriate to impose size and composition limits for Medicaid Medical Home Models to ensure that the focus is on organizations with a limited capacity for bearing the same magnitude of financial risk as larger organizations do, namely, small primary care-focused organizations. We proposed that this standard would apply only to APM Entities that participate in Medicaid Medical Home Models and that have 50 or fewer eligible clinicians in the organization through which the APM Entity is owned and operated. Thus, in a Medicaid Medical Home Model that is an Other Payer Advanced APM, only those APM Entities that are part of a parent organization with 50 or fewer eligible clinicians would qualify under this financial risk standard. We believe it is appropriate to use eligible clinicians, rather than physicians, when setting this threshold because the number of eligible clinicians both reflects organizational resources and capacity and also may differ substantially across organizations with the same number of physicians.

    We also believe that this size threshold of 50 eligible clinicians is appropriate as organizations of that size have demonstrated the capacity and interest in taking on risk, and organizations may also join together to take on risk collectively, for example, in an ACO. In the event that a Medicaid Medical Home Model happens to have criteria that meet the Advanced APM financial risk criterion that is generally applicable to all Other Payer Advanced APMs, this organizational size limitation would be moot.

    There are several unique aspects of Medicaid Medical Home Models, which the statute specifically singles out for distinct treatment, and of their participating APM Entities (medical homes) that support the need for a separate standard to assess financial risk if actual expenditures exceed expected expenditures. Medical homes are generally more limited in their ability to bear financial risk than other entities because they tend to be smaller and predominantly include primary care practitioners, whose revenues are a smaller fraction of the beneficiaries' total cost of care than those of other eligible clinicians. Moreover, Medicaid medical home practices serve low-income populations and those with significant health disparities; due to the method of payment for care for these populations, Medicaid medical home practices often have relatively low revenues. Lastly, Medicaid Medical Home Models to date have not required participants to bear substantial downside risk, and including such a requirement under this program would create a significant challenge for medical homes to serve their patients.

    We solicited comment on the proposed financial risk standard set forth for Medicaid Medical Home Models and on alternative standards that would be consistent with the statute and could achieve our stated goals. We also solicited comment on types of financial risk arrangements that may not be clearly captured in this proposal.

    The following is a summary of the comments we received regarding our proposal for the Medicaid Medical Home Model financial risk standard. Comments on the 50 eligible clinician size limit are aggregated in the comments on the correlating Medical Home Model financial risk criterion in section II.F.4. of this final rule with comment period.

    Comment: One commenter agreed with our proposed approach. One commenter believed CMS should remove the proposed financial risk standard from the proposed rule and that APM Entities in Medicaid Medical Home Models should not be subject to any financial risk requirement.

    Another commenter recommended that Medicaid Medical Home Models not be subjected to downside risk unless and until it can be clearly demonstrated generally that they are capable of caring for patients without any decrease in access or quality under the limited payments provided by Medicaid in most states. An additional commenter recommended CMS eliminate the nominal risk requirements for Medicaid Medical Home Models, stating that, by definition, physicians who treat Medicaid and dual eligible patients are assuming more than nominal financial risk, given the very low reimbursement rates.

    One commenter stated that the financial risk standard needs to be revised to ensure that Medicaid medical homes serving vulnerable populations are not forced to assume financial risks that would jeopardize patients' access to care. Another commenter agreed that APM Entities should bear the financial risk, but noted that special considerations may be appropriate for Medicaid Medical Home Models depending on size, as some FQHCs, RHCs, and Tribal 638 safety net clinics may be smaller with a more diverse group of primary care practitioners. The same commenter noted that CMS has historically allowed states to implement APMs for FQHCs and RHCs in place of the traditional PPS method, and requested that CMS specifically address how states' models that include FQHCs and RHCs would be assessed against the Other Payer Advanced APM criteria.

    Response: We appreciate the comments and concerns about applying the financial risk standard to Medicaid Medical Home Models. Section 1833(z)(2)(B)(iii) of the Act requires that in order to meet the Other Payer Advanced APM criteria, the APM Entities must bear more than nominal financial risk if actual aggregate expenditures exceed expected aggregate expenditures or be in a Medicaid Medical Home Model that meets criteria comparable to Medical Home Models expanded under section 1115A(c) of the Act. Because there are currently no expanded Medical Home Models, we do not believe there is a way to evaluate whether a Medicaid Medical Home Model meets criteria comparable to expanded Medical Home Models. As such, in order to be determined an Other Payer Advanced APM, a Medicaid Medical Home Model must require its participating APM Entities to bear more than nominal financial risk. If a Medical Home Model is expanded in the future under section 1115A(c) of the Act, we will address how Medicaid Medical Home Models that have comparable criteria will meet the financial risk portion of this criterion. We are already providing special consideration for the risk that Medicaid Medical Home Models must bear by proposing separate financial risk and nominal amount standards.

    We are also finalizing certain provisions at section II.F.6.d. of this final rule with comment period to ensure that CAH, RHC and FQHC participation in Advanced APMs is considered to the extent applicable when calculating Threshold Scores under the patient count method.

    After considering public comments, we are finalizing our proposal for the Medicaid Medical Home Model financial risk standard without changes. The Medicaid Medical Home Model financial risk standard is that, if the APM Entity's actual aggregate expenditures exceed expected aggregate expenditures during a specified performance period, the payer will:

    • Withhold payment for services to the APM Entity and/or the APM Entity's eligible clinicians;

    • Reduce payment rates to the APM Entity and/or the APM Entity's eligible clinicians;

    • Require direct payment by the APM Entity to the Medicaid program; or

    • Require the APM Entity to lose the right to all or part of an otherwise guaranteed payment or payments.

    (b) Nominal Amount of Risk

    When an other payer risk arrangement meets the proposed financial risk standard, we would then consider whether the amount of risk is more than nominal, that is, whether it meets the nominal amount standard. Similar to the financial risk portion of this assessment, we proposed to adopt a generally applicable nominal amount standard for Other Payer Advanced APMs and a unique nominal amount standard for Medicaid Medical Home Models.

    We proposed to measure three dimensions of risk to determine whether a payment arrangement meets the nominal amount standard: (a) Marginal risk, which is a common component of risk arrangements—particularly those that involve shared savings—that refers to the percentage of the amount by which actual expenditures exceed expected expenditures for which an APM Entity would be liable under a payment arrangement; (b) minimum loss rate (MLR), which is a percentage by which actual expenditures may exceed expected expenditures without triggering financial risk; and (c) total potential risk, which refers to the maximum potential payment for which an APM Entity could be liable under a payment arrangement. As an example of marginal risk, if an ACO has a sharing rate, or marginal risk rate, of 50 percent and exceeds its benchmark (expected expenditures) by $1 million, the ACO would be liable for $500,000 of those losses. The marginal risk could also vary with the amount of losses.
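
    To make the arithmetic of the marginal risk dimension concrete, the short sketch below works through the ACO example above. The dollar amounts, the function name, and the assumption of a simple shared-losses design are illustrative only and are not drawn from any specific payment arrangement.

        # Marginal risk under a simple shared-losses design (hypothetical figures).
        def shared_losses(actual, expected, marginal_risk_rate):
            """Amount an APM Entity owes when actual expenditures exceed expected expenditures."""
            excess = max(0.0, actual - expected)  # losses accrue only above the benchmark
            return marginal_risk_rate * excess

        # A 50 percent sharing rate and a benchmark exceeded by $1 million yield $500,000 owed.
        print(shared_losses(actual=21_000_000, expected=20_000_000, marginal_risk_rate=0.50))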

    To determine whether a payment arrangement satisfies the total risk portion of the nominal amount standard, we would identify the maximum potential payment an APM Entity could be required to make as a percentage of the expected expenditures under the payment arrangement. If that percentage exceeded the required total risk percentage, then the arrangement would satisfy the total risk portion of the nominal amount standard.
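
    As a minimal sketch of the total risk test just described, the comparison below expresses the maximum potential payment as a percentage of expected expenditures and checks it against a required total risk percentage. The figures and function name are hypothetical.

        # Total risk portion of the nominal amount standard (illustrative only).
        def meets_total_risk(max_potential_payment, expected_expenditures, required_total_risk_pct):
            share_at_risk = 100.0 * max_potential_payment / expected_expenditures
            return share_at_risk >= required_total_risk_pct

        # A $600,000 repayment cap against a $20 million benchmark is 3 percent of expected
        # expenditures, which would satisfy a 3 percent total risk requirement.
        print(meets_total_risk(600_000, 20_000_000, required_total_risk_pct=3.0))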

    To determine whether a payment arrangement satisfies the marginal risk portion of the nominal amount standard, we would examine the payment required under the payment arrangement as a percentage of the amount by which actual expenditures exceeded expected expenditures. We proposed that we would require that this percentage exceed the required marginal risk percentage regardless of the amount by which actual expenditures exceeded expected expenditures, with two exceptions.

    First, we proposed a maximum allowable “minimum loss rate” (MLR) of 4 percent in which the payment required by the payment arrangement could be smaller than the nominal amount standard would otherwise require when actual expenditures exceed expected expenditures by less than 4 percent; this exception accommodates payment arrangements that include zero risk for small losses but otherwise satisfy the marginal risk standard. We also proposed a process through which we could determine that a risk arrangement with an MLR higher than 4 percent could meet the nominal amount standard, provided that the other portions of the nominal amount standard are met. In determining whether such an exception would be appropriate, we would consider: (1) Whether the size of the attributed patient population is small; (2) whether the relative magnitude of expenditures assessed under the payment arrangement is particularly small; and (3) in the case of a test of limited size and scope, whether the difference between actual expenditures and expected expenditures would not be statistically significant even when actual expenditures are 4 percent above expected expenditures. We note that we would grant such exceptions rarely, and we would expect APMs considered for such exceptions to demonstrate that a sufficient number of APM Entities are likely to incur losses in excess of the higher MLR. In other words, the potential for financial losses based on statistically significant expenditures in excess of the benchmark remains meaningful for participants.

    Second, we proposed that the payment required by the payer could be smaller when actual expenditures exceed expected expenditures by enough to trigger a payment greater than or equal to the total risk amount required under the nominal amount standard. This exception ensures that the marginal risk requirement does not effectively require payers to incorporate total risk greater than the amount required by the total risk portion of the standard to become Other Payer Advanced APMs.
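
    A rough sketch of the marginal risk test, including both proposed exceptions, follows. All thresholds are passed in as parameters, the loss-sharing schedule is a stand-in supplied by the caller, and the check probes only a handful of hypothetical loss levels; it illustrates the logic described above and is not an official calculation.

        # Marginal risk test with the MLR exception and the total risk cap exception.
        def marginal_risk_satisfied(required_payment, expected, required_marginal_pct,
                                    mlr_pct, total_risk_cap_pct, probe_points):
            cap = total_risk_cap_pct / 100.0 * expected
            for excess in probe_points:
                if excess <= mlr_pct / 100.0 * expected:   # exception 1: losses within the MLR
                    continue
                owed = required_payment(excess)
                if owed >= cap:                            # exception 2: total risk cap already reached
                    continue
                if owed < required_marginal_pct / 100.0 * excess:
                    return False
            return True

        # Hypothetical arrangement: no risk below a 3 percent MLR, then 50 percent sharing of the
        # full excess, capped at 4 percent of a $10 million benchmark.
        expected = 10_000_000
        pay = lambda excess: 0.0 if excess < 0.03 * expected else min(0.50 * excess, 0.04 * expected)
        print(marginal_risk_satisfied(pay, expected, required_marginal_pct=30, mlr_pct=3,
                                      total_risk_cap_pct=4,
                                      probe_points=[200_000, 500_000, 1_000_000, 2_000_000]))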

    In evaluating both the total and marginal risk portions of the nominal amount standard, we would not include any payments the APM Entity or its participating eligible clinicians would make to the other payer if actual expenditures exactly matched expected expenditures. In other words, payments made to a payer outside the risk arrangement related to expenditures would not count toward the nominal amount standard. This requirement ensures that perfunctory or pre-determined payments do not supersede incentives for improving efficiency. For example, a payment arrangement that simply requires an APM Entity to make a payment equal to 5 percent of the payment arrangement benchmark at the end of the year, regardless of actual expenditure performance, would not satisfy the nominal amount standard.

    Finally, we proposed that the amounts described in this section need not take a shared savings structure in which financial risk increases smoothly based on the amount by which an APM Entity's actual expenditures exceed expected expenditures. The risk arrangement must be tied to expenditures, but the amount of that risk would not have to be directly proportional to expenditures. For instance, an APM Entity could be required to pay the payer a flat amount or an amount tied to the number of attributed beneficiaries in the case of exceeding an expenditure benchmark, provided that these amounts are otherwise structured in a way that satisfies the nominal amount standard.

    (i) Generally Applicable Other Payer Advanced APM Nominal Amount Standard

    Except for risk arrangements described under the Medicaid Medical Home Model standard, we proposed that, for a payment arrangement to meet the nominal amount standard, the specific level of marginal risk must be at least 30 percent of losses in excess of the expected expenditures and total potential risk must be at least 4 percent of the expected expenditures.

    In establishing the proposed criteria for Other Payer Advanced APMs, we kept the approach to nominal risk as consistent as possible with the approach for the proposed Advanced APM criteria. The statute specifies that the Advanced APM Entity must bear more than nominal financial risk if actual aggregate expenditures exceed expected aggregate expenditures. We believe it is important, to the extent possible and consistent with the statute, to adopt financial risk standards consistent with the Advanced APM standard as described in section II.F.4.a of the proposed rule, so that eligible clinicians can base their decisions on participation in these Other Payer Advanced APMs on a consistent set of criteria. The Advanced APM nominal amount standard section of the proposed rule, II.F.4.a, describes the process by which we arrived at the proposed values.

    For Medicaid APMs we proposed the same standard as for Other Payer Advanced APMs. However, we recognize that Medicaid practitioners may be less able to bear substantial financial risk because they serve low-income populations and those with significant health disparities. Therefore, we solicited comment and supporting evidence on whether the proposal offered identifies the appropriate amounts of nominal risk for Medicaid APMs.

    The following is a summary of the comments we received regarding our proposal to set a generally applicable Other Payer Advanced APM nominal amount standard.

    Comment: Several commenters believed the nominal amount standard is overly complicated and encouraged CMS to simplify the standard. One suggested CMS include only the MLR and total potential risk requirements proposed in the regulation. This commenter further requested that CMS modify the total potential risk to include an entity's Part A and B revenue to provide assurance that an entity is not assuming more risk than its potential revenues. One commenter requested clarification regarding what those participating in the All-Payer Combination Option will have to do in order to satisfy the nominal amount standard.

    An additional commenter requested that the nominal amount standard initially mirror the medical home approach so that it is assessed through APM Entity revenue or a choice between APM Entity revenue and Advanced APM benchmarks, and has low requirements with phased increases mirroring the approach taken with Medicaid Medical Home Models.

    Response: As we noted in the section of the rule discussing the nominal amount standard for Advanced APMs, we understand commenters' concerns that these aspects of the standard are complex enough to require additional time to understand. We note, however, that these standards will not take effect until QP Performance Periods beginning in 2019 and later; we believe that this time will help mitigate commenters' concerns about complexity. Moreover, we believe that using these measures of risk will ensure the program integrity of the All-Payer Combination Option so that payment arrangements between other payers and APM Entities cannot be engineered in such a way as to provide an avenue to QP status that meets the financial risk criterion but makes the actual likelihood of losses based on performance very low. Such engineering could result in payment of the APM Incentive Payment to APM Entities in payment arrangements that do not adhere to our principles of setting meaningful financial risk standards.

    We put these protections in place for Other Payer Advanced APMs and not Advanced APMs because we have direct control over the design of Advanced APMs but not over the payment arrangements of other payers. We must act in the interest of the Medicare Trust Funds when designing Advanced APMs, but other payers do not have the same obligation and thus may be interested in assisting APM Entities to receive the APM Incentive Payment. Although states design and implement Medicaid APMs that are generally subject to federal approval processes such as state plan amendment approvals, we have no direct or indirect control over the payment arrangements of private payers. Including marginal risk as a component of the nominal amount standard prevents the consideration of payment arrangement designs that could contribute to the attainment of QP status through arrangements far less rigorous than those in Advanced APMs. There may be other ways of achieving the same program integrity goals, and we seek comment on this policy. For instance, we are considering ways to issue guidance or design federal approval processes to promote Medicaid APMs focused on high value care to Medicaid beneficiaries that also align with our program integrity objectives.

    Comment: A few commenters expressed concern that, if not set correctly, the level of risk under the nominal amount standard might jeopardize Medicaid clinicians' abilities to provide effective care. One commenter, in commenting on the nominal amount standard for Advanced APMs, stated that practices should be encouraged to serve Medicaid and dual eligible patients, but the risk requirements are likely to have the opposite effect. The commenter stated that simply providing care to Medicaid and dual eligible patients could be considered to involve more than nominal risk for monetary losses due to the very low payment rates in most Medicaid programs. Another commenter expressed concern about ongoing cuts states are making in Medicaid reimbursement rates, and believed CMS should promulgate rules that prevent damaging reimbursement and encourage exploration of innovative care delivery options. Another commenter said that any adjustment in payments must take into account socio-demographic factors such as income, race and educational attainment.

    Response: We understand that Medicaid clinicians may have less risk-bearing capacity than other clinicians, particularly in cases in which they serve a relatively high proportion of high-risk patients. We believe the proposed nominal amount standard allows Medicaid APMs and Medicaid Medical Home Models to create meaningful incentives for improving the care for their populations. However, we seek additional comments on the structure and levels of risk in the nominal amount standard as applied to Medicaid APMs.

    Comment: One commenter stated that there are both merits and challenges in setting a standard for the level of risk that Medicaid payment arrangements must meet to be considered Medicaid APMs. This commenter said there is significant variation among Medicaid clinicians' ability and willingness to assume risk, especially given the vulnerable and complex population Medicaid clinicians serve. The commenter also stated that state Medicaid directors are cautious to apply risk where it seems inappropriate or premature. However, the commenter also stated that a federal risk standard might support movement toward risk-based models in many states. The commenter expressed that this is a critical aspect of the regulation that warrants further engagement with states and urged CMS to evaluate state-specific situations where Medicaid clinicians are assuming risk and to further engage states on this issue.

    Response: We acknowledge the potential benefits and challenges of setting a nominal amount standard that applies to other payers, including state Medicaid programs, and believe we have taken these considerations into account in our finalized policy. We have engaged with stakeholders on this issue and will continue to do so. We realize that although the All-Payer Combination Option does not go into effect until the 2019 QP Performance Period for the 2021 payment year, Medicaid programs and other payers may begin their work developing payment arrangements that meet the Other Payer Advanced APM criteria. For this reason, we believe it is important to establish these policies now, even though there may be subsequent modifications through future rulemaking.

    Comment: One commenter generally supported the proposal regarding standards for nominal risk, but stated that the standard for Medicaid APMs that are not Medicaid Medical Home Models should be set lower, at 3 percent. Another commenter supported the simplification of the nominal risk amount and requested that CMS lower the proposed loss sharing limit for Other Payer Advanced APMs from 4 percent to a more reasonable threshold, such as 10 percent of physicians' payments for covered Part B professional services, or 1 percent of total Parts A and B target costs, whichever is lower.

    One commenter recommended that CMS use risk corridors across programs to allow APMs to align their operations and financial approach, while reducing administrative overhead. One commenter suggested a 30 percent marginal risk threshold, with a 1-2 percent minimum loss rate, and recommended that CMS consider using the full-risk structure within the Next Generation ACO model as a framework when assessing nominal risk.

    Response: We believe that the meaning of “nominal” can be relative and that for many APM Entities, 4 percent of a total cost of care benchmark could be substantially more than nominal. We discuss the nominal amount standard in depth in section II.F.4.a of this final rule with comment period. Depending on the size and clinician composition of an APM Entity, a total risk cap of 4 percent of a total cost of care benchmark could mean risk for losses that are up to or greater than 100 percent of some APM Entities' revenue from a payer. Therefore, we recognize that a revenue-based standard would provide an alternative approach under the nominal amount standard that is particularly meaningful to practices of certain sizes. However, we caution that a revenue-based standard is not easily applied to many current payment arrangements, which tend to base risk arrangements on expenditure benchmarks that are unrelated to a particular APM Entity's revenue. We believe that total cost of care benchmarks are optimal for many APMs, and those will continue to represent the preferred standard for assessing performance in terms of cost. We also caution that, under a revenue-based standard, certain types of APM Entities may have a significant probability of incurring losses outside the stop loss and thus bear no responsibility for increases in expected expenditures beyond that point, which may undermine the ability of such APMs to drive performance for those APM Entities. In seeking a risk standard that is meaningful but not excessive, we sought to balance these considerations.
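
    A hypothetical set of figures may help illustrate the point about total cost of care benchmarks and revenue; none of the numbers below come from the rule.

        # A small primary care practice whose revenue from a payer is a small share of the total
        # cost of care for its attributed patients: a risk cap set as a percentage of the benchmark
        # can exceed the practice's entire revenue from that payer.
        benchmark = 25_000_000        # total cost of care benchmark for attributed patients
        practice_revenue = 900_000    # the practice's revenue from this payer
        total_risk_cap = 0.04 * benchmark             # 4 percent of the benchmark = $1,000,000
        print(total_risk_cap / practice_revenue)      # about 1.11, i.e., more than 100% of revenue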

    After considering the comments, we are finalizing the proposed policy with modifications. First, we are finalizing the marginal risk and MLR components as proposed. To meet the Other Payer Advanced APM nominal amount standard, a payment arrangement's level of marginal risk must be at least 30 percent of losses in excess of the expected expenditures, and the maximum allowable MLR must be 4 percent. We seek additional comments on this approach for Other Payer Advanced APMs and additional information on other approaches that ensure payment arrangements are not engineered to meet the financial risk criterion but avoid the likelihood of APM Entities experiencing losses based on their performance.

    Second, we are finalizing that a payment arrangement must require APM Entities to bear financial risk for at least 3 percent of the expected expenditures for which an APM Entity is responsible under the payment arrangement. For episode payment models, expected expenditures means the target price for an episode. We also note that we intend to establish through future rulemaking a total risk standard based on the revenue of the APM Entity from the payer in a manner that would parallel the standard we are finalizing in the Advanced APM nominal amount standard under section II.F.4.b.(4) of this final rule with comment period. Therefore, we seek comment for future consideration on the amount and structure of the revenue-based nominal amount standard for QP Performance Periods in 2019 and later. Specifically, we seek comment on: (1) Setting the revenue-based standard for 2019 and later at up to 15 percent of revenue; or (2) setting the revenue-based standard at 10 percent so long as risk is at least equal to 1.5 percent of expected expenditures for which an APM Entity is responsible under an APM. We expect to apply the same percentage standards for Other Payer Advanced APMs as for Advanced APMs; however, we seek comment on how and why this standard could differ for Medicaid APMs relative to the generally applicable Other Payer Advanced APM standard.
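
    The three finalized components of the generally applicable standard can be summarized in a condensed sketch. The parameter names are illustrative rather than regulatory terms, and the revenue-based alternative on which we seek comment above is not modeled.

        # Finalized generally applicable Other Payer Advanced APM nominal amount standard.
        def meets_nominal_amount_standard(marginal_risk_pct, mlr_pct, total_potential_risk_pct):
            return (marginal_risk_pct >= 30             # at least 30 percent of losses above expected expenditures
                    and mlr_pct <= 4                    # minimum loss rate no greater than 4 percent
                    and total_potential_risk_pct >= 3)  # at least 3 percent of expected expenditures at risk

        # A 50 percent loss-sharing arrangement with a 2 percent MLR and a 5 percent
        # stop-loss would meet the standard.
        print(meets_nominal_amount_standard(marginal_risk_pct=50, mlr_pct=2, total_potential_risk_pct=5))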

    Our intention in setting a revenue-based nominal amount standard is to tailor the level of risk an APM Entity must bear relative to the resources available to it. In instances where an APM Entity is one component of a larger health care provider organization, we believe that the revenue of the larger organization is a more accurate measure of the resources available to the APM Entity and should be the basis for setting the revenue-based nominal amount standard, even if only a portion of the organization is participating in the APM Entity.

    However, we do not believe that applying the nominal amount standard at a level other than the APM Entity is operationally feasible at this point in time, and doing so in the other payer context may pose unique challenges relative to those we face under Medicare. Nevertheless, ideally, the nominal amount standard would take into consideration the resources available to an APM Entity using a measure such as revenue for the parent organization. We are evaluating the feasibility of implementing such a measure in lieu of APM Entity revenue for the third year of the program and later years. Under such an approach, we would anticipate basing the revenue-based nominal amount standard on the total revenues from a payer across the APM Entity, any parent organizations, any subsidiary organizations, and any subsidiaries of parent organizations for all eligible clinicians and groups who are participants of an APM Entity. We seek comment on this approach and how such an approach could be implemented while minimizing burden on participants.

    (ii) Medicaid Medical Home Model Nominal Amount Standard

    For Medicaid Medical Home Models, we proposed that, for the model to be an Other Payer Advanced APM, the minimum total annual amount that an APM Entity must potentially owe or forego would be at least:

    • In 2019, 4 percent of the APM Entity's total revenue under the payer.

    • In 2020 and later, 5 percent of the APM Entity's total revenue under the payer.

    We believe that because few Medicaid Medical Home Model participants have experience with financial risk, and because they tend to be smaller in size, both in terms of the number of clinicians and revenue, than other APM Entities, we should not include a potentially excessive nominal amount for such entities in the first year of the program. We have also taken into account that the statute explicitly highlights Medical Home Models for special treatment under the Quality Payment Program. We generally have less information on Medicaid Medical Home Models and their performance to date compared to our information on Medical Home Models. Medicaid Medical Home Models are still developing, and we believe the introduction of a nominal amount standard that is not currently widely represented in the marketplace should be approached in a measured manner. We therefore believe that the unique characteristics of Medicaid Medical Home Models warrant the application of a nominal amount standard that reflects these differences.

    We solicited comment on the proposed nominal amount standard. We also solicited comment on the potential inclusion of a marginal risk amount in the standard and the extent to which it would be applicable.

    The following is a summary of the comments we received regarding our proposal to set a Medicaid Medical Home Model nominal amount standard.

    Comment: One commenter supported the lower risk amount for Medicaid Medical Home Models. Another commenter expressed concern that CMS' proposed nominal amount standard for Medicaid Medical Home Models of 4 percent of the APM Entity's total Medicaid revenue in 2019 and 5 percent in 2020 and thereafter is too high to encourage medical practices to serve Medicaid and dual eligible patients. This commenter said providing care to Medicaid and dual eligible patients would be considered by most physicians to involve more than nominal risk of financial losses due to the very low payment rates in most Medicaid programs.

    Response: We understand the concern that the proposed nominal amount standard may be too high and could serve as a deterrent to the development of and participation in Medicaid Medical Home Models. However, we believe that, as with Medical Home Models under Medicare, the proposed values are appropriate for those entities that are interested in assuming risk and participating in the Quality Payment Program. We also believe that the finalized Advanced APM financial risk criterion for Medicaid Medical Home Models, combined with this nominal amount standard, allows for payment arrangement designs that motivate improvements in the cost and quality of care while not deterring practices from participating in the program. We are finalizing the standard as proposed to be consistent with the Advanced APM nominal amount standard for Medical Home Models as discussed in section II.F.4.b.(3) of this final rule with comment period. We believe a standard that starts at 4 percent of revenue and increases to 5 percent of revenue appropriately reflects the meaning of “nominal” in the Medicaid Medical Home Model context.

    After considering public comments, we are finalizing the Medicaid Medical Home Model nominal amount standard as proposed. In order to be an Other Payer Advanced APM, the minimum total annual amount that a Medicaid Medical Home Model must require an APM Entity to potentially owe or forego must be at least:

    • In 2019, 4 percent of the APM Entity's total revenue under the payer.

    • In 2020 and later, 5 percent of the APM Entity's total revenue under the payer.

    (c) Capitation

    We proposed that full capitation risk arrangements would meet the Other Payer Advanced APM financial risk criterion. We proposed that for purposes of this rulemaking, a capitation risk arrangement means a payment arrangement in which a per capita or otherwise predetermined payment is made to an APM Entity for services furnished to a population of beneficiaries, and no settlement is performed for the purpose of reconciling or sharing losses incurred or savings earned by the APM Entity. Our rationale for this policy is the same as the rationale on capitation for Advanced APMs described in section II.F.4.b.(3) of this final rule with comment period. As such, we reiterated that full capitation risk arrangements are not simply a cash flow mechanism.

    We solicited comment on our proposal that capitation risk arrangements would meet the financial risk criterion for Other Payer Advanced APMs and on our proposed definition of a capitation risk arrangement. We also solicited comment on other types of arrangements that may be suitable for such treatment for purposes of this financial risk criterion.

    The following is a summary of the comments we received regarding our proposal that full capitation risk arrangements will automatically meet the Other Payer Advanced APM financial risk criterion.

    Comment: One commenter supported the proposal that capitation automatically satisfies the financial risk criterion, but requested that CMS explicitly include partial capitation as well if it meets the nominal risk criteria. Another commenter recommended existing arrangements, such as capitation, be included in the proposed rule's definition of Other Payer Advanced APMs that bear more than nominal risk, because the commenter believes that such arrangements require the organization to absorb costs that exceed expected expenditures. This commenter requested clarification on whether tertiary care centers would be considered Other Payer Advanced APMs when these centers have capitated arrangements with other clinicians, and where patients' primary care clinicians are not directly affiliated with the tertiary care center.

    One commenter supported the proposal that full capitation risk arrangements meet the financial risk criterion for APM Entities with full downside risk, but noted that some entities are in the middle of transforming their practices. The commenter stated that risk during this transition could be mitigated through risk corridors and other methods that could be used while payers are obtaining and improving the data necessary to improve the appropriateness of rates to health plans and clinicians.

    Response: Partial capitation arrangements can satisfy the financial risk criterion, but will not do so automatically. They will be assessed according to the nominal amount standard. We appreciate the suggested payment methodology, but we are not prescribing any specific methodology for such arrangements. We also remind commenters of the Physician-Focused Payment Model Technical Advisory Committee described in section II.F.10. of this final rule with comment period.

    After considering public comments, we are finalizing our proposal that full capitation risk arrangements will automatically meet the Other Payer Advanced APM financial risk criterion and our proposal to define capitation risk arrangement without changes.

    (d) Criteria Comparable to Expanded Medical Home Models

    In accordance with sections 1833(z)(2)(B)(iii)(II)(cc)(BB) and (C)(iii)(II)(cc)(BB) of the Act, we proposed that Medicaid Medical Home Models that meet criteria comparable to a Medical Home Model expanded under section 1115A(c) of the Act would meet the Other Payer Advanced APM financial risk criterion. We proposed that we would specify in subsequent rulemaking the criteria of any Medical Home Model that is expanded under section 1115A(c) of the Act that would be used for purposes of making this comparability assessment. We believe that the expanded Medical Home Model criteria can only be used for comparison when a Medical Home Model is, in fact, expanded as described in section II.F.4.b.(6) of the proposed rule, not merely by satisfying the expansion criteria under section 1115A(c) of the Act. If no such Medical Home Model has actually been expanded under section 1115A(c) of the Act, we would not have any criteria for comparison. In the absence of any expanded Medical Home Model to which we could draw comparisons, Medicaid Medical Home Models must meet the financial risk criterion through the other provisions (the financial risk and nominal amount standards) to be an Other Payer Advanced APM. We solicited comment on how to determine the criteria of an expanded Medical Home Model that could be used for comparison, and on how similar the Medicaid Medical Home Model criteria must be to the expanded Medical Home Model criteria in order to be considered “comparable.”

    The following is a summary of the comments we received regarding our proposal to address criteria comparable to expanded Medical Home Models in future rulemaking.

    Comment: One commenter appreciated that CMS plans for future rulemaking in this area, and agreed that no current models meet the expansion criteria.

    Response: We appreciate the comment.

    We are finalizing our proposal that Medicaid Medical Home Models that meet criteria comparable to a Medical Home Model expanded under section 1115A(c) of the Act would meet the Other Payer Advanced APM financial risk criterion. We will specify in future rulemaking the criteria for any Medical Home Model that is expanded under section 1115A(c) of the Act, and specify how they would be used for purposes of making this comparability assessment.

    (7) Medicare Advantage (MA)

    For the APM Incentive Payment, section 1833(z)(1)(A) of the Act states that the APM Incentive Payment is based on payments for Part B covered professional services, which do not include payments for services furnished to MA enrollees. For QP determination calculations, we believe it is important to note that Advanced APMs may involve MA plans and payers other than Medicare. Under the All-Payer Combination Option for QP determinations, eligible clinicians can meet the QP threshold based in part on payment amounts or patient counts associated with MA plans and other payers, provided that such arrangements meet the criteria to be considered Other Payer Advanced APMs. However, under sections 1833(z)(2)(A), (2)(B)(i), and (3)(B)(i) of the Act, payments under MA and other payer arrangements cannot be included in the QP determination calculations under the Medicare Option, which requires that we only consider payment amounts or patient counts for Medicare Part B covered professional services.

    Regardless of which option—Medicare or All-Payer Combination—is used to determine that an eligible clinician is a QP for a year, the APM Incentive Payment calculation will only be based upon payments for Medicare Part B covered professional services, which does not include payments for services furnished to MA enrollees.

    We recognize that MA contracts can include financial risk as well as quality performance standards, CEHRT, and other health IT requirements that support high-value care. We proposed to evaluate payment arrangements between eligible clinicians, APM Entities, and MA plans according to the proposed Other Payer Advanced APM criteria. In the assessment of MA plans against the Other Payer Advanced APM criteria, it is important to note that the requirements refer to aspects of the payment arrangement between the MA plan and the participating APM Entity, and this includes the criterion for bearing more than a nominal amount of financial risk. We noted that we will not consider an arrangement in which the MA plan meets the CEHRT and quality measures criteria, but pays the APM Entity on a FFS basis, to be an Other Payer Advanced APM because there is no risk connected to actual cost of care exceeding projected cost of care. Because this arrangement would not be an Other Payer Advanced APM, it would not be considered for purposes of QP determinations. In addition, the financial relationship between CMS and the MA plan—even if the relationship is part of an APM—is not relevant to this assessment because there would not be a direct payment arrangement between CMS and the APM Entities or eligible clinicians.

    The following is a summary of the comments we received regarding how MA plans will be treated in the Medicare Option and the All-Payer Combination Option.

    Comment: Several commenters suggested that eligible MA contracts be compared to the Advanced APM criteria rather than the Other Payer Advanced APM criteria. A few commenters requested that CMS consider MA contracts when determining whether APM Entities are participating in Advanced APMs and include MA payments when calculating Threshold Scores under the Medicare Option. In addition, one commenter stated that focusing the Medicare Option on Part B and not including MA disregards the work of many clinicians to improve care for beneficiaries and to build the accompanying infrastructure required to carry out this work. Another commenter was concerned that MA participation will not be considered until payment year 2021 and that this could potentially limit eligible clinicians' ability to become QPs because they do not participate in Advanced APMs under Medicare Part B. The commenter expressed concern that this delay will disadvantage clinicians who have already taken the initiative to incorporate quality metrics, financial risk, and CEHRT in their care of beneficiaries.

    A few commenters stated that if CMS included MA under the Medicare Option, several high-performing plans would meet the Advanced APM criteria. The commenters stated that CMS could use its section 1115A authority to designate MA plans as Advanced APMs.

    Response: We appreciate the comments and suggestions. Under section 1833(z)(2)(A) of the Act, it is clear that MA is not included in the QP determination calculations under the Medicare Option, which requires that we only consider payment amounts or patient counts for Medicare Part B covered professional services. The statute is clear that the All-Payer Combination Option will begin in payment year 2021, for which 2019 is the QP Performance Period as finalized in this rule. We believe that MA plans can play an important role in the Quality Payment Program through the All-Payer Combination Option.

    Comment: One commenter recommended that CMS align MIPS and APM measures in traditional Medicare to the CMS MA Five Star Quality Rating System, which measures how well MA and prescription drug (Part D) plans perform in several areas including quality of care and customer service. Another commenter recommended that if an APM Entity's contract with an MA Organization includes “more than nominal risk,” and if the MA plan meets a threshold star rating (for example, 4 or greater), patients and payments through that MA plan should be included in the Medicare Option.

    Response: Establishing rules related to MA contracting is outside the scope of this rule. An Other Payer Advanced APM must meet all three of the criteria set forth in this final rule with comment period. As discussed in this section of the final rule with comment period, the statute does not permit inclusion of MA plans or payments in the Medicare Option, regardless of an MA plan's Star Ratings. Although the Star Rating may reflect positive activities, the statute does not permit any substitute for the Advanced APM or Other Payer Advanced APM criteria.

    Nevertheless, we understand the value of aligning measures across payers. Although measures of health plans are beyond the scope of this rule and do not necessarily measure the same performance as measures used under MIPS and APMs, which relate directly to health care provider performance, we recognize that there are many potential avenues for alignment in the selection of the MIPS quality measure set and in the design of specific APMs that engage multiple payers.

    Comment: One commenter proposed that information on the Quality Payment Program be made available to MA organizations and easily accessible by the general public. One commenter suggested that CMS use its leadership role in the LAN to align incentives, performance measures, and other components of value-based arrangements between public and private payers.

    Response: We agree with commenters that continuous communication and engagement are essential to the effective implementation of the Quality Payment Program. We intend to continue our strong emphasis on clinician outreach and education, and will continue to be receptive to new ideas for improving the Quality Payment Program in the future. We believe that the finalized criteria in this final rule with comment are sufficiently clear as to how an MA payment arrangement may become an Other Payer Advanced APM. Apart from defining the statutory criteria, we intentionally do not prescribe unnecessary details in our finalized policies in order to enable significant flexibility in the design of Other Payer Advanced APMs.

    Comment: Another commenter supported our proposal to consider MA plans under the All-Payer Combination Option beginning in the 2019 QP Performance Period and believes CMS should consider developing incentives for MA plan participation. One commenter believes that CMS should offer provider-affiliated MA plans the opportunity to create Other Payer Advanced APMs (or seek statutory changes that would allow CMS to do so) because the commenter believes provider-affiliated MA plans already bear financial risk for the care of Medicare beneficiaries. The commenter also believes that CMS should encourage MA plans to offer more APM-like options to the increasing number of MA enrollees, because the commenter believes that patients in MA plans should benefit from the improved care and payment reforms of APMs.

    Response: We appreciate the comments. As we mentioned above, we encourage diversity in Other Payer Advanced APMs, but we do not intend to provide additional guidance or incentives related to MA contracting as part of our implementation of the Quality Payment Program. We also note that the statute does not provide for special consideration for MA plans or additional or special incentives for the development of or participation in MA plan-operated Other Payer Advanced APMs. In addition, as discussed in a recent Report to Congress entitled “Alternative Payment Models and Medicare Advantage,” we have limited tools available to encourage Medicare Advantage Organizations (MAOs) to adopt APMs or similar payment arrangements, as the statutory non-interference clause prohibits CMS from interfering in the development of contracts between MAOs and their network providers. We will continue to communicate with stakeholders, including MA plans, before the All-Payer Combination Option takes effect for the performance period in 2019.

    Comment: One commenter asked how PACE organizations will align with Other Payer Advanced APMs in the future, particularly as CMS considers options for Other Payer Advanced APMs, which may include MA payment arrangements. Another commenter suggested that CMS model future Advanced APMs after the most successful MA models.

    Response: We will evaluate each payment arrangement according to the Other Payer Advanced APM criteria. Again, we encourage diversity in Other Payer Advanced APMs and believe this rule provides flexibility in the design of innovative models.

    Comment: Some commenters requested information about how FFS payment adjustments under the Quality Payment Program, including MIPS adjustments and APM Incentive Payments, will impact the benchmark rates that are used to determine our monthly payments to MA plans. These commenters stated that we should address the effects of these adjustments on MA benchmarks before the release of our CY 2019 Advance Notice. Commenters also stated that CMS should have addressed the impacts of these FFS adjustments in the proposed rule's regulatory impact analysis.

    Response: CY 2019 is the first year that the MIPS payment adjustments will impact FFS payments and that the APM Incentive Payment will be made to QPs. We believe that it is more appropriate that we address our methodology for calculating CY 2019 MA benchmarks through the annual Advance Notice and Rate Announcement process, as set forth in section 1853(b) of the Act. Starting in CY 2017, the annual release of our Advance Notice will be followed by a comment period of no fewer than 30 days, which will provide MA organizations with sufficient opportunity to raise any concerns regarding proposed changes to our benchmark calculation methodology. MA rates are set through a separate process, and payment policies for CY 2019 will be addressed in the Advance Notice and Rate Announcement for that program.

    c. Calculation of All-Payer Combination Option Threshold Score

    (1) Use of Methods

    We may apply one or both of two different methods—using payment amounts or patient counts—to arrive at an eligible clinician's Threshold Score. We would compare the Threshold Score against the relevant QP Threshold or Partial QP Threshold to determine an eligible clinician's QP status for the year.

    We proposed that we would calculate Threshold Scores for eligible clinicians in an Advanced APM Entity under both the payment amount and patient count methods for each QP Performance Period. We also proposed that we would assign QP status using the more advantageous of the Advanced APM Entity's two scores.

    We believe that both the payment amount and patient count methods should be considered in order to produce Threshold Scores. As the two calculations differ, there may be cases in which Threshold Scores vary enough that different QP determinations could result depending on which is used. In such an event, we do not believe that prioritizing the Threshold Score using one calculation over the other would yield an appropriate, non-arbitrary result. By using the greater of the Threshold Scores achieved, we hope to promote simplicity in QP determinations and to maximize the number of eligible clinicians that attain QP status each year. We solicited comment on the use of the payment and patient count methods for the All-Payer Combination Option.
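    As an illustration only (this sketch is not part of the regulatory text), the comparison of the two Threshold Scores against their respective thresholds could be expressed as follows. The function and variable names are hypothetical, the threshold values are taken from the examples later in this section for the 2021 payment year, and the separate minimum Medicare threshold step is omitted for brevity.

        # Minimal illustrative sketch; names are hypothetical and the Medicare minimum
        # threshold step is omitted. Threshold values follow the examples in this section.
        def qp_status(payment_score, patient_score,
                      qp_thresholds=(50, 35), partial_qp_thresholds=(40, 25)):
            """Assign QP status using the more advantageous of the payment amount
            and patient count Threshold Scores (both expressed as percentages)."""
            if payment_score >= qp_thresholds[0] or patient_score >= qp_thresholds[1]:
                return "QP"
            if payment_score >= partial_qp_thresholds[0] or patient_score >= partial_qp_thresholds[1]:
                return "Partial QP"
            return "Neither"

        # An entity scoring 43 percent by payments and 30 percent by patients:
        print(qp_status(43, 30))  # "Partial QP"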

    The following is a summary of the comments we received regarding our proposal to calculate the Threshold Score for eligible clinicians participating in Other Payer Advanced APMs by either the payment amount or patient count method.

    Comment: One commenter supported CMS' proposal to include and calculate both the revenue and patient count methodologies for QP determination, and use the most advantageous calculation.

    Response: We thank the commenter for supporting our proposal. We are finalizing our policy as proposed, and note that the policies for calculating Threshold Scores under the All-Payer Combination Option mirror those for the Medicare Option. Both options use similarly defined numerators and denominators, and both apply the more advantageous result of the two methods for calculating the Threshold Score for purposes of QP determination. Section II.F.6. of this final rule with comment period contains a fuller discussion of the Medicare Option policy.

    We are finalizing our proposal to calculate Threshold Scores for eligible clinicians in an APM Entity under both the payment amount and patient count methods for each QP Performance Period. We will make QP determinations using the more advantageous of the APM Entity's two scores.

    (2) Excluded Payments

    Section 1833(z)(2)(B)(ii)(I) and (C)(ii)(I) of the Act specifies that the calculation under the All-Payer Combination Option is based on the sum of both payments for Medicare Part B covered professional services and, with certain exceptions, all other payments, regardless of payer. We proposed that we would include such “all other” payments in the numerator and the denominator, and we would exclude payments as specified in the statute. We also proposed to exclude patients associated with these excluded payments from the patient count method.

    The statute excludes payments made:

    • By the Secretary of Defense for the costs of Department of Defense health care programs;

    • By the Secretary of Veterans Affairs for the costs of Department of Veterans Affairs health care programs; and

    • Under Title XIX in a state in which no Medicaid Medical Home Model or APM is available under the state plan.

    We proposed that title XIX payments or patients would be excluded in the numerator and denominator for the QP determination unless: (1) A state has at least one Medicaid Medical Home Model or Medicaid APM in operation that is determined to be an Other Payer Advanced APM; and (2) the relevant Advanced APM Entity is eligible to participate in at least one of such Other Payer Advanced APMs during the QP Performance Period, regardless of whether the APM Entity actually participates in such Other Payer Advanced APMs. This would apply to both the payment amount and patient count methods. We believe this Medicaid exclusion avoids penalizing eligible clinicians who do not have the possibility of participation in an Other Payer Advanced APM under Medicaid. We believe that failing to exclude such payments and/or patients would unduly disadvantage potential QPs by inflating denominators based on circumstances beyond their control. For example, if a state's Medicaid Medical Home Model is determined to be an Other Payer Advanced APM and is operated on a statewide basis, Medicaid payments would be included in the denominator for all eligible clinicians in that state assessed under the All-Payer Combination Option. However, if the state operates such an Other Payer Advanced APM at a sub-state level, and eligible clinicians who do not practice in the geographic area where the Medicaid Medical Home Model is available are not eligible to participate, Medicaid payments would not be included in such eligible clinicians' QP calculations. We plan to more fully develop the approach to identify Medicaid Medical Home Models and Medicaid APMs, as well as eligible clinicians participating in them, through subsequent rulemaking.
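    For illustration only, the proposed Title XIX exclusion test described above could be summarized as in the following sketch; it is not part of the regulatory text, and the names used are hypothetical.

        # Illustrative sketch of the proposed Medicaid (Title XIX) exclusion test; hypothetical names.
        def include_medicaid_amounts(state_has_medicaid_other_payer_advanced_apm,
                                     entity_eligible_to_participate):
            """Medicaid payments and patients count toward the QP calculation only if the state
            operates at least one Medicaid Medical Home Model or Medicaid APM that is an Other
            Payer Advanced APM and the Advanced APM Entity is eligible to participate in at
            least one of them; actual participation is not required."""
            return state_has_medicaid_other_payer_advanced_apm and entity_eligible_to_participate

        # Clinicians outside the sub-state area where such a model operates would be excluded:
        print(include_medicaid_amounts(True, False))  # False -> exclude Medicaid amounts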

    We solicited comment on our proposals to determine payment exclusions and on how we could account for eligible clinician participation in Medicaid APM or Medicaid Medical Home Models, such as pilots where participation may be intentionally limited by the state.

    Comment: One commenter requested that CMS clarify this proposal. Another commenter requested clarification of what “all other payments regardless of payer” means, which establishes the basis for determining the payments in the denominator of the threshold calculations.

    Response: “All other payments regardless of payer,” described previously in this final rule, means the aggregate of all payments from all payers, except those explicitly excluded by statute.

    After considering the public comments, we are finalizing our proposal for determining exclusions of payments in the numerator and denominator for the QP determination without changes. The calculation under the All-Payer Combination Option is based on the sum of both payments for Medicare Part B covered professional services and, with certain exceptions, all other payments, regardless of payer. We will include such “all other” payments in the numerator and the denominator and exclude payments as specified in the statute. We will also exclude patients associated with these excluded payments from the patient count method, as proposed.

    The payments excluded are those made:

    • By the Secretary of Defense for the costs of Department of Defense health care programs;

    • By the Secretary of Veterans Affairs for the costs of Department of Veterans Affairs health care programs; and

    • Under Title XIX in a state in which no Medicaid Medical Home Model or APM is available under the state plan.

    (3) Payment Amount Method

    We proposed to calculate an All-Payer Combination Option Threshold Score for eligible clinicians in an Advanced APM Entity using the proposed payment amount method, which would then be compared to the relevant QP Payment Amount Threshold and Partial QP Payment Amount Threshold to make a QP determination.

    (a) Threshold Score Calculation

    (i) In General

    We proposed to calculate the All-Payer Threshold Score for eligible clinicians in an Advanced APM Entity (or an eligible clinician that participates in multiple APMs, an exception discussed in the proposed rule) by dividing the numerator value described under section II.F.7.c.(3)(a)(ii) of this final rule with comment period by the denominator value described under section II.F.7.c.(3)(a)(iii) of this final rule with comment period. This calculation would result in a percent value Threshold Score that we would compare to the QP Payment Amount Threshold and the Partial QP Payment Amount Threshold to determine the QP status of the eligible clinicians for the payment year. The calculations occur in two steps because there is a Medicare QP Threshold and an All-Payer QP Threshold.

    (ii) Numerator

    We proposed that the numerator would be the aggregate of all payments from all other payers, except those excluded under sections 1833(z)(2)(B)(ii)(I) and (C)(ii)(I) of the Act, to the Advanced APM Entity's eligible clinicians—or the eligible clinician in the event of an individual eligible clinician assessment—under the terms of all Other Payer Advanced APMs during the QP Performance Period. The amount for Medicare Part B covered professional services will be calculated under the All-Payer Combination Option in the same manner as under the Medicare Option.

    (iii) Denominator

    We proposed that the denominator would be the aggregate of all payments from all payers, except those excluded under sections 1833(z)(2)(B)(ii)(I) and (C)(ii)(I) of the Act, to the Advanced APM Entity's eligible clinicians—or the eligible clinician in the event of an individual eligible clinician assessment—during the QP Performance Period. The portion of this amount that relates to Medicare Part B covered professional services will be calculated under the All-Payer Combination Option in the same manner as it is for the Medicare Option.
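    For illustration only, the payment amount calculation described above can be sketched as follows; the sketch is not part of the regulatory text, the names are hypothetical, and statutory exclusions are assumed to have already been applied to the inputs.

        # Illustrative sketch; hypothetical names; excluded payments assumed already removed.
        def payment_amount_threshold_score(other_payer_advanced_apm_payments,
                                           medicare_option_numerator_payments,
                                           all_non_excluded_payments):
            """Numerator: payments under the terms of all Other Payer Advanced APMs plus the
            Medicare amount calculated as under the Medicare Option.
            Denominator: all payments from all non-excluded payers during the QP Performance Period.
            Returns a percentage."""
            numerator = other_payer_advanced_apm_payments + medicare_option_numerator_payments
            return 100.0 * numerator / all_non_excluded_payments

        # Using the amounts in Table 38 below: (300,000 + 80,000 + 300,000) / 1,600,000
        print(payment_amount_threshold_score(380000, 300000, 1600000))  # 42.5 (displayed as 43 in Table 38)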

    (b) Examples of Payment Amount Threshold Score Calculation

    In this example, an Advanced APM Entity participates in a Medicare ACO initiative, a commercial ACO arrangement, and a Medicaid APM. Each of the APMs is determined to be an Advanced APM. In the QP Performance Period for payment year 2021 (proposed in the proposed rule to be 2019), the Advanced APM Entity receives the following payments:

    Table 38—All-Payer Combination Option Example 1

    Payer         Payments through ACO    Total payments from applicable payer    Threshold score (%)
    Medicare*     300,000                 1,000,000                               30
    Commercial    300,000                 500,000                                 60
    Medicaid      80,000                  100,000                                 80
    Total         680,000                 1,600,000                               43

    * For Medicare Part B payments, the amount used for the All-Payer Combination Option will be the same as the amount tied to attribution-eligible beneficiaries used in the denominator of the calculation under the Medicare Option.

    In Table 38, the Advanced APM Entity meets the minimum Medicare threshold (30% > 25%) to be considered under the All-Payer Combination Option. However, it falls short of the QP Payment Amount Threshold (43% < 50%). In this case, the Advanced APM Entity would meet the Partial QP Payment Amount Threshold (43% > 40%).

    Another Advanced APM Entity in the same year receives the following payments:

    Table 39—All-Payer Combination Option Example 2

    Payer         Payments through ACO    Total payments from applicable payer    Threshold score (%)
    Medicare*     200,000                 500,000                                 40
    Commercial    400,000                 500,000                                 80
    Medicaid      100,000                 150,000                                 67
    Total         700,000                 1,150,000                               61

    * For Medicare Part B payments, the amount used for the All-Payer Combination Option will be the same as the amount tied to attribution-eligible beneficiaries used in the denominator of the calculation under the Medicare Option.

    In Table 39, the Advanced APM Entity meets the minimum Medicare threshold (40% > 25%) to be considered under the All-Payer Combination Option. It also exceeds the QP Payment Amount Threshold (61% > 50%). In this case, the eligible clinicians in the Advanced APM Entity would become QPs.

    We solicited comment on the payment amount method described in this proposal and any potential alternative approaches.

    The following is a summary of the comments we received regarding our payment amount method proposal.

    Comment: One commenter supported our proposal for using the payment amount method to calculate the All-Payer Combination Option Threshold Score. Another commenter supported the definition of the numerator because if a beneficiary is attributed to an ACO and sees a clinician outside that ACO, payments made to the non-ACO clinician will not count towards this numerator, even if the ACO is in an Other Payer Advanced APM.

    An additional commenter requested more details around how the data for the Threshold Score numerator and denominator under the All-Payer Combination Option would be collected and calculated. One commenter requested clarification as to whether 100 percent of a clinician's qualifying risk-based payments for Medicaid services from an Other Payer Advanced APM would be eligible to count towards the All Payer Combination Option.

    Response: We appreciate the comments. The collection and submission of data is described in section II.F.7.d. of this final rule with comment period, and we seek further comments on that topic. All of the payments an eligible clinician receives through an Other Payer Advanced APM, except for those excluded as detailed above, will count in the numerator of the Threshold Score.

    We are finalizing our proposal to calculate the All-Payer Combination Option Threshold Score for eligible clinicians in an Advanced APM Entity (or an eligible clinician that participates in multiple APMs) by dividing the numerator by the denominator value, as described above. This calculation will result in a percent value Threshold Score that we would compare to the QP Payment Amount Threshold and the Partial QP Payment Amount Threshold to determine the QP status of the eligible clinicians for the payment year. The calculations occur in two steps because there is a Medicare QP Threshold and an All-Payer QP Threshold. We are finalizing our proposal that the numerator is the aggregate of all payments from all other payers, except those excluded under sections 1833(z)(2)(B)(ii)(I) and (C)(ii)(I) of the Act, to the Advanced APM Entity's eligible clinicians—or the eligible clinician in the event of an individual eligible clinician assessment—under the terms of all Other Payer Advanced APMs during the QP Performance Period.

    We are finalizing our proposal that the denominator is the aggregate of all payments from all payers, except those excluded under sections 1833(z)(2)(B)(ii)(I) and (C)(ii)(I) of the Act, to the Advanced APM Entity's eligible clinicians—or the eligible clinician in the event of an individual eligible clinician assessment—during the QP Performance Period.

    (4) Patient Count Method

    We proposed to calculate a Threshold Score for the eligible clinician group in an Advanced APM Entity—or eligible clinician in the exception situations under sections II.F.5 and II.F.6 of the proposed rule—using the patient count method, which would then be compared against the relevant QP Patient Count Threshold and Partial QP Patient Count Threshold to determine the QP status of an eligible clinician for the year based on the higher of the two values.

    (a) Threshold Score Calculation

    (i) In General

    We proposed that the Threshold Score calculation for the patient count method would include patients for whom the eligible clinicians in an Advanced APM Entity furnish services and receive payment under the terms of an Other Payer Advanced APM, with certain exceptions as outlined in the previous section. This calculation would result in a percent value Threshold Score that CMS would compare to the QP Patient Count Threshold and the Partial QP Patient Count Threshold to determine the eligible clinicians' QP status for the payment year. The calculations occur in two steps as there is a Medicare Threshold requirement and an All-Payer Threshold requirement.

    (ii) Unique Patients

    First, we proposed that, like the Medicare Option, the patient count method under the All-Payer Combination Option would only count unique patients, with multiple eligible clinicians able to count the same patient. Similarly, we proposed to count a single patient, where appropriate, in the numerator and denominator for multiple different Advanced APM Entities when counting the number of beneficiaries under this method, as discussed in section II.F.6 of the proposed rule. We also proposed that we would not count any patient more than once for any single Advanced APM Entity. In other words, for each Advanced APM Entity, we would count each unique patient one time in the numerator, and one time in the denominator.

    We believe that counting patients this way maintains integrity by preventing double counting of patients within an Advanced APM Entity while recognizing the reality that patients often have relationships with eligible clinicians in different organizations. We expect to avoid distorting patient counts for such overlap situations, especially in Advanced APM Entity-dense markets.
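    For illustration only, the unique patient counting rule described above can be sketched as follows; the sketch is not part of the regulatory text and the identifiers are hypothetical.

        # Illustrative sketch; hypothetical identifiers. Within a single Advanced APM Entity,
        # each patient is counted once in the numerator and once in the denominator, even if
        # several of the entity's eligible clinicians furnished services to that patient. The
        # same patient may still be counted separately by a different Advanced APM Entity.
        def unique_patient_count(patient_ids):
            """Collapse duplicate patient identifiers for a single Advanced APM Entity."""
            return len(set(patient_ids))

        print(unique_patient_count(["P1", "P2", "P2", "P3"]))  # 3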

    We solicited comment on our proposal for counting unique patients for the patient count method.

    (iii) Numerator

    We proposed that the numerator would be the number of unique patients to whom eligible clinicians in the Advanced APM Entity furnish services that are included in the measures of aggregate expenditures used under the terms of all of their Other Payer Advanced APMs during the QP Performance Period, plus the patient count numerator for Advanced APMs. A patient would count in the non-Medicare portion of this numerator only if, as stated in the proposed rule, the eligible clinician furnishes services to the patient and receives payment(s) for furnishing those services under the terms of an Other Payer Advanced APM.

    (iv) Denominator

    We proposed that the denominator would be the number of unique patients to whom eligible clinicians in the Advanced APM Entity furnish services under all non-excluded payers during the QP Performance Period.
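    For illustration only, the patient count calculation described above can be sketched as follows; the sketch is not part of the regulatory text, the names are hypothetical, and statutory exclusions are assumed to have already been applied to the inputs.

        # Illustrative sketch; hypothetical names; excluded patients assumed already removed.
        def patient_count_threshold_score(other_payer_advanced_apm_patients,
                                          medicare_option_numerator_patients,
                                          all_non_excluded_patients):
            """Numerator: unique patients furnished services under the terms of all Other Payer
            Advanced APMs, plus the patient count numerator for Advanced APMs under the
            Medicare Option. Denominator: unique patients furnished services under all
            non-excluded payers during the QP Performance Period. Returns a percentage."""
            numerator = other_payer_advanced_apm_patients + medicare_option_numerator_patients
            return 100.0 * numerator / all_non_excluded_patients

        # Using the counts in Table 40 below: (1,000 + 800 + 3,000) / 16,000
        print(patient_count_threshold_score(1800, 3000, 16000))  # 30.0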

    (b) Examples of Patient Count Threshold Score Calculation

    In the QP Performance Period for payment year 2021, an Advanced APM Entity experienced the following patient counts:

    Table 40—All-Payer Combination Option Example 3

    Payer         Patients through ACO    Total patients from payer    Threshold score (%)
    Medicare*     3,000                   10,000                       30
    Commercial    1,000                   5,000                        20
    Medicaid      800                     1,000                        80
    Total         4,800                   16,000                       30

    * For Medicare Part B patients, the amount used for the All-Payer Combination Option will be the same as the number of attribution-eligible beneficiaries used in the denominator of the calculation under the Medicare Option.

    In Table 40, the Advanced APM Entity meets the minimum Medicare threshold (30% > 20%) to be considered under the All-Payer Combination Option. However, it falls short of the QP Patient Count Threshold (30% < 35%). In this case, the Advanced APM Entity would meet the Partial QP Patient Count Threshold (30% > 25%).

    Another Advanced APM Entity in the same year experienced the following patient counts:

    Table 41—All-Payer Combination Option Example 4

    Payer         Patients through ACO    Total patients from payer    Threshold score (%)
    Medicare*     2,000                   5,000                        40
    Commercial    4,000                   5,000                        80
    Medicaid      1,000                   1,500                        67
    Total         7,000                   11,500                       61

    * For Medicare Part B patients, the amount used for the All-Payer Combination Option will be the same as the number of attribution-eligible beneficiaries used in the denominator of the calculation under the Medicare Option.

    In Table 41, the Advanced APM Entity meets the minimum Medicare threshold (40% > 20%) to be considered under the All-Payer Combination Option. It also exceeds the minimum QP Patient Count Threshold (61% > 35%). In this case, the eligible clinicians in the Advanced APM Entity would become QPs.

    We solicited comment on the patient count method described above and any potential alternative approaches.

    We received no comments in response to our proposed patient count method. Section II.F.6.(c) of this final rule with comment period has a detailed discussion of comments on this policy as it pertains to the Medicare Option.

    We are finalizing our proposal to calculate the All-Payer Combination Option Threshold Score for eligible clinicians in an Advanced APM Entity (or an eligible clinician that participates in multiple APMs) by dividing the numerator by the denominator value, as described above. This calculation will result in a percent value Threshold Score that we would compare to the QP Patient Count Threshold and the Partial QP Patient Count Threshold to determine the QP status of the eligible clinicians for the payment year. The calculations occur in two steps because there is a Medicare QP threshold and an All-Payer QP threshold. We are finalizing our proposal that the numerator is the number of unique patients to whom eligible clinicians in the Advanced APM Entity furnish services that are included in the measures of aggregate expenditures used under the terms of all of their Other Payer Advanced APMs during the QP Performance Period, plus the patient count numerator for Advanced APMs.

    We are finalizing our proposal that the denominator is the number of unique patients to whom eligible clinicians in the Advanced APM Entity furnish services under all non-excluded payers during the QP Performance Period.

    d. Submission of Information for Assessment Under the All-Payer Combination Threshold Option

    Under sections 1833(z)(2)(B)(ii)(III) and (C)(ii)(III) of the Act, an eligible clinician can only become a QP using the All-Payer Combination Option by providing the Secretary such information as is necessary for the Secretary to determine whether a payment arrangement is an Other Payer Advanced APM and to determine the eligible clinician's Threshold Score.

    We have the necessary data to make QP determinations and APM Incentive Payments for Advanced APMs because they are administered within the Medicare program. Because Other Payer Advanced APMs are administered outside of the Medicare program, CMS needs to collect analogous data from the specific sources that have that data in order to make QP determinations and APM Incentive Payments to those participating in Other Payer Advanced APMs. In order for CMS to perform QP determinations using the All-Payer Combination Option, submissions must include specific payment and patient numbers for each payer from whom the eligible clinician has received payments during the QP Performance Period.

    We proposed that APM Entities or individual eligible clinicians must submit by a date and in a manner determined by us: (1) Payment arrangement information necessary to assess whether each payment arrangement is an Other Payer Advanced APM, including information on financial risk arrangements, use of certified EHR technology, and payment based on quality measures; (2) for each payment arrangement, the amounts of revenues for services furnished through the arrangement, the total revenues from the payer, and the numbers of patients furnished any service through the arrangement (that is, patients for whom the eligible clinician is at risk if actual expenditures exceed expected expenditures); and (3) the total number of patients furnished any service through the payer.
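    For illustration only, the categories of information described above could be grouped as in the following hypothetical structure; CMS has not specified a submission format, and every field name shown is an assumption rather than a defined data element.

        # Hypothetical grouping of the categories of information described above; not a CMS format.
        payment_arrangement_submission = {
            "payment_arrangement": {
                "financial_risk_terms": "...",             # basis for assessing the financial risk criterion
                "uses_certified_ehr_technology": True,     # CEHRT criterion
                "quality_measures_tied_to_payment": ["..."],
            },
            "amounts": {
                "payments_through_arrangement": 300000,    # services furnished through the arrangement
                "total_payments_from_payer": 500000,
            },
            "patients": {
                "patients_through_arrangement": 1000,      # patients for whom the clinician is at risk
                "total_patients_from_payer": 5000,
            },
        }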

    If we do not receive sufficient information to complete our evaluation of all other payer arrangements to perform the QP threshold calculation, we would not evaluate the eligible clinicians under the All-Payer Combination Option. If sufficient information is submitted, we would then assess the characteristics of the other payer arrangement to determine if it is an Other Payer Advanced APM and would notify the APM Entities and/or eligible clinicians of the Other Payer Advanced APM determinations based on their submissions. Because we proposed that an Other Payer Advanced APM is required to have an outcome measure, we proposed that if an Other Payer Advanced APM has no outcome measure, the Advanced APM Entity must attest that there is no applicable outcome measure on the MIPS list. We intend to establish specific requirements regarding the timing and manner of submission of such information through future rulemaking.

    We proposed that each payer attest to the accuracy of all submitted information including the reported payment and patient data. We proposed that if a payer does not attest to the accuracy of the reported payment and patient data, these data would not be assessed under the All-Payer Combination Option. However, we recognize that such a requirement leaves eligible clinicians dependent on a payer over which they may have limited control. We therefore solicited comment on alternatives to requiring payer attestation, such as addressing the scope and intensity of audits to verify the submitted data. For APM Entities and eligible clinicians participating in Medicaid, we would initiate a review and determine in advance of the QP Performance Period the existence of Medicaid Medical Home Models and Medicaid APMs based on information obtained from state Medicaid agencies and other authorities, such as professional organizations or research entities.

    We solicited comment from stakeholders on the specific types of payment arrangement information that would be necessary to assess whether a payment arrangement is an Other Payer Advanced APM, and the format in which we could reasonably expect to receive this information. We solicited comment on the level of detail that we should require, and whether certain pieces of information would be most easily submitted directly from individual eligible clinicians or from an APM Entity. We also solicited comment on the timing of when we could expect to receive this information from individual eligible clinicians and APM Entities for a performance year. In addition, we solicited comment on the proposed requirement that an Other Payer Advanced APM must have an outcome measure.

    We solicited comment on the possibility of receiving information on Other Payer Advanced APMs and their participants directly from other payers in order to minimize reporting burden for APM Entities and eligible clinicians. We solicited comment on the extent to which collecting voluntary submissions of data from other payers could reduce burden and increase program integrity through more accurate determinations of QP status based on payment or patient threshold calculations for Other Payer Advanced APMs. Likewise, we solicited comment on the extent to which such data collection is operationally feasible or could infringe upon other payers' interests in maintaining the confidentiality of their business practices.

    In addition, we proposed to make early Other Payer Advanced APM determinations on other payer arrangements if sufficient information is submitted at least 60 days before the beginning of a QP Performance Period. This would allow us to offer eligible clinicians advance notice of their prospects of achieving QP status in the event they are assessed under the All-Payer Combination Option. This early determination would be considered final for the QP Performance Period based on the payment arrangement information submitted. If new information is submitted based on a change in the payment arrangement during the QP Performance Period, the initial determination could be subject to review and revision. We also proposed that, to the extent permitted by federal law, we would maintain confidentiality of certain information that the APM Entities and/or eligible clinicians submit regarding Other Payer Advanced APM status to avoid dissemination of potentially sensitive contractual information or trade secrets. We proposed that, unlike our proposal for Advanced APM determinations, the Other Payer Advanced APM determinations would be made available directly to participating APM Entities and eligible clinicians rather than through public notice, and we would explain how and within what timeframes such notifications will occur in subregulatory guidance. We may consider publicly releasing information on Other Payer Advanced APMs on the CMS Web site with general and/or aggregate information on the payers involved and the scopes of such agreements.

    We solicited comment on the proposed timing and method of feedback to APM Entities and eligible clinicians regarding the status of Other Payer Advanced APMs for which they have submitted information and on the proposed early determination process and the ability of APM Entities and eligible clinicians to submit sufficient information prior to the beginning of a QP Performance Period. We also solicited comment on the types of information that contain potentially sensitive information.

    The information submitted to determine whether an eligible clinician is a QP under the All-Payer Combination Option may be subject to audit, and eligible clinicians and APM Entities will be required to maintain copies of any supporting documentation. If an audit reveals a material discrepancy in the information submitted to us, and such discrepancy affected the eligible clinician's QP status, the APM Incentive Payment may be recouped. Providing false information may reflect a false claim subject to investigation and prosecution. We may provide further details on the audit and recoupment process under the All-Payer Combination Option in future rulemaking.

    The following is a summary of the comments we received regarding our proposal to require APM Entities or eligible clinicians to submit information regarding their payment arrangements in order to be assessed under the All-Payer Combination Option.

    Comment: We received several comments on the requirements for submission of information. Many commenters suggested that CMS be mindful of the need to limit the potential burden on clinicians and APM Entities of collecting information for calculations under the All-Payer Combination Option. Many commenters expressed concern that the validation process will be burdensome for both eligible clinicians and payers and requested that CMS keep an open dialogue with all involved parties to design a process that is administratively feasible. Several commenters requested that the requirements for data requests be very specific and limited to protect sensitive and proprietary information, and that the process have safeguards in place to protect data.

    Several commenters expressed concern about, or opposition to, CMS requiring APM Entities and eligible clinicians to submit information for CMS to assess whether other payer arrangements meet the Other Payer Advanced APM criteria. Some of these commenters stated that payers should be required to submit the information because clinicians may not have the necessary information readily available. One of these commenters stated that, in order to ensure that plans and clinicians can continue to focus on delivering high-quality care, CMS should minimize the reporting required under the All-Payer Combination Option. Another commenter expressed concern that eligible clinicians could be reluctant to share their non-Medicare payment information with CMS. One commenter opposed CMS requiring eligible clinicians to submit the entirety of a contract with another payer, particularly sections including negotiated fee schedules or payment rates.

    Response: We appreciate the comments. We understand that both eligible clinicians and payers are concerned with which parties will be responsible for the submission of information, the timing and method of submission, and who will be held accountable for the accuracy of the information submitted. We intend to implement a process that requires reporting the least amount of information needed to determine participation in Other Payer Advanced APMs and calculate Threshold Scores while ensuring the integrity of the program. Because these provisions of the statute will not be implemented until the 2019 QP Performance Period, we are seeking additional comments on these information submission requirements.

    Comment: Some commenters opposed requiring payers to verify or attest to the data being submitted by APM Entities or eligible clinicians. These commenters expressed that the task would be burdensome and that APM Entities or eligible clinicians should be responsible for all reporting requirements. One commenter stated that some private payers have no relationship with CMS and that the attestation would be a burden to establish. Several commenters believe the proposed rule provided insufficient detail regarding payer responsibility and recommended that CMS clearly explain payer responsibilities and expectations with regard to attestation of payment arrangements with physicians. One commenter stated that, as currently written, this provision of the proposed rule could include disclosure of proprietary contracting information that CMS does not have authority to collect, which may violate the contractual limitations between the payer and clinician. The same commenter said that without appropriate guidance setting parameters around this requirement, operational implementation is likely to be overly burdensome. One commenter requested that CMS strike this requirement from the final rule and instead include a criteria checklist in the attestation. One commenter recommended that CMS minimize administrative burdens for eligible clinicians to demonstrate their participation with these payers and looked forward to submitting more detailed comments when CMS proposes more specifics for how data will be handled and calculations will be made under the All-Payer Combination Option.

    Response: We appreciate the comments. We believe payer involvement in attesting to the accuracy of data submitted is essential to the integrity of the program. We do not believe the process poses an unreasonable burden, even for private payers who have no relationship with CMS. We intend to put in place guidelines that will ensure proprietary information is not disclosed. We seek additional comments on the process for submitting information.

    Comment: Several commenters recommended that CMS have a conversation with multiple stakeholders regarding how information will be submitted to CMS. Several commenters also suggested that CMS establish the detailed reporting requirements through a formal rulemaking process with an opportunity for interested parties to provide feedback on the requirements. Two commenters suggested that, rather than attend to the details through subregulatory guidance, CMS should include a thorough proposal in the CY 2018 PFS.

    One commenter recommended that CMS should consider the same approach for Other Payer Advanced APMs that is used for Medicare Advantage plans. However, the commenter suggested that if CMS believes a different standard should apply to MA plans because of their contractual relationship with CMS, then CMS should apply the reasonableness standard that is enforced through the Medicare Advantage program in which a health plan would acknowledge, to the best of its knowledge, information, and belief, that the reported payment and patient counts were accurate. The commenter recommended that CMS apply this standard in a way that minimizes the reporting burden on MA plans. Another commenter encouraged CMS to consider establishing a data submission process that would allow MA plans to submit data on their arrangements in lieu of attestation. One commenter requested more detailed requirements for MA contracts and an explanation for how ACOs can attest to participating in such contracts.

    Another commenter recommended that CMS consider expanding the third-party data partners to include state all-payer claims databases (APCDs) as data submitters for those payment arrangements electing to utilize the state aggregator for reporting. This option would also have the potential to enhance the analytic opportunities for the APM Entity to work with the APCD to implement analytic tools and data products that benefit the patient population and the APM Entity beyond Medicare reporting requirements.

    Response: We appreciate the comments and suggestions. We do intend to consult further with stakeholders about the process for submitting information. We will consider existing reporting rules and attestations with payers, such as MA plans, and adopt similar ones where appropriate. We intend to use future rulemaking to potentially make changes to our approach.

    After considering the public comments, we are finalizing our proposed information submission requirements with no changes, but seek further comments on the process for submitting information. APM Entities or individual eligible clinicians must submit by a date and in a manner determined by CMS: (1) Payment arrangement information necessary to assess whether each payment arrangement is an Other Payer Advanced APM, including information on financial risk arrangements, use of certified EHR technology, and payment tied to quality measures; (2) for each payment arrangement, the amounts of payments for services furnished through the arrangement, the total payments from the payer, and the numbers of patients furnished any service through the arrangement (that is, patients for whom the eligible clinician is at risk if actual expenditures exceed expected expenditures); and (3) the total numbers of patients furnished any service through the payer.

    We are also finalizing our proposal that each payer attest to the accuracy of all submitted information, including the reported payment and patient data. As proposed, if a payer does not attest to the accuracy of the reported payment and patient data, these data will not be assessed under the All-Payer Combination Option. We note that while we cannot require other payers to submit information, we could only be confident in the accuracy of information eligible clinicians submitted to us—and use such information in the All-Payer Combination Option—if other payers attest to the accuracy of that information.

    8. APM Incentive Payment

    The APM Incentive Payment is specified under section 1833(z)(1) of the Act.

    a. Amount of the APM Incentive Payment

    This section describes our proposal for calculating the amount of the APM Incentive Payment and accounts for the specific scenarios outlined under sections 1833(z)(1)(A)(i) and 1833(z)(1)(A)(ii) of the Act. This section also describes the process by which we proposed to disburse these APM Incentive Payments to QPs.

    In accordance with section 1833(z)(1)(A) of the Act, we would make an APM Incentive Payment for a year to eligible clinicians that achieve QP status for the year during years 2019 through 2024. In accordance with the statute, we proposed that this APM Incentive Payment must be equal to 5 percent of the estimated aggregate amounts paid for Medicare Part B covered professional services furnished by the eligible clinician during the preceding year across all billing TINs associated with the QP's NPI.
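    For illustration only, the statutory calculation described above can be sketched as follows; the sketch is not part of the regulatory text and the names and dollar amounts are hypothetical.

        # Illustrative sketch; hypothetical names and amounts.
        def apm_incentive_payment(part_b_payments_by_tin):
            """5 percent of the estimated aggregate payments for Medicare Part B covered
            professional services furnished by the QP during the preceding year, summed
            across all billing TINs associated with the QP's NPI."""
            return 0.05 * sum(part_b_payments_by_tin.values())

        # A QP who billed 120,000 under one TIN and 80,000 under another during the base period:
        print(apm_incentive_payment({"TIN-A": 120000, "TIN-B": 80000}))  # 10000.0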

    The following is a summary of the comments we received in response to our proposals regarding the amount of the APM Incentive Payment.

    Comment: One commenter recommended that CMS delay the expiration of the APM Incentive Payment until eligible clinicians have meaningful opportunities to participate in an Advanced APM, specifically Advanced APMs that promote access to advanced illness and palliative care. The commenter noted that developing and implementing such a new Advanced APM would take time and investment.

    Response: The years for which the APM Incentive Payment is in effect are specified under section 1833(z)(1) of the Act. We do not believe we have authority to extend availability of the five percent APM Incentive Payment beyond the statutory timeframe. Additionally, we remind readers that after the APM Incentive Payments expire, QPs will continue to be excluded from MIPS reporting requirements and payment adjustments for each year that they meet the QP Thresholds. Additionally, beginning in 2026, QPs will receive a differential, higher PFS update each year.

    Comment: Several commenters suggested that CMS include payments made under MA plans when calculating the 5 percent APM Incentive Payment. We also received one comment suggesting that CMS include payments made under the FQHC PPS and the RHC AIR when calculating the 5 percent APM Incentive Payment.

    Response: We thank commenters for their suggestions regarding the inclusion of payments made under Medicare Advantage plans, the FQHC PPS, and the RHC AIR when calculating the estimated aggregate payments made to eligible clinicians. However, section 1833(z)(1) of the Act stipulates that the APM Incentive Payment be equal to 5 percent of the estimated aggregate amounts paid only for Medicare Part B covered professional services, which do not include Medicare Advantage, FQHC PPS, and RHC AIR payments.

    Comment: One commenter requested clarification on how Medicare crossover payments would be taken into consideration for calculating the APM Incentive Payment.

    Response: A Medicare crossover claim occurs when Medicare is the primary payer for a beneficiary that has supplemental insurance coverage, including Medicaid. Under the crossover payment process, after the Medicare claim is adjudicated, the Medicare Administrative Contractor automatically sends the adjudicated claim to the designated insurer for payment. Medicare payments made under this process that are for Part B covered professional services will be included in our calculations when determining QP Thresholds using the Medicare Option and will also be included in the amount of the APM Incentive Payment.

    Comment: One commenter expressed concern that a 5 percent bonus on Part B payments may not be enough of an incentive to offset taking on risk for both Parts A and B expenditures for aligned beneficiaries, as is done in ACO initiatives with downside risk. Another commenter recommended that we set the APM Incentive Payment amounts to be at least the same amount as the maximum allowable MIPS bonus with the intent of further increasing participation in Advanced APMs. One commenter supported our belief that the APM Incentive Payment is based on participation in an Advanced APM, and is not based on performance in the APM. Conversely, we also received a comment that expressed concern for paying eligible clinicians a 5 percent incentive to participate in Advanced APMs that are not supported by strong evidence of success in controlling cost or improving quality, or both. The commenter stated that APM Incentive Payments should be provided only for those eligible clinicians in APM Entities proven to improve value for beneficiaries. The commenter believes that the relationship between guaranteed additional payment and payment at risk must be substantial enough so that eligible clinicians are motivated to improve their care processes and reduce unnecessary utilization.

    Response: We note that section 1833(z)(1) of the Act stipulates that the APM Incentive Payment be equal to 5 percent of the estimated aggregate amounts paid for Medicare Part B covered professional services. Likewise, as stated in section II.F.1. of this final rule with comment period, we believe that the process for determining whether an eligible clinician is a QP and receives the APM Incentive Payment should focus on the relative degree of participation by eligible clinicians in Advanced APMs, not on their performance within the APM. The Quality Payment Program does not alter how each particular APM, or Advanced APM, measures and rewards success within its design. Rather, it rewards a substantial degree of participation in Advanced APMs.

    Comment: We received one comment in support of our proposal that the amount of APM Incentive Payment be calculated across all billing TINs associated with the QP's NPI.

    Response: We thank the commenter for their feedback and support of this proposal.

    Comment: One commenter requested feedback on how we would calculate the APM Incentive Payment if an APM Entity contract ends during the incentive payment base period.

    Response: QP Threshold Scores and APM Incentive Payments are calculated based on the data that CMS has available at the time of the calculations. We reiterate that our proposal is to calculate the APM Incentive Payment across all billing TINs during the incentive payment base period, which we are finalizing to be the calendar year preceding the payment year. As an example, using 2017 as the performance period for the 2019 payment year, we would calculate the amount of the APM Incentive Payment based on payments during CY 2018. Even if an APM Entity contract involving the QP ends during CY 2018, we would still base the amount of the APM Incentive Payment on payments across all of a QP's billing TINs during the incentive payment base period.

    After considering public comments, we are finalizing our proposals regarding the calculation of the amount of the APM Incentive Payment as required by section 1833(z)(1)(A) of the Act. Specifically, we finalize our proposals that APM Incentive Payments will be made to eligible clinicians who are determined to be QPs during years 2019 through 2024. In accordance with the statute, we are finalizing our proposal that this APM Incentive Payment must be equal to 5 percent of the estimated aggregate payment amounts for Medicare Part B covered professional services furnished by the QP during the preceding year across all billing TINs associated with the QP's NPI.

    (1) Incentive Payment Base Period

    The incentive payment base period is the range of dates that would be used to calculate the estimated aggregate payment amounts for the year preceding the QP payment year that would serve as the basis for the incentive payment. Section 1833(z)(1)(A) of the Act states that in calculating the amount that is equal to 5 percent of the estimated aggregate payment amounts for Medicare Part B covered professional services under this part for the preceding year, the payment amount for the preceding year may be an estimation for the full preceding year based on a period of such preceding year that is less than the full year. We believe this provision provides flexibility in determining the incentive payment base period. We proposed to use the full calendar year prior to the payment year as the incentive payment base period from which to calculate the estimated aggregated payment amounts.

    Using a complete calendar year of claims would allow for the most accurate representation of the covered professional services delivered by each eligible clinician, which we believe outweighs a modest potential delay in making the APM Incentive Payment. We solicited comment on our proposal to use the entire preceding calendar year as the incentive payment base period.

    The following is a summary of the comments we received in response to our proposal pertaining to the APM incentive payment base period.

    Comment: Several commenters supported our proposal to use the entire calendar year prior to the incentive payment year when calculating the amount of APM Incentive Payment.

    Response: We appreciate commenters' feedback and support of our proposal.

    Comment: One commenter suggested that we calculate the APM Incentive Payment based on the number of months an NPI was participating in an Advanced APM.

    Response: We appreciate this commenter's feedback. However, we disagree that the amount of APM Incentive Payment should only be based on the number of months of participation in an Advanced APM. Not only would this potentially conflict with our policies setting the QP Performance Period and the incentive payment base period, but the statute provides that the APM Incentive Payment is based on estimated aggregate payment amounts for the entire “preceding year.”

    After considering public comments, we are finalizing our proposal that the incentive payment base period is the full calendar year prior to the payment year.

    (2) Timeframe of Claims

    Section 1833(z)(1)(A) of the Act directs us to make the APM Incentive Payment in a lump sum on an annual basis “as soon as practicable.” We believe that, in implementing this provision, it is important to balance the desire for accuracy in the data used to calculate the APM Incentive Payment with the desire to expedite the payments so that the APM Incentive Payments are made in an appropriate and timely manner.

    We proposed to calculate the APM Incentive Payment based on data available 3 months after the end of the incentive payment base period in order to allow time for claims to be processed. For example, for the 2019 payment year, we would capture claims submitted with dates of service from January 1, 2018 through December 31, 2018 and processing dates of January 1, 2018 through March 31, 2019. We believe that 3 months of claims run-out is sufficient to conduct the APM Incentive Payment calculations in an accurate and timely manner. This methodology is consistent with the claims run-out timeframes used for reconciliation payments in several current APMs, such as the Shared Savings Program, the Pioneer and Next Generation ACO Models, and the CEC model. We solicited comment on the potential use of a completion factor. We note that several current APMs apply the 3-month claims run-out in conjunction with a completion factor. However, where a completion factor may be appropriate for payments based on claims submitted by groups of providers and suppliers that may be billing under multiple TINs, we believe that with payments based on individual eligible clinician claims, categorical variability in claims completion across types of eligible clinicians would cause inequitable results.
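    For illustration only, the claims window in the example above can be expressed as follows; the sketch is not part of the regulatory text and the claim fields are hypothetical.

        # Illustrative sketch using the 2019 payment year example: dates of service in CY 2018
        # and processing dates through the 3-month run-out ending March 31, 2019.
        from datetime import date

        def in_incentive_payment_claims_window(service_date, processing_date):
            return (date(2018, 1, 1) <= service_date <= date(2018, 12, 31)
                    and date(2018, 1, 1) <= processing_date <= date(2019, 3, 31))

        print(in_incentive_payment_claims_window(date(2018, 12, 15), date(2019, 2, 10)))  # True
        print(in_incentive_payment_claims_window(date(2018, 12, 15), date(2019, 5, 1)))   # False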

    In summary, for the incentive payment base period we proposed to use a complete calendar year of claims with 3 months of claims run-out from the end of the calendar year. We believe our proposed approach balances our goals of providing incentive payments in a reasonable timeframe while being able to account for the vast majority (on average, 99.3 percent) of claims for covered professional services. Given these parameters, we estimated that APM Incentive Payments could be made approximately 6 months after the end of the incentive payment base period, or roughly mid-way through the payment year. However, we proposed that the APM Incentive Payment would be made no later than 1 year from the end of the incentive payment base period. We did not propose to set a specific deadline mid-way during the payment year because we believe doing so could pose operational risks in the event that 6 months is impracticable in a given year for reasons that CMS cannot predict. We solicited comment on our proposed timing for when we will make the APM Incentive Payment during a payment year.

    The following is a summary of the comments we received regarding our proposals for using 3 months of claims run-out when calculating the APM Incentive Payment and for the timing of making the APM Incentive Payment.

    Comment: Several commenters supported our proposal to use 3 months of claims run-out when calculating the APM Incentive Payment. Some commenters noted that additional run-out time is unlikely to yield more meaningful data and that further lag time may dilute the impact or incentive to eligible clinicians in receiving the APM Incentive Payment.

    Response: We agree that 3 months of claims run-out will allow us to make accurate APM Incentive Payment calculations without diluting the impact or incentives to QPs receiving APM Incentive Payments.

    Comment: Several commenters opposed our proposal not to specify a date during CY 2019 by which we would make the APM Incentive Payment. Many of those commenters stated that CMS should be able to commit to making the APM Incentive Payment before the end of the payment year. The majority of commenters stated that CMS should identify a shorter and more defined period for eligible clinicians to receive their APM Incentive Payment and that a shorter, more defined period would encourage Advanced APM participation. Other commenters stated that too much lag time in making the APM Incentive Payment may negatively impact financial operations for, and subsequent-year quality performance of, entities that operate under risk-adjusted financial arrangements. One commenter suggested that we align the APM Incentive Payment with the shared savings payment from the Shared Savings Program.

    Response: We note that under section 1833(z)(1)(B) of the Act we are required to make the APM Incentive Payment “as soon as practicable.” We recognize the importance of the APM Incentive Payment and we believe that accuracy of the APM Incentive Payment is of the utmost importance under the Quality Payment Program. An accurate APM Incentive Payment will maintain and encourage participation in Advanced APMs. While we estimate that the APM Incentive Payment could be made approximately mid-way through the payment year, we reserve the right to take additional time to calculate the APM Incentive Payment if necessary.

    After considering public comments, we are finalizing our proposal to use 3 months of claims run-out when calculating the amount of the APM Incentive Payment, and we are finalizing our proposal to make the APM Incentive Payment no later than 1 year from the end of the incentive payment base period.

    (3) Treatment of Payment Adjustments in Calculating the Amount of APM Incentive Payment

    Part B covered professional services under the Medicare PFS are currently subject to several statutory provisions that are geared towards improving quality and efficiency in service delivery. Eligible clinicians are subject to payment adjustments under the Medicare EHR Incentive Program for Eligible Professionals (MU), the PQRS, and the VM. Beginning in 2019, the MIPS adjustment, as described in section II.E.5. of the final rule, will replace payment adjustments under the MU, PQRS, and VM for all MIPS eligible clinicians. These special payment adjustments directly adjust the payment amount that eligible clinicians receive under the PFS. In contrast, we consider the APM Incentive Payment to be separate from, and, as indicated under section 1833(z)(1)(A) of the Act, in addition to the amount of payments made for covered professional services under the Medicare PFS.

    We proposed to exclude the MIPS, VM, MU and PQRS payment adjustments when calculating the estimated aggregate payment amount for covered professional services upon which to base the APM Incentive Payment amount. For example, a QP who receives an upward payment adjustment under the VM during 2018 would not see that adjustment reflected in the estimated aggregate payment amount for covered professional services used to calculate his or her APM Incentive Payment in 2019. Similarly, a QP who receives a downward payment adjustment under the VM during 2018 would not see that amount reflected in the aggregate payment amount for the APM Incentive Payment.
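    For illustration only, the exclusion described above can be sketched as follows; the sketch is not part of the regulatory text and the names and amounts are hypothetical.

        # Illustrative sketch; hypothetical names and amounts. MIPS, VM, MU, and PQRS payment
        # adjustments, whether upward or downward, are simply not applied when computing the
        # base amount for the APM Incentive Payment.
        def incentive_payment_base_amount(unadjusted_part_b_payments, mips_vm_mu_pqrs_adjustments):
            del mips_vm_mu_pqrs_adjustments  # intentionally excluded from the calculation
            return sum(unadjusted_part_b_payments)

        # An upward VM adjustment of 2,000 during the base period does not increase the base:
        print(incentive_payment_base_amount([100000, 50000], [2000]))  # 150000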

    We believe this proposed policy is most consistent with the specification in section 1833(z)(1)(A) of the Act that the APM Incentive Payment is based on the estimated aggregate payment amounts for “such” covered professional services for the preceding year, which refers to the Part B covered professional services furnished by the particular eligible clinician.

    While we considered the alternative of including these performance-related payment adjustments in calculating the APM Incentive Payment, we were concerned that such a policy would create incentives that are not aligned with the intent of the APM Incentive Payment. As previously stated in our policy principles, we believe that the APM Incentive Payment is best viewed as a complementary reward for eligible clinicians that have a substantial degree of participation in the most advanced APMs, not an evaluation of their performance within the APM or in another statutorily required performance-based payment adjustment.

    We also proposed in section II.F.6.b.(1) of the proposed rule to account for payment adjustments in the QP determination process in the same manner as when calculating the amount of the APM Incentive Payment. If we were to include statutory payment adjustments when determining QP status, there could be situations where an eligible clinician could become a QP because of a positive payment adjustment amount, or conversely, there could be situations where an eligible clinician would not meet the QP threshold because of a negative payment adjustment. We believe that our proposal to not include payment adjustments when determining QP status for a year, or when calculating the amount of the APM Incentive Payment, allows us to assess all eligible clinicians on the same merits throughout the entire QP determination and when calculating the APM Incentive Payment. We do not believe the intent of the statute was to enhance or negate an eligible clinician's opportunity to become a QP in a given performance year, or to enhance or negate the amount of APM Incentive Payment a QP receives, based on factors that are extraneous to APM participation.

    We solicited comment on this proposed approach to coordinating the various PFS payment adjustments when calculating the amount of the APM Incentive Payment.

    The following is a summary of the comments we received regarding our proposals for how to treat PFS payment adjustments when calculating the amount of the APM Incentive Payment.

    Comment: Several commenters expressed support for our proposal to exclude the MIPS, VM, MU, and PQRS payment adjustments when calculating the estimated aggregate payment amount for covered professional services upon which to base the APM Incentive Payment amount. All commenters who responded to this proposal agreed with our belief that the intent of the APM Incentive Payment is not to further magnify existing and future payment adjustments.

    Response: We thank commenters for their feedback.

    After considering public comments, we are finalizing our proposal to exclude the MIPS, VM, MU and PQRS payment adjustments when calculating the estimated aggregate payment amount for covered professional services upon which to base the APM Incentive Payment amount.

    (4) Treatment of Payments for Services Paid on a Basis Other Than Fee-for-Service

    We recognize that many APMs use incentives and financial arrangements that differ from usual fee schedule payments. Section 1833(z)(1)(A)(i) of the Act requires us to establish policies for payments that are made to an Advanced APM Entity rather than directly to the QP. Section 1833(z)(1)(A)(ii) of the Act requires us to establish policies for when payment is made on a basis other than FFS. For the purposes of this rule, we place such payments into three categories: Financial risk payments, supplemental service payments, and cash flow mechanisms. We also recognize that payment methods and financial arrangements may evolve over time and those would need to be addressed in future rulemaking. We solicited comment on the proposals for accounting for risk-based payments, supplemental service payments, and cash flow mechanisms when calculating the amount of APM Incentive Payment.

    (a) Financial Risk Payments

    Financial risk payments are non-claims-based payments based on performance in an APM when an APM Entity assumes responsibility for the cost of a beneficiary's care, whether it be for an entire performance year, or for a shorter duration of time, such as over the course of a defined episode of care. We note that in the context of categorizing these types of payments as “financial risk payments,” we refer to payments that may be based on the cost of a beneficiary's care and do not necessarily limit these payments to financial arrangements that would require an APM Entity to accept downside risk. For instance, we would consider the shared savings payments made to ACOs in all tracks of the Shared Savings Program to be financial risk payments. We would also consider net payment reconciliation amounts from us to an Awardee (or vice versa) under the BPCI Initiative, and reconciliation payments from us to a participant hospital or repayment amounts from a participant hospital to us under the CJR model to be examples of financial risk payments.

    We proposed to exclude financial risk payments when calculating the estimated aggregate payment amount for covered professional services upon which to base the APM Incentive Payment amount. Financial risk payments are not for specific Medicare Part B covered professional services; rather they are for performance in an APM. Therefore, we believe their inclusion in the estimated aggregate payment amount would be inconsistent with the statutory language and our stated policy principles. In addition, the difficulty of disaggregating payments to individual QPs and the lagged timing of some financial risk payments creates significant policy and operational barriers that we do not believe are in line with our objective of making APM Incentive Payments in a timely manner.

    The following is a summary of the comments we received regarding our proposal to exclude financial risk payments when calculating the amount of the APM Incentive Payment.

    Comment: Some commenters expressed support for our proposal to exclude financial risk payments when calculating the estimated aggregate payment amount and encouraged CMS to finalize this proposal. Conversely, some commenters did not believe that CMS should exclude financial risk payments when calculating the amount of the APM Incentive Payment. These commenters noted that financial risk payments under CMS shared savings models are the only way that eligible clinicians can be compensated for services not directly paid under the fee schedule, and that these payments are actually compensation that is contingent on performance in an APM.

    Response: We note that while financial risk payments may be considered compensation for physician services, many financial risk payments are inclusive of services paid under Medicare Part A in addition to services paid under Medicare Part B. We are not currently able to distinguish the portion of financial risk payments attributable to services paid under Medicare Part A from the portion attributable to covered professional services paid under Medicare Part B. We also note that section 1833(z)(1)(A) of the Act stipulates that we are to calculate the amount of the APM Incentive Payment based on the amount that is equal to 5 percent of the estimated aggregate payment amounts for Medicare Part B covered professional services.

    Additionally, we note that many financial risk payments are calculated based on the performance of the APM Entity as a whole, not on the performance of individual eligible clinicians that participate in the APM Entity. We do not currently have a way to attribute portions of a financial risk payment made to an APM Entity to the individual eligible clinicians in that APM Entity.

    After considering public comments, we are finalizing our proposal to exclude financial risk payments when calculating the amount of APM Incentive Payment.

    (b) Supplemental Service Payments

    Supplemental service payments are Medicare Part B payments for longitudinal management of a beneficiary's health or for services that are within the scope of medical and other health services under Medicare Part B that are not separately reimbursed through the PFS. Often these are per-beneficiary per-month (PBPM) payments that are made for care management services or separately billable services that share the goal of improving quality of care overall, enabling investments in care improvement, and reducing Medicare expenditures for services that could be avoided through care coordination. For example, OCM makes a per beneficiary Monthly Enhanced Oncology Services (MEOS) payment to practices for care management and coordination during episodes of care initiated by chemotherapy treatment.

    We proposed to determine whether certain supplemental service payments are in lieu of covered services that are reimbursed under the PFS. In cases where payments are for covered services that are in lieu of services reimbursed under the PFS, those payments would be considered covered professional services and would be included in the APM Incentive Payment amounts. We proposed to include a supplemental service payment in the calculation of the APM Incentive Payment amount if it meets all of the following 4 criteria (restated in the illustrative sketch following the list):

    (1) Payment is for services that constitute physicians' services authorized under section 1832(a) of the Act and defined under section 1861(s) of the Act;

    (2) Payment is made for only Part B services under the first criterion above, that is, payment is not for a mix of Part A and Part B services;

    (3) Payment is directly attributable to services furnished to an individual beneficiary; and

    (4) Payment is directly attributable to an eligible clinician.
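
    The sketch below (Python; the record fields are hypothetical placeholders, not CMS data elements) simply restates the four criteria as a single all-or-nothing check, reflecting that a supplemental service payment is included only when every criterion is met.

        # Illustrative sketch only; the record fields are hypothetical stand-ins
        # for the four criteria, not CMS data elements.
        def qualifies_for_incentive_base(payment):
            """Return True only if a supplemental service payment meets all four
            criteria and therefore counts toward the APM Incentive Payment base."""
            return (
                payment["is_physicians_service"]            # criterion 1: sections 1832(a)/1861(s)
                and payment["part_b_only"]                  # criterion 2: no Part A/Part B mix
                and payment["attributable_to_beneficiary"]  # criterion 3: individual beneficiary
                and payment["attributable_to_clinician"]    # criterion 4: eligible clinician
            )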

    We further proposed to establish a process by which we notify the public of the supplemental service payments in all APMs and identify the supplemental service payments that meet our proposed criteria and would be included in the APM Incentive Payment calculations. Similar to our proposal to announce Advanced APM determinations, we proposed to post an initial list of supplemental service payments that would be included in our APM Incentive Payment calculations on the CMS Web site. As new APMs are announced, we would include the determination of whether a supplemental service payment related to that APM would be included in our APM Incentive Payment calculations, if applicable, in conjunction with the first public notice of the APM. We proposed to update the list of supplemental service payments that would be included in our APM Incentive Payment calculations on an ad hoc basis, but no less frequently than on an annual basis.

    We solicited comment on this proposed approach to include certain supplemental service payments when calculating the basis for the amount of the APM Incentive Payment. Specifically, we solicited comment on our proposed criteria to include supplemental service payments in the basis for the APM Incentive Payment amounts, and our proposed method for announcing which supplemental service payments would be included in the basis for the APM Incentive Payment amounts.

    The following is a summary of the comments we received regarding our proposals for how to consider certain supplemental service payments when calculating the amount of the APM Incentive Payment.

    Comment: One commenter supported our proposal to include supplemental service payments in the calculation of the APM Incentive Payment when the four proposed criteria are met.

    Other commenters stated that CMS should withdraw its proposal to make specific determinations on each supplemental services payment based on the proposed criteria. These commenters were concerned this proposal adds unnecessary complexity and uncertainty to the calculations and could provide a disincentive for physicians who want to transition away from a FFS approach.

    Response: Although we recognize that determining whether certain supplemental service payments are included in the APM Incentive Payment may add limited complexity to calculating the APM Incentive Payment, we intend to mitigate this complexity by clearly communicating the results of these determinations. Additionally, we believe that by recognizing that certain supplemental service payments are in lieu of services traditionally billed under the Medicare PFS, and by including those payments when calculating the amount of the APM Incentive Payment, we are incentivizing clinicians to transition away from FFS payment approaches that have no link to quality.

    Comment: A few commenters stated that CMS should consider ACO shared savings payments as supplemental service payments, and that these payments should always be included when calculating the APM Incentive Payment.

    Response: We thank commenters for their input. For the reasons discussed in this section of this final rule with comment, we disagree that shared savings payments to ACOs should be considered supplemental service payments. As clearly indicated in the previous section, we consider shared savings payments to ACOs to be financial risk payments and are finalizing our proposal not to include financial risk payments when calculating the amount of the APM Incentive Payment.

    Comment: We received one comment supporting our proposals related to public notification of supplemental service payments, which would include an initial posting of supplemental service payments included in estimated aggregate payment amounts and updates to that list no less than annually.

    Response: We thank commenters for their feedback and their support of these proposals.

    After considering public comments, we are finalizing our proposal to determine whether certain supplemental service payments are in lieu of covered professional services that are paid under the PFS on the basis of the four proposed criteria:

    (1) Payment is for services that constitute physicians' services authorized under section 1832(a) of the Act and defined under section 1861(s) of the Act;

    (2) Payment is made for only Part B services under the first criterion above, that is, payment is not for a mix of Part A and Part B services;

    (3) Payment is directly attributable to services furnished to an individual beneficiary; and

    (4) Payment is directly attributable to an eligible clinician.

    We are also finalizing our proposal to establish a process by which we notify the public of the supplemental service payments in all APMs and identify the supplemental service payments that will be included in the APM Incentive Payment calculations. This process includes posting an initial list of supplemental service payments that would be included in our APM Incentive Payment calculations on the CMS Web site. We are finalizing our proposal that we will update this list no less frequently than annually and that we will include determinations and updates to this list as new APMs with supplemental service payments are announced.

    (c) Cash Flow Mechanisms

    Cash flow mechanisms involve changes in the method of payments for services furnished by providers and suppliers participating in an APM Entity. In themselves, cash flow mechanisms do not change the overall amount of payments. Rather, they change cash flow by providing a different method of payment for services. An example of a cash flow mechanism is the population-based payment (PBP) available in the Pioneer ACO Model and the Next Generation ACO Model. A PBP is a monthly lump sum payment in exchange for a percentage reduction in Medicare FFS payments to certain ACO providers and suppliers.

    For expenditures affected by cash flow mechanisms, we proposed to calculate the estimated aggregate payment amount using the payment amounts that would have been incurred for Part B covered professional services if the cash flow mechanism had not been in place. For example, for QPs in an ACO that receives a PBP and has agreed to a 50 percent reduction in FFS payments, we would use the amount that would have been paid for Part B covered professional services in the absence of the 50 percent reduction. Cash flow mechanisms represent a potential reallocation of dollars between eligible clinicians and APM Entities for specific purposes related to care improvement. We do not believe that the presence of cash flow mechanisms should impact the APM Incentive Payment amount, and we do not intend for the APM Incentive Payment to influence the use or attractiveness of cash flow mechanisms in current and future APMs.
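
    For illustration only, the following minimal sketch (Python; names are hypothetical, and the 50 percent figure mirrors the example above) shows the reconstruction described in this paragraph: the estimated aggregate payment amount is built from the amounts that would have been paid absent the cash flow mechanism, not from the reduced amounts actually paid.

        # Illustrative sketch only; function and parameter names are hypothetical.
        def unreduced_pfs_amount(paid_amount, fee_reduction_pct):
            """Back out the amount that would have been paid for Part B covered
            professional services had the PBP fee reduction not been in place."""
            return paid_amount / (1.0 - fee_reduction_pct)

        # Example: a $50 payment under a 50 percent reduction implies a $100
        # unreduced amount, and $100 is what enters the estimated aggregate
        # payment amount: unreduced_pfs_amount(50.0, 0.50) == 100.0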

    The following is a summary of the comments we received regarding our proposal for how to account for payments affected by any cash flow mechanism when calculating the amount of the APM Incentive Payment.

    Comment: We received one comment supporting our proposal to calculate the estimated aggregate payment amount using the payment amount that would have been made for Part B covered professional services if the cash flow mechanism had not been in place.

    Response: We thank the commenter for supporting our proposal.

    After considering public comments, we are finalizing our proposal to calculate the estimated aggregate payment amount using the payment amount that would have been made for Part B covered professional services if the cash flow mechanism had not been in place.

    (d) Payments Made to an APM Entity Instead of to an Eligible Clinician

    Section 1833(z)(1)(A)(i) of the Act requires us to establish policies for payments that are made to an Advanced APM Entity rather than directly to a QP. We recognize that new payment methods and financial arrangements may be developed as part of APMs that meet this criterion. For instance, in the recently announced CPC+ Model, the supplemental service payments (that is, the CMFs) would meet all of our proposed criteria to be included in the APM Incentive Payment calculations. The CMFs are for Medicare Part B covered professional services and only Medicare Part B covered professional services. The CMF payment amounts would be risk-adjusted based on each individual beneficiary's HCC risk scores; therefore, these payments will be attributable to individual beneficiaries. Additionally, the attribution method in the CPC+ Model uses a combination of the TIN/Individual NPI/Practice Address when attributing an individual beneficiary to a CPC+ Practice site. However, the CMF payments for attributed beneficiaries are aggregate payments made to each CPC+ Practice Site. We recognize that throughout the course of a QP Performance Period more than one NPI may furnish covered professional services to an attributed beneficiary. If that occurs, more than one NPI could potentially receive the corresponding CMF for that eligible beneficiary. We do not believe it would be appropriate to count the same CMF for more than one NPI. Therefore, assuming that the CPC+ Model is determined to be an Advanced APM and the APM Entity group achieves the QP threshold for a year, we could split the CMF amounts equally between the multiple NPIs, or we could develop a method based on the plurality of visits with that beneficiary to “assign” the NPI to which the CMFs would be credited for purposes of the APM Incentive Payment calculation.

    We solicited comment on how to allocate payments made to an APM Entity rather than an eligible clinician.

    The following is a summary of the comments we received regarding our proposal.

    Comment: We received two comments with respect to allocating the supplemental service payments to individual NPIs in scenarios in which payment for a supplemental service payment is made in the aggregate to an APM Entity. One commenter stated that it would be ideal to attribute the payments to an individual NPI to whom the patient is attributed. If that were not possible, then the commenter favored splitting the CMF amounts equally between the multiple eligible clinicians within the APM Entity as long as those eligible clinicians are limited to the ones actually providing care management. Another commenter stated that any allocation method for CMFs under the APM Incentive Payment should reduce burden by using the same calculation as that of the CMFs themselves.

    Response: We appreciate this input. We note that when payments are paid to an APM Entity it may not be possible to identify which eligible clinicians are providing care management services, especially if a beneficiary is attributed to an APM Entity rather than a specific NPI. It is possible that this beneficiary could receive care management services from more than one eligible clinician within the APM Entity. We sought an approach that could provide the most equitable solution for how to identify NPIs to which payment is attributable without resulting in additional operational complexity.

    After considering public comments, we are finalizing our proposal that when payments are paid to an APM Entity instead of to an individual eligible clinician, and those payments are not attributable to an individual eligible clinician, we will divide the amount of such payments equally across all eligible clinicians who are on the Participation List for that APM Entity, and each eligible clinician who is a QP will be considered to have been paid that portion of the payments for purposes of the APM Incentive Payment amount calculations.
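
    As a minimal illustration of this finalized policy (Python; the identifiers and amounts are hypothetical), the sketch below divides an entity-level payment equally across the NPIs on the Participation List and credits the resulting shares to the clinicians who are QPs.

        # Illustrative sketch only; identifiers are hypothetical.
        def allocate_entity_payment(entity_payment, participation_list, qp_npis):
            """Divide a payment made to an APM Entity equally across all eligible
            clinicians on its Participation List; only the shares of clinicians
            who are QPs count toward their APM Incentive Payment calculations."""
            share = entity_payment / len(participation_list)
            return {npi: share for npi in participation_list if npi in qp_npis}

        # Example: a $9,000 entity-level payment with three listed clinicians, two
        # of whom are QPs, credits $3,000 to each of the two QPs.
        # allocate_entity_payment(9000.0, ["NPI-A", "NPI-B", "NPI-C"], {"NPI-A", "NPI-B"})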

    (5) Treatment of Other Incentive Payments in Calculating the Amount of APM Incentive Payments

    Section 1833(z)(1)(D) of the Act specifies that we shall not include certain existing Medicare incentive payments in the calculation of the APM Incentive Payment. This includes payments made under sections 1833(m), (x), and (y) of the Act.

    Section 1833(m) of the Act describes the HPSA Physician Bonus Program. The HPSA Physician Bonus Program provides bonus payments to physicians for physicians' services furnished in geographic areas that, as of December 31 of the prior year, are designated by HRSA as HPSAs under section 332(a)(1)(A) of the PHS Act. The HPSA bonus payment is 10 percent of the Medicare Part B payment amount for the service, and the bonus is paid as a quarterly lump sum payment.

    Section 1833(x) of the Act describes the Primary Care Incentive Payment (PCIP) program. The PCIP payment amount was 10 percent of the payment amount for Medicare Part B primary care services furnished by primary care practitioners for whom primary care services accounted for at least 60 percent of their allowed FFS charges in a prior qualification period. For purposes of the PCIP program, primary care practitioners were defined as physicians with certain Medicare specialty codes and as certain types of non-physician practitioners. The PCIP payment was made on a quarterly basis. This bonus payment expired under the statute on December 31, 2015.

    Section 1833(y) of the Act describes the HPSA Surgical Incentive Payment (HSIP). For major surgical procedures furnished by physicians with a primary specialty designation of “general surgeon” in HPSAs (under section 332(a)(1)(A) of the PHS Act), physicians received an additional 10 percent bonus payment in addition to the amount of payment that would have otherwise been made. This additional payment was combined with any other HPSA payment outlined in section 1833(m) of the Act and was paid on a quarterly basis. This bonus payment expired under the statute on December 31, 2015.

    Section 1833(z)(1)(D) of the Act also directs us not to include APM Incentive Payments when calculating payments made under sections 1833(m), (x), and (y) of the Act. We consider the APM Incentive Payment to be separate from the incentive payments as previously discussed in the proposed rule, and we have established procedures to ensure that the APM Incentive Payment would not be included when calculating the amount of incentive payments made under section 1833(m), (x), and (y) of the Act.

    We received no comments in response to our proposal in this section.

    As directed by the statute, we are finalizing our proposal not to include incentive payments made under section 1833(m), (x), and (y) of the Act when calculating the amount of the APM Incentive Payment, and not to include APM Incentive Payments when calculating payments made under section 1833(m), (x), and (y) of the Act.

    (6) Treatment of the APM Incentive Payment in APM Calculations

    Section 1833(z)(1)(C) of the Act states that the amount of the APM Incentive Payment shall not be taken into account for purposes of determining actual expenditures under an APM and for purposes of determining or rebasing any benchmarks used under the APM. As lump sum payments, the APM Incentive Payments will be made outside of the Medicare claims processing system. Current APMs, such as the Medicare ACO initiatives and the CJR model, have established procedures for ensuring that lump sum payments from other APMs are excluded when they perform their APM reconciliation and rebasing calculations. We anticipate that each APM will have in place a procedure to avoid counting APM Incentive Payments toward determining actual expenditures or rebasing any benchmarks under the APM.

    The following is a summary of comments we received in response to our proposals for how to treat the APM Incentive Payment in APM-related calculations.

    Comment: We received several comments supporting exclusion of the APM Incentive Payment when calculating expenditures under an APM. Some commenters specifically requested that APM Incentive Payments not be taken into account when determining shared savings payments for ACOs and considered it reasonable that we would expect each APM to have a procedure in place to avoid counting APM Incentive Payments when determining actual expenditures or determining or rebasing any benchmarks under an APM.

    Another commenter requested further confirmation from CMS that the MIPS payment adjustments are not included in Medicare ACO expenditures for benchmark calculations. The commenter stated that if this were not the case, it would create a disincentive for participation in an ACO and nullify the incentive of an upward payment adjustment.

    Response: We note that decisions regarding whether or not to include fee schedule adjustments when calculating expenditures under an APM are typically made on an APM-by-APM basis, and we anticipate that each APM will have procedures in place to exclude the APM Incentive Payment and provide clarification on whether fee schedule adjustments are included when calculating expenditures under that APM.

    b. Services Furnished Through CAHs, RHCs, and FQHCs

    (1) Critical Access Hospitals (CAHs)

    Eligible clinicians who furnish services at CAHs that have elected to be paid for outpatient services under section 1834(g)(2)(B) of the Act (Method II) will be eligible to become QPs and receive the APM Incentive Payment if they are part of an Advanced APM Entity. As stated in section II.F.6.d.(1) of this final rule with comment, professional services furnished at a Method II CAH are considered “covered professional services” because they are furnished by an eligible clinician and payments are based on the Medicare PFS. Therefore, we proposed that the APM Incentive Payment would be based on the amounts paid for those services attributed to the eligible clinician in the same manner as all other covered professional services.

    For an eligible clinician who becomes a QP based on covered professional services furnished at a Method II CAH, we proposed that the APM Incentive Payment would be made to the CAH TIN that is affiliated with the Advanced APM Entity. This proposal was consistent with the way in which we proposed to make the APM Incentive Payment to eligible clinicians who practice at locations other than Method II CAHs. We solicited comment on this proposal.

    We did not receive any specific comments on this proposal, and we are finalizing our proposal to make the APM Incentive Payment for an eligible clinician who becomes a QP based on covered professional services furnished at a Method II CAH to the CAH TIN that is affiliated with the Advanced APM Entity.

    (2) Rural Health Clinics (RHCs) and Federally Qualified Health Centers (FQHCs)

    As explained in section II.F.6.d.(2) of this final rule with comment, services furnished by eligible clinicians in RHCs and FQHCs are not reimbursed under or based on the PFS. Therefore, professional services furnished in those settings would not constitute covered professional services under section 1848(k)(3)(A) of the Act and would not be considered part of the estimated aggregate payment amount upon which the APM Incentive Payment is based. For eligible clinicians who practice in RHCs or FQHCs, this does not preclude the inclusion of payment amounts for covered professional services furnished by those eligible clinicians in other settings. This only excludes payments made for RHC and FQHC services furnished by the eligible clinicians. For example, an eligible clinician may practice at both an FQHC and with a separate physician group practice that receives payment under the PFS. If the eligible clinician becomes a QP under the methodologies described in section II.F.6. of this final rule with comment, whether based on their participation in an Advanced APM Entity that includes the FQHC as outlined in section II.F.6.d.(2) of this final rule with comment, or based on their participation in an Advanced APM Entity that includes the separate physician group practice, or both, only the eligible clinician's payments for covered professional services at the separate physician group practice setting would form the estimated aggregate payment amount for the APM Incentive Payment.

    We did not receive any specific comments on our proposal for eligible clinicians who become QPs and who may also practice at an RHC or FQHC.

    We are finalizing our proposal that professional services furnished in RHCs and FQHCs would not constitute covered professional services under section 1848(k)(3)(A) of the Act and would not be considered part of the amount upon which the APM Incentive Payment is based.

    c. Payment of the APM Incentive Payment

    (1) Payment to the QP

    In the proposed rule, we proposed that the APM Incentive Payment would be made to QPs who are identified by their unique NPI. We proposed that we would make the APM Incentive Payment for a QP to the eligible clinician's TIN that is affiliated with the Advanced APM Entity through which the eligible clinician was determined to be a QP. For both individual eligible clinicians and group practices, we would use the TIN as the billing unit. We proposed that the APM Incentive Payment would be calculated across all billing TINs associated with an NPI. Medicare has the ability to track all unique TIN/NPI combinations associated with an individual NPI, including which TINs are affiliated with an Advanced APM Entity. We considered making separate payments for each TIN/NPI combination associated with the individual eligible clinician's APM Incentive Payment, similar to how the current PQRS incentive payment program operates. Under the current PQRS incentive payment program, incentive payments are paid to the holder of the TIN, aggregating individual incentive payments for groups that bill under one TIN. For eligible clinicians who submit claims under multiple TINs, we group claims by TIN for payment purposes, and any incentive payments earned are paid to that specific TIN. As a result, an eligible clinician with multiple TINs who qualifies for the PQRS incentive payment under more than one TIN would receive a separate PQRS incentive payment associated with each TIN.

    However, we believe that making the APM Incentive Payments to the TIN associated with the Advanced APM Entity during the QP Performance Period would be most consistent with the requirements of section 1833(z) of the Act and would incentivize participation in Advanced APMs. Rewarding TINs that are not involved in an Advanced APM for their constituent NPIs' activities through separate entities is antithetical to the objective of the Quality Payment Program. We also believe that making the APM Incentive Payments to the TIN associated with the Advanced APM Entity during the QP Performance Period is most consistent with section 1833(z) of the Act with regard to making the APM Incentive Payments to eligible clinicians who become QPs. We believe that making multiple separate payments would increase complexity for both CMS and eligible clinicians.

    Additionally, we finalized in section II.F.5. of this final rule with comment that, to be a QP, an eligible clinician must be identified on a CMS-maintained Participation List of an Advanced APM Entity. That will allow us to track the APM participant identifiers for each eligible clinician, and we believe that this information will allow us to determine which of the QPs' TINs should receive APM Incentive Payments.

    We recognize that there may be scenarios in which an eligible clinician may change his or her affiliation between the QP Performance Period and the payment year such that the eligible clinician no longer practices at the TIN affiliated with the Advanced APM Entity. In this instance, we proposed to make the APM Incentive Payment to the TIN provided on the eligible clinician's CMS-588 EFT Application. This proposal is consistent with the process that we have used to make incentive payments under other programs, such as the PCIP program.

    We solicited comment on our proposal to make the APM Incentive Payments to the TIN affiliated with the Advanced APM Entity through which an individual eligible clinician becomes a QP and our proposal to make the APM Incentive Payment to the TIN provided on the eligible clinician's CMS-588 EFT Application in the event that an eligible clinician no longer practices at the TIN affiliated with the Advanced APM Entity at the time of payment. We also solicited comment on alternative options that maintain the goals of equity and simplicity and encourage and reward participation in Advanced APMs.

    The following is a summary of the comments we received regarding our proposal to make the APM Incentive Payment to the TIN affiliated with the APM Entity through which an eligible clinician becomes a QP.

    Comment: We received a few comments in support of our proposal. One commenter stated that this proposal would allow for maximum flexibility in the development of APMs, their various organizational structures, and the ways in which revenues might flow through APM Entities. Another commenter supported the suggestion that the APM Incentive Payment is a coordinated effort among eligible clinicians and other aligned providers and suppliers.

    We received several comments suggesting alternatives to our proposal. Some commenters stated that they believe the APM Incentive Payment should be made directly to the QP, as identified by either the QP's NPI or by the QP's unique TIN/NPI combination. Some commenters also cited statutory language in section 1833(z) of the Act stating that APM Incentive Payments should be made to “such professionals.” Some of the commenters also stated that paying eligible clinicians directly will encourage them to become more engaged in an Advanced APM and its potential impact on patient care. One commenter stated that eligible clinicians have more control over their performance and can respond more quickly to incentives.

    Conversely, we received some comments stating that the APM Incentive Payment should be made to the Advanced APM Entity TIN, similar to how shared savings payments are distributed to ACOs in the Shared Savings Program.

    Response: APM Incentive Payments will be calculated and made for each QP as identified by an NPI. We further clarify that when referring to the “TIN associated with the Advanced APM Entity,” our intent is that the APM Incentive Payment would be sent to the Medicare enrolled billing TIN that is affiliated with the Advanced APM Entity, and not the TIN of the Advanced APM Entity itself.

    Even in instances where an incentive payment has been calculated at an NPI level, CMS has traditionally used the TIN as the billing unit such that any incentive payments earned are paid to the TIN holder of record. This precedent has been followed in various other incentive payment programs, such as the Physician Quality Reporting Initiative (PQRI) incentive payment and the PQRS incentive payment program, and we intend to follow this established precedent of making incentive payments to billing TINs. However, under those incentive payment programs, CMS grouped eligible clinicians' claims by TIN for payment purposes, and any incentive payments earned were paid to that TIN. As a result, an eligible clinician with multiple TINs who qualified for an incentive payment under more than one TIN would have received a separate incentive payment associated with each TIN.

    We believe that making the APM Incentive Payments to the TIN associated with the Advanced APM Entity through which an eligible clinician becomes a QP would be most consistent with the requirements of section 1833(z) of the Act and would incentivize participation in Advanced APMs. We also believe that making the APM Incentive Payments to the TIN associated with the Advanced APM Entity during the QP Performance Period is most consistent with section 1833(z) of the Act with regards to making the APM Incentive Payments to eligible clinicians who become QPs.

    Given the precedent of making incentive payments to the TIN holder of record, we will make APM Incentive Payments to the Medicare-enrolled billing TIN of a QP's NPI that is in the Advanced APM Entity. We do not prescribe whether or how APM Incentive Payments are to be distributed to QPs within the TIN.

    Comment: We received one comment suggesting that CMS allow QPs to share APM Incentive Payments with beneficiaries.

    Response: We thank this commenter for their feedback. The Quality Payment Program does not change any existing laws or regulations regarding provider or supplier payments or incentives to beneficiaries.

    Comment: Some commenters requested that CMS support fair and timely distribution of the APM Incentive Payment to QPs and encourage all Advanced APM Entities to issue notifications to participating eligible clinicians regarding the distribution of an APM Incentive Payment.

    Response: We appreciate the commenter's feedback, and we encourage clear and open communication between Advanced APM Entities, participating TINs, and eligible clinicians regarding the distribution of the APM Incentive Payment. We refer readers to section II.F.8.c.(3) of this final rule with comment period for further details on how we will notify APM Entities, Medicare enrolled billing TINs, and QPs of the amount of the APM Incentive Payment calculated for each QP, as identified by the QP's NPI, so that all involved parties are informed of the amount of the APM Incentive Payment associated with each QP.

    After considering public comments, we are finalizing our proposal to make the APM Incentive Payment to the TIN affiliated with the Advanced APM Entity through which an eligible clinician becomes a QP, and we further clarify that the APM Incentive Payment would be sent to the Medicare-enrolled billing TIN associated with the Advanced APM Entity. As discussed in our responses to comments, we note that all Medicare payments are made to a billing TIN, and the ultimate distribution of the APM Incentive Payment is a matter for the Medicare-enrolled billing TIN and its associated QPs. We are also finalizing our proposal to make the APM Incentive Payment to the TIN provided on the eligible clinician's CMS-588 EFT Application in the event that an eligible clinician no longer practices at the TIN affiliated with the Advanced APM Entity at the time of payment. QP status is determined for, and attached to, an eligible clinician for the payment year based on Advanced APM participation during the QP Performance Period; therefore, changes in practice afterward should not affect a QP's ability to receive the APM Incentive Payment or to be excluded from MIPS reporting requirements and payment adjustments.

    (2) Exception for Eligible Clinicians in Multiple Advanced APMs

    We recognize that there may be instances where none of the multiple Advanced APM Entities with which an individual eligible clinician participates meets the QP threshold. In this instance, we have proposed to assess the eligible clinician individually, using services furnished through all Advanced APM Entities during the QP Performance Period. When we make the QP determination at the individual eligible clinician level, we proposed to split the APM Incentive Payment amount proportionally across all of the QP's TINs associated with Advanced APM Entities. For example, if an eligible clinician is determined to be a QP at the individual level based on participation in two Advanced APM Entities (Advanced APM Entity 1 and Advanced APM Entity 2), and 75 percent of his or her payments used to make the QP determination are through Advanced APM Entity 1 and 25 percent are through Advanced APM Entity 2, we would make 75 percent of the APM Incentive Payment to the QP's billing TIN associated with Advanced APM Entity 1, and 25 percent of the APM Incentive Payment to the QP's billing TIN affiliated with Advanced APM Entity 2. We believe that splitting the APM Incentive Payment in this way is consistent with section 1833(z) of the Act as well as our goal to encourage participation in multiple Advanced APMs where applicable. We also believe that splitting the incentive payment in this way appropriately recognizes the several activities of the individual eligible clinician toward achieving the QP threshold.
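
    The proportional split in the 75/25 example above can be expressed with the following minimal sketch (Python; the TIN identifiers and dollar amounts are hypothetical).

        # Illustrative sketch only; TIN identifiers and amounts are hypothetical.
        def split_incentive_by_tin(total_incentive, payments_by_tin):
            """Split the APM Incentive Payment across a QP's Advanced APM
            Entity-affiliated TINs in proportion to the payment amounts through
            each entity that were used in the QP determination."""
            total_payments = sum(payments_by_tin.values())
            return {tin: total_incentive * amount / total_payments
                    for tin, amount in payments_by_tin.items()}

        # Example mirroring the 75/25 split above:
        # split_incentive_by_tin(1000.0, {"TIN-1": 75000.0, "TIN-2": 25000.0})
        # returns {"TIN-1": 750.0, "TIN-2": 250.0}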

    We solicited comment on the proposal to split the APM Incentive Payment among the QP's TINs associated with Advanced APM Entities in instances where the QP determination is made at the individual level based on participation in multiple Advanced APMs. We also welcomed comments regarding which TIN(s) payments should be made to in cases where the QP changes TIN affiliations between the QP Performance Period and the payment of the APM Incentive Payment.

    We did not receive any comments with regards to our proposal to split the APM Incentive Payment among a QP's TINs associated with Advanced APM Entities in instances where the QP determination is made at the individual eligible clinician level.

    Comment: We received a few comments regarding our proposal to make the APM Incentive Payment to the TIN provided on the eligible clinician's CMS-588 Electronic Funds Transfer (EFT) Application in scenarios when the eligible clinician is no longer affiliated with the TIN affiliated with the Advanced APM Entity. Some commenters disagreed with our proposal to make the APM Incentive Payment to the TIN provided on the eligible clinician's CMS-588 EFT Application and instead stated that CMS should make the APM Incentive Payment to the individual QP's NPI, not a TIN.

    Some commenters questioned why, in the event an eligible clinician is no longer associated with the TIN associated with the Advanced APM Entity, the APM Incentive Payment would be made to a new entity, and questioned why the APM Incentive Payment would not stay with the billing TIN participating in the Advanced APM Entity. In this instance, these commenters suggested that we split the payment amount based either on the predominance of where that clinician provided services or on an end date.

    Response: We thank commenters for their feedback, and note that for both individual eligible clinicians and group practices, we use the TIN as the billing unit, meaning that we must be able to track all Medicare payments to a TIN. We also note that not all individual eligible clinicians who are enrolled in Medicare have their own personal billing TIN. We also believe that the APM Incentive Payment is meant to reward eligible clinicians for their participation in an APM Entity. We do not believe that the individual QP's receipt of the APM Incentive Payment for a year should be affected by whether the QP maintains a relationship with the APM Entity between the performance and payment years, and proposed this policy in accordance with that belief.

    We are finalizing the proposal to split the APM Incentive Payment amount proportionally, based on the payment amounts used to make the QP determination across all of the QP's TINs associated with Advanced APM Entities when the QP determination is made at the individual level.

    We also further clarify that in the event that an eligible clinician participates in more than one Advanced APM Entity, and that eligible clinician meets the QP threshold through more than one Advanced APM Entity, as determined at the group level, we would split the total amount of the APM Incentive Payment in the same manner.

    (3) Notification of APM Incentive Payment Amount

    We proposed to send notification to both Advanced APM Entities and QPs of the APM Incentive Payment amount as soon as we have calculated the amount of the APM Incentive Payment and performed all necessary validation of the results. Following our proposed method to notify eligible clinicians of their QP status, we proposed that the APM Incentive Payment amount notification would be made directly to QPs in combination with a general public notice that such calculations have been completed for the year. For the direct QP notification, we intended to include the amount of the APM Incentive Payment and the TIN to which the incentive payments will be made. In the case that the APM Incentive Payment is split across multiple TINs, we proposed to identify which TINs would receive the payment and include the amount that would be paid to each TIN. For the notification to Advanced APM Entities, and other recipient TINs, we intended to include the total amount of APM Incentive Payments that will be made to each participating TIN within the Advanced APM Entity, as well as QP-specific payment amounts. We believed that this would be the most efficient method to disseminate this information to all QPs.

    We solicited comment on other methods for the notification of the APM Incentive Payment amount. We also solicited comment on the content of such notifications so that they may be as clear and useful as possible.

    The following is a summary of the comments we received regarding our proposal to notify Advanced APM Entities and QPs of the amount of the APM Incentive Payment.

    Comment: Several commenters supported our proposal to send a notification to both Advanced APM Entities and QPs of the APM Incentive Payment amount as soon as CMS has calculated the amount of the APM Incentive Payment and performed all necessary validation of the results. These commenters recommended that the notification include information that allows QPs to verify that the payment is correct. Other commenters requested that we include a timeframe for making notifications regarding the APM Incentive Payment amount.

    Response: We appreciate the commenters' feedback and support of our proposals. We intend that the notifications of APM Incentive Payment Amounts will include contextual information that will allow QPs to verify the calculation of the APM Incentive Payment amount. We will provide more information on the format of the APM Incentive Payment notifications and the data included with such notifications before they are distributed. We further anticipate that the timing of the APM Incentive Payment amount notification will follow a similar timeline to that outlined in section II.F.8.a.(2) of this final rule with comment period, where we finalize our proposal for the incentive payment base period and timeframe of claims we will use to determine the estimated aggregate payment amounts used for the APM Incentive Payment Amount. We anticipate that the notification of the APM Incentive Payment amounts would occur once CMS has calculated the APM Incentive Payment Amounts but before the APM Incentive Payments are distributed to QPs.

    After considering public comments, we are finalizing our proposal to send a notification to both Advanced APM Entities and QPs of the APM Incentive Payment amount as soon as CMS has calculated the amount of the APM Incentive Payment and performed all necessary validation of the results as proposed.

    9. Monitoring and Program Integrity

    In an effort to accurately award the APM Incentive Payment and preserve the integrity of the Medicare program, we will monitor APM Entities, Advanced APM Entities, and eligible clinicians on an ongoing basis for non-compliance with Medicare program requirements and for non-compliance with the law, regulation, or agreement governing the relevant Advanced APMs during the QP Performance Period. These efforts include vetting of the individuals and entities applying to participate in Advanced APMs and periodic assessment of Advanced APM Entities and eligible clinicians by Advanced APMs, in conjunction with the CMS Center for Program Integrity and other relevant federal departments and agencies. This vetting and monitoring already takes place for APMs and will continue.

    We proposed that if an Advanced APM terminates an Advanced APM Entity or eligible clinician during the QP Performance Period for program integrity reasons, or if the Advanced APM Entity or eligible clinician is out of compliance with program requirements, we may reduce or deny the APM Incentive Payment to such eligible clinicians. In addition, if the APM Incentive Payment is paid based on a QP Performance Period and the Advanced APM Entity or eligible clinician is later terminated due to a program integrity matter arising during that QP Performance Period, we may recoup all or a portion of the amount of the APM Incentive Payment from the individual or entity to which we made the payment.

    We also proposed that we would reopen and recoup any payments that were made in error in accordance with procedures similar to those set forth at §§ 405.980 and 405.370 et seq. or established under the relevant Advanced APM.

    As discussed in section II.F.7.b.(7) of this final rule with comment period, APM Entities or eligible clinicians who seek to be assessed under the All-Payer Combination Option must submit certain information for us to assess whether their other payer arrangements meet the Other Payer Advanced APM criteria and to calculate the Threshold Score for a QP determination under the All-Payer Combination Option.

    Relatedly, we proposed that Advanced APM Entities and eligible clinicians must maintain copies of all records related to assessment under the All-Payer Combination Option for at least 10 years from the time of submission and must provide the government with access to these records for auditing and inspection purposes. If an audit reveals that the information submitted is inaccurate, we may recoup the APM Incentive Payment.

    Nothing in this final rule imposes any limitations or restrictions on the authority of the Department of Health and Human Services Office of Inspector General.

    We solicited comment on our monitoring and program integrity proposals.

    The following is a summary of the comments we received regarding our proposals to continue CMS vetting of those applying to participate in Advanced APMs and ongoing monitoring of Advanced APM Entities and eligible clinicians.

    Comment: One commenter encouraged CMS to consider additional APM modifications and stringent monitoring mechanisms that will prevent stinting of care and encourage the delivery of high quality care while lowering overall costs.

    Response: We appreciate the commenter's input. We consider potential modifications to APM design to better promote program integrity and the delivery of high quality care on an ongoing basis and will continue to do so.

    We are finalizing our proposals to vet and monitor APM Entities, Advanced APM Entities, and eligible clinicians.

    The following is a summary of the comments we received regarding our proposals to deny, reduce, or recoup APM Incentive Payments made to eligible clinicians if an Advanced APM Entity or eligible clinician is either out of compliance with the Advanced APM's program requirements or terminated from participating in the APM for program integrity reasons, and to reopen and recoup any payments that were made in error in accordance with procedures similar to those set forth at §§ 405.980 and 405.370 et seq. or established under the relevant Advanced APM.

    Comment: Many commenters supported these proposals. One commenter stated that Advanced APM Entities behave more like insurers by taking on more than nominal risk; therefore, Advanced APMs should be subject to regulations traditionally imposed on risk-bearing entities, such as solvency standards or other equivalent program integrity rules. One commenter stated that physical therapists and other non-physician practitioners in APMs should not have conflicts of interest and improper financial motivations, and requested that CMS monitor any negative effect hospital and physician market dominance may have, especially on small non-physician providers in private practice. Alternatively, one commenter opposed penalizing non-compliant clinicians.

    Response: We appreciate commenters' suggestions of ways to preserve program integrity. We note that each APM and Advanced APM will be designed to include appropriate program integrity safeguards that will account for the risk-bearing nature of the APM and to protect public funds and the integrity of the Medicare program. We are not setting forth specific program integrity standards in this rule; rather, such standards are incorporated on an APM-specific basis as APMs and Advanced APMs are developed. Additionally, CMS does consider the effects that APMs and Advanced APMs may have in different marketplaces. For Advanced APM Entities and eligible clinicians who fail to comply with an Advanced APM's program requirements, the agreement, law, or regulation governing that Advanced APM defines the process for addressing these issues. Although we understand that some may oppose policies that protect public funds from those who fail to comply with the terms of an Advanced APM, CMS must have the ability to do so in order to preserve program integrity.

    Comment: Several commenters suggested that we consider beneficiary impacts of APMs, such as ensuring Advanced APMs have beneficiary protections in place, using patient-centered quality measures, collaborating with beneficiaries regarding APM design, and monitoring for access issues and risks to beneficiary freedom of choice.

    Response: We appreciate commenters' focus on protecting beneficiaries. Although largely outside the scope of this final rule, both APMs and Advanced APMs have requirements for beneficiary protections in their relevant agreements or regulations. We have taken outcome measures into account when finalizing the criteria for Advanced APMs, as discussed further in section II.F.4.b.(2) of this final rule with comment period. We also emphasize that if beneficiaries have concerns about their clinicians or the quality of care that they are receiving, they can seek assistance by filing a complaint. More information about filing complaints is available at https://www.medicare.gov/claims-and-appeals/file-a-complaint/complaint.html.

    Comment: A few commenters noted that there are barriers to APM Entity creation posed by the physician self-referral law, anti-kickback statute, and other fraud, waste, and abuse laws. The commenters requested exceptions, safe harbors, and clear guidelines on the application of these laws to APM participants in order to foster collaboration among clinicians that is beneficial to patients. One commenter requested that CMS ensure that APM Entities are prohibited from waiving copays, giving deep discounts, or offering other incentives to incentivize patients to receive services within the APM Entity. Another commenter requested that the federal government institute a system under which it continually assesses APM Entity compliance with physician self-referral laws, anti-kickback statutes, and gainsharing civil monetary penalty provisions.

    Response: Although addressing fraud and abuse laws is not within the scope of this final rule, we will send these comments to the appropriate subject matter experts.

    Comment: One commenter requested that we provide guidance to providers, suppliers, and other stakeholders on methods by which the health care community can disclose or report potential violations of fraud and abuse laws.

    Response: One way to report potential fraud is by contacting the HHS Office of Inspector General at 1-800-HHS-TIPS or by visiting https://forms.oig.hhs.gov/hotlineoperations/index.aspx. Potential fraud can also be reported by calling 1-800-MEDICARE. More information about what information to report is available at https://www.medicare.gov/forms-help-and-resources/report-fraud-and-abuse/report-fraud/reporting-fraud.html.

    Providers or suppliers who wish to voluntarily disclose self-discovered evidence of potential fraud to CMS or the Office of Inspector General may do so under their respective self-disclosure protocols.

    We are finalizing our proposals with no changes. We will deny, reduce, or recoup APM Incentive Payments made to eligible clinicians if an Advanced APM Entity or eligible clinician is either out of compliance with the Advanced APM's program requirements or terminated from participating in the APM for program integrity reasons, and we will reopen and recoup any payments that were made in error in accordance with procedures similar to those set forth at §§ 405.980 and 405.370 et seq. or established under the relevant Advanced APM.

    The following is a summary of the comments we received regarding our proposal to require that all Advanced APM Entities and eligible clinicians who submit information in order to obtain a QP determination under the All-Payer Combination Option retain all records, and provide the government with access to these records for auditing and inspection purposes, for at least 10 years.

    Comment: Several commenters expressed concern that this requirement is excessively burdensome and becomes more difficult when clinicians change practices. Some commenters suggested alternative retention timeframes such as 3 or 7 years. Some commenters requested that CMS clearly communicate the requirements so that new entrants into APMs can understand the expectations and not be unduly penalized in the future.

    Response: Under the Medicare Option, we already have the Medicare claims information necessary to make QP determinations, and pre-existing rules govern retention of that information. Under the All-Payer Combination Option, CMS will make QP determinations based on information created by payers other than Medicare, and for this information to be used, it must be submitted to CMS. CMS must be able to verify this information, and the government must have access to all of these records. We appreciate commenters' concerns about the burdens this requirement may impose, but this 10-year record retention requirement is consistent with other Medicare record retention rules, such as that in the Shared Savings Program, and it aligns with the statute of limitations for claims arising under the False Claims Act.

    To address the requests for more detail, we intend to issue further information regarding the All-Payer Combination Option before the 2019 QP Performance Period, which is when it first becomes available.

    We are finalizing that Advanced APM Entities and eligible clinicians must retain, maintain, and provide the government with access to copies of all records related to submitting data or information to CMS for purposes of QP determinations under the All-Payer Combination Option for at least 10 years from the date that the record was created. We clarify that for any single record, the responsibilities finalized here may be carried out by either an Advanced APM Entity or an eligible clinician so that collectively, all necessary records are retained, maintained, and accessible to the government.

    10. Physician-Focused Payment Models

    a. Introduction and Overview

    Section 101(e)(1) of MACRA, entitled “Increasing the Transparency of Physician-Focused Payment Models,” adds a new section 1868(c) to the Act. In general, this subsection establishes an innovative process for individuals and stakeholder entities (stakeholders) to propose PFPMs to the Physician-Focused Payment Model Technical Advisory Committee (PTAC). A copy of the PTAC's charter, established by the Secretary on January 5, 2016, is available at https://aspe.hhs.gov/charter-physician-focused-payment-model-technical-advisory-committee.

    (1) Overview of the Roles of the Secretary, the PTAC, and CMS

    Section 1868(c)(2)(A) of the Act requires the Secretary to establish, through notice and comment rulemaking following an RFI, criteria for PFPMs (PFPM criteria), including models for specialist physicians, that could be used by the PTAC in making comments and recommendations on PFPMs. We issued the MIPS and APMs RFI requesting stakeholder input on PFPMs on October 1, 2015, and we proposed PFPM criteria in section II.F.10.c. of the Quality Payment Program proposed rule.

    The PTAC, established under section 1868(c)(1)(A) of the Act, is a federal advisory committee composed of 11 members that provides independent advice to the Secretary. As required under section 1868(c)(1)(B) of the Act, the initial appointments to the PTAC were made by the Comptroller General of the United States (GAO) on October 9, 2015.

    Section 1868(c)(2)(B) of the Act specifies that stakeholders may submit proposals to the PTAC on an ongoing basis for PFPMs that they believe meet the PFPM criteria established by the Secretary. We recognize this statutory directive, but did not propose to define “ongoing basis” because we believe that the process for submitting proposals to the PTAC should be determined by the PTAC.

    Section 1868(c)(2)(C) of the Act requires the PTAC to review stakeholders' proposed PFPMs, prepare comments and recommendations regarding whether such proposed PFPMs meet the PFPM criteria established by the Secretary, and submit those comments and recommendations to the Secretary.

    Section 1868(c)(2)(D) of the Act requires the Secretary to review the PTAC's comments and recommendations on proposed PFPMs and to post “a detailed response” to those comments and recommendations on the CMS Web site.

    Because we cannot predict the volume, quality, or appropriateness of the proposed PFPMs on which the PTAC will make comments and recommendations, we are not in a position to commit to testing all such models. Section 1868(c) of the Act does not require us to test models that are recommended by the PTAC. However, this does not imply that we would not give serious consideration to proposed PFPMs recommended by the PTAC.

    The PTAC serves an important advisory role in the implementation of PFPMs, but the Secretary must weigh additional considerations beyond the PTAC's input, such as competing priorities and available resources. We believe that this flexibility is important because the Secretary and CMS must retain the ability to make final decisions on which models to test and when, based on multiple factors, including those that the Innovation Center currently uses to determine which payment models to test, available on the Innovation Center Web site: https://innovation.cms.gov/Files/x/rfi-Websitepreamble.pdf.

    While we would consider these factors separately from the PTAC's comments and recommendations, the decision to test a model recommended by the PTAC would not require submission of a second proposal to us; we would review the proposal submitted to the PTAC along with the comments from the PTAC and the Secretary, and any other resources we believe would be useful. In order to test a PFPM based on a recommendation from the PTAC, CMS may seek to obtain additional information based on the contents of the proposal. After a PFPM proposal has been recommended by the PTAC, if it is selected for implementation, we may work with the stakeholders who submitted the proposal to consider design elements for testing the PFPM and make changes as necessary. We note that if a PFPM we select for implementation requires interested parties to apply in order to participate, a stakeholder who submitted the proposal would also have to apply in order to participate.

    Proposed PFPMs that the PTAC recommends to the Secretary but that are not immediately tested by us may be considered for testing at a later time. We may continue to test PFPMs that are developed within CMS, but we believe that the PTAC process will be instrumental to our goal of developing more PFPMs.

    (2) Deadlines for the Duties of the Secretary, the PTAC, and CMS

    We did not propose to set deadlines for these tasks through regulations. We believe that setting a deadline for the PTAC's comments and recommendations could potentially interfere with the PTAC's ability to develop its own process and timeline for reviewing proposed PFPMs. We wish to preserve the PTAC's ability to determine how and when it would review proposed PFPMs.

    We believe that setting a deadline through rulemaking for the Secretary's review of the PTAC's comments and recommendations, publication of a response to them, and our potential testing of a proposed PFPM submitted to the PTAC is inappropriate because these tasks would take varying amounts of time depending on factors that we cannot predict. Proposed PFPMs may be submitted to the PTAC on “an ongoing basis” in accordance with section 1868(c)(2)(B) of the Act, and given that there may be variation in the number and frequency of proposals, setting a deadline for the Secretary's response would be difficult. We do not believe we can effectively set deadlines through rulemaking because we do not know how many PFPM proposals the PTAC would receive or review. The Secretary would need varying lengths of time to review, comment on, and respond to PFPM proposals depending on the volume and nature of each proposal.

    We do not believe it would be reasonable to require that we adhere to a deadline in deciding whether to test a particular proposed PFPM. It is important for us to retain the flexibility to test APMs when we believe that it is the right time to do so, taking into account the other APMs we are currently testing, any potential design changes to the proposed PFPM, interactions with our other policies, and resource allocation. APMs generally take 18 months for us to develop, although the period of development may vary in length significantly, making a deadline difficult to establish.

    We believe that setting deadlines for testing proposed PFPMs would be inappropriate. Entities need time to complete applications for voluntary models and we need time to review applications and prepare participation agreements for entities to sign. Entities need time to review these participation agreements and to begin planning for implementation of the model. To maintain rigorous evaluation of model outcomes, we also need time to build the necessary model infrastructure for such functions as quality measurement, financial calculations, and payment disbursements, and to coordinate with other payers if they are included in the model's design.

    We believe that proposed PFPMs that meet all of the PFPM criteria and are recommended by the PTAC may need less time to go through the development process; however, we cannot guarantee that the development process would be shortened, or estimate by how much it would be shortened. These processes depend on the nature of the PFPM's design, and any attempt to impose a deadline on them would not benefit stakeholders because it would not allow us to tailor the review and development process to the needs of the proposed PFPM.

    The following is a summary of the comments we received regarding the roles of the Secretary, the PTAC, and CMS.

    Comment: Commenters encouraged CMS to be open to the PTAC's comments and recommendations and to commit to testing PFPMs recommended by the PTAC. A few commenters stated they want CMS to test as many PFPMs as possible. Two commenters expressed concern that CMS would not test PFPMs because CMS has stated it would not commit to testing PFPMs recommended by the PTAC or specifically commit to testing all PFPMs recommended to the Secretary.

    Response: We are open to the PTAC's comments and recommendations and believe the PTAC review and recommendation process will be an essential resource for us to use in developing new APMs. While we cannot commit in advance to pursue any particular model before knowing its substance, we are committed to giving all models recommended by the PTAC a thorough and thoughtful review, and to testing high-quality PFPMs, within the limits of our resources and other constraints.

    Comment: We received many comments on our process for testing new PFPMs, with an emphasis on PFPMs that would be Advanced APMs if tested. Many commenters requested that we provide details regarding the process for HHS review of comments and recommendations from the PTAC, the Secretary's response to comments and recommendations from the PTAC, and our process for testing PFPMs; and that such details include deadlines. Many commenters requested a clear path to implementation of PFPMs. One commenter agreed that CMS need not establish a deadline in regulations for potential testing of a proposed PFPM. One commenter suggested that not setting deadlines effectively allows the agency to have no responsibility in evaluating PFPM proposals, and requested that we establish deadlines and public criteria for CMS to use in reviewing PFPM proposals. One commenter was concerned that the process for proposing PFPMs is overly complicated and the timeframe is unrealistically aggressive. One commenter disagreed that setting a deadline through rulemaking for the Secretary's review of the PTAC's comments and recommendations, publication of a response, and potential testing is inappropriate and stated that having a time frame should be standard practice.

    Response: We did not propose to establish a process or timeline for our review of proposed PFPMs or to provide additional information regarding such a process in this rule. To allow us flexibility in considering diverse models of varying scope and features, we do not believe it would be appropriate to establish through rulemaking a single process we would follow, with or without timelines. However, we appreciate that commenters seek additional information from us on our process. Section 1868(c)(2)(D) of the Act requires the Secretary to review the PTAC's comments and recommendations on proposed PFPMs and to post a “detailed response” to those comments and recommendations on the CMS Web site. Therefore, the Secretary has a responsibility to review comments and proposals from the PTAC on PFPM proposals and a responsibility to respond. We are mindful of stakeholders' interest in a timely process and are committed to reviewing (and where appropriate, implementing) model proposals as quickly as possible. We intend to provide more information about this process outside of notice and comment rulemaking.

    Comment: A few commenters requested that CMS consider how to increase transparency and to incorporate public input into the development of APMs. One commenter expressed concern that the process for designing and updating APMs does not consistently include feedback from consumers and purchasers, which they believed is an essential piece that should always be included. One commenter stated that prior to the implementation of any PFPM or Advanced APM, CMS must be transparent concerning the model design and provide the public with the opportunity to review and provide comments on the model with ample time built in for preparation and implementation.

    Response: We aspire to foster transparency and cooperation with regard to testing PFPMs, including feedback from consumers and purchasers. We have made public the factors the Innovation Center uses in considering whether to test a model, which would also be relevant to its review of PFPMs, on the Innovation Center Web site: https://innovation.cms.gov/Files/x/rfi-websitepreamble.pdf. Of note, the PTAC has made public information regarding its process for reviewing PFPM proposals, which can be found on the PTAC Web site at https://aspe.hhs.gov/ptac-physician-focused-payment-model-technical-advisory-committee. We intend to provide more information about our process outside of notice and comment rulemaking.

    Comment: Multiple commenters requested that the PTAC, the Secretary, and CMS test new PFPMs as soon as possible. Commenters requested that CMS facilitate a quick review and approval process for testing PFPMs, including expediting or altering its normal process. A few commenters requested that CMS expedite the approval process for Advanced APM proposals, particularly those with specialty physician participants, or otherwise prioritize testing of PFPMs that would be Advanced APMs. Two commenters were concerned about the length of time CMS would need to develop, approve, and implement a PFPM after it is recommended for testing. A commenter stated that many specialty societies that have been working to develop PFPM proposals have been alarmed by comments from CMS officials indicating that even after these proposals have been recommended by the PTAC to the Secretary, they would still need to go through a separate, potentially years-long CMS process before they could be implemented and qualify as Advanced APMs under the Quality Payment Program. Commenters wanted assurance that PFPMs that are Advanced APMs will be available to eligible clinicians during the years the APM Incentive Payment is available. To this end, one commenter suggested changing the timing of when Advanced APM criteria need to be met so that PFPMs implemented in 2019 would apply to the 2017 QP Performance Period, and another commenter suggested we waive the application of section 1833(z)(2) of the Act, such that participants in APMs approved by the Innovation Center after 2017 receive a transition period in which such participants' QP eligibility is determined under the eligibility criteria for CY 2017. One commenter requested that PFPMs be an opportunity for multiple APMs to be available to physicians to qualify for the APM Incentive Payment. One commenter suggested CMS interact with the submitter of a PFPM proposal to ensure determinations are made in a timely manner based on complete and accurate information, with the benefit of full clinical and operational context received directly from the original source. One commenter suggested fast-tracking models that focus on expansion of existing APMs when adequate supporting data are available, and collaborating with specialty societies to provide sufficient feedback on drafts and upfront data to assist with impact modeling.

    Response: We appreciate that there is significant interest in creating new opportunities for APMs and Advanced APMs and that commenters would like these opportunities to be implemented quickly. We are mindful of stakeholders' interest in additional models and are committed to reviewing (and where appropriate, implementing) PFPM proposals as quickly as possible. We intend to provide additional information outside of the rulemaking process.

    Comment: A few commenters requested an appeals process or opportunity to resubmit if a PFPM proposal is (1) not recommended by the PTAC or (2) not selected for testing by the Secretary, or another form of feedback on proposals that are not commented on favorably by the Secretary. One commenter asked for templates and examples of APMs the PTAC would recommend to CMS. Commenters made requests related to the PTAC's review process, such as how frequently it will collect PFPM proposals and whether it will allow resubmissions.

    Response: The PTAC will decide whether it will include an appeals process for PFPM proposals it does not recommend and will set its own review process. CMS will not establish through rulemaking a formal reconsideration process for PFPM proposals that are recommended by the PTAC but not responded to favorably by the Secretary. However, we hope that stakeholders will be open to pursuing changes to the model so that it might be selected in future years for testing by the Secretary. We also appreciate that commenters seek additional information from us on our process. We are committed to transparency and encourage commenters to review the factors the Innovation Center uses in considering whether to test a model on the Innovation Center Web site: https://innovation.cms.gov/Files/x/rfi-websitepreamble.pdf. We intend to provide additional information outside of rulemaking. With respect to the PTAC review process, we refer commenters to the PTAC Web site at https://aspe.hhs.gov/ptac-physician-focused-payment-model-technical-advisory-committee.

    Comment: We received comments about how PFPMs relate to specialty physicians and to primary care physicians. Commenters requested that CMS prioritize PFPMs proposed by the specialty community and those including specialist physicians, or prioritize PFPMs generally. One commenter suggested that CMS prioritize PFPMs that use all members of the health care team. One commenter recommended that CMS invest in academic medical centers to support the testing of specialty Advanced APMs. A few commenters recommended CMS collaborate with the PTAC to expand opportunities for primary care-focused Advanced APMs and engage the PTAC to assist in the development, tracking, and reporting of primary care spending as a share of total health care spending.

    Response: We thank commenters for their feedback and appreciate their enthusiasm for the development of PFPMs. We are not finalizing a policy that establishes priorities for PFPMs beyond prioritizing those that meet the Secretary's criteria. We plan to pay close attention to PFPM proposals reviewed by the PTAC and look forward to reviewing the PTAC's comments and recommendations on PFPM proposals from stakeholders for both specialty models and primary care models.

    Comment: We received comments on the role and composition of the PTAC. Commenters stated they believe the PTAC can play an important role in developing new APMs and supported the opportunity for APM ideas to come from a variety of sources. Two commenters were in favor of the role of the PTAC and emphasized the value of input from clinicians and other stakeholders. A few commenters recommended that clinical experts be consulted as part of the PTAC review process, particularly for specialty models, and one commenter requested that the PTAC ensure that all stakeholders are included in the PTAC review process. One commenter recommended that the PTAC focus on overarching strategic goals, including private sector initiatives, to lead to greater alignment. One commenter requested more care team diversity, including patients, on the PTAC, in order to further the goal of patient-centeredness. One commenter suggested that a particular specialty should be represented on the PTAC.

    Response: We appreciate the interest of commenters in the composition of the PTAC and the PTAC review process. We encourage commenters to engage with the PTAC and to follow its development of the processes it will use to review PFPM proposals. The statute requires that the PTAC be appointed by the GAO, and CMS does not have the authority to appoint members of the PTAC.

    Comment: A few commenters encouraged CMS and the PTAC to provide technical assistance for stakeholders in their work to develop and implement APMs. A few commenters requested that technical assistance include support for a new team approach, data collection and analysis, access to financial resources, and overall ability to achieve APM status. One commenter noted that assistance and Medicare data need to be provided to organizations developing APM proposals to help them design APMs that will qualify as Advanced APMs.

    Response: We will explore opportunities for guidance and assistance that may be provided to stakeholders in drafting PFPM proposals outside of the rulemaking process.

    b. Definition of PFPM

    (1) Definition of PFPM

    Section 1868(c) of the Act does not define the term “physician-focused payment model” (PFPM). In § 414.1465 of the proposed regulatory text, we proposed to add the following definition of PFPM: An Alternative Payment Model wherein Medicare is a payer, which includes physician group practices (PGPs) or individual physicians as APM Entities and targets the quality and costs of physicians' services. We proposed to require that a PFPM target physicians' services in order to meet the definition of PFPM; to address physicians' services, a PFPM might address such elements as physician behavior or clinical decision-making. APM Entities may be individual eligible clinicians, PGPs, or other entities, depending on the payment model's design. We proposed that a PFPM must focus on physicians' services and contain either individual physicians or PGPs as APM Entities, although it might also include facilities or other practitioner types.

    We proposed to require that PFPMs be designed to be tested as APMs with Medicare as a payer. Other Payer APMs would therefore not be PFPMs. We believe this is an appropriate standard for PFPMs because the Secretary is interested in reviewing comments and recommendations from the PTAC on models that may be tested with Medicare as a payer and because the statutory provisions regarding PFPMs and the PTAC are within section 1868 of the Act and title XVIII of the Act, which governs Medicare. A PFPM may include other payers in addition to Medicare under the proposed definition. We believe this definition is appropriate because it could include APMs with arms of their design that would include other payers beyond Medicare, but would not include models that involve only Other Payer APMs.

    We did not propose to limit a PFPM to exclusively targeting physicians and physicians' services because we believe that stakeholders should be able to propose payment models that include additional types of entities, as well as additional services. We did not propose to define PFPM as an APM that exclusively addresses Medicare FFS payments. A proposed PFPM may also include other payers in addition to Medicare, including Medicaid, Medicare Advantage, CHIP, and private payers, which may promote broader participation in PFPMs and greater potential for cost reduction. A PFPM that includes payers in addition to Medicare could potentially include an Other Payer Advanced APM as part of its design in addition to being an APM.

    (2) Relationship Between PFPMs and Advanced APMs

    Section 1868(c) of the Act does not require PFPMs to meet the criteria to be an Advanced APM for purposes of the incentives for participation in Advanced APMs under section 1833(z) of the Act, and we did not propose to define PFPMs solely as Advanced APMs. Stakeholders may therefore propose as PFPMs either Advanced APMs or other APMs that might lead to better care for patients, better health for our communities, and lower health care spending.

    The following is a summary of the comments we received regarding our proposal to define Physician-Focused Payment Model as an APM wherein Medicare is a payer, which includes physician group practices (PGPs) or individual physicians as APM Entities and targets the quality and costs of physicians' services.

    Comment: Many commenters agreed with the proposed definition of a PFPM.

    Response: We thank commenters for their feedback.

    Comment: Commenters requested clarification or expressed confusion about the relationship between PFPMs, APMs, and Advanced APMs. A few commenters requested that proposed PFPMs, if implemented, should not have to meet the Advanced APM criteria to be Advanced APMs. One commenter requested the nominal risk standard for Advanced APMs specifically be changed for PFPMs, and one suggested that we consider payments through Advanced APMs in 2019, not 2017, for purposes of the QP determination for the APM Incentive Payment in 2019 to allow more time for new PFPMs to be included in this calculation. A few commenters stated that the clear Congressional intent was that PFPMs should be included in the Quality Payment Program as a way to support eligible clinician participation in APMs, even if CMS has determined that PFPMs are not necessarily, by definition, Advanced APMs. One commenter requested clarification that CMS is not limited to considering PFPMs only on the timeline and recommendation of the PTAC, and that CMS can develop its own specialty-related PFPMs.

    Response: The definition of PFPM specifies that a PFPM is an APM. APM is defined under section 1833(z)(3)(C) of the Act as any of the following: (1) A model under section 1115A of the Act (other than a health care innovation award); (2) the Shared Savings Program under section 1899 of the Act; (3) a demonstration under section 1866C of the Act; or (4) a demonstration required by federal law. Therefore, if a model is a PFPM, it is also an APM. A model that does not meet the definition of APM is not a PFPM. We anticipate that PFPMs recommended by the PTAC and implemented by CMS will be tested under section 1115A authority. However, a model does not need to be tested under section 1115A of the Act to be a PFPM.

    If a PFPM meets criteria for Advanced APMs under section 1833(z)(3)(D) of the Act, as finalized in this rule, it is an Advanced APM. The criteria for Advanced APMs are specified by statute: The APM must require participants to use certified EHR technology; the APM must provide for payment for covered professional services based on quality measures comparable to those in the quality performance category under MIPS; and the APM must either require that participating APM Entities bear risk for monetary losses of a more than nominal amount under the APM, or be a Medical Home Model expanded under section 1115A(c) of the Act. For example, if a model is tested under section 1115A of the Act and it is not a health care innovation award, it is by definition an APM. If it is tested under section 1115A of the Act, is not a health care innovation award, and meets the criteria for Advanced APMs, it is an Advanced APM. We will not categorically waive the requirements for Advanced APMs for PFPMs we test. Section 1833(z)(3)(C) and (D) of the Act makes a clear distinction between APMs and Advanced APMs, and we do not believe the statutory requirements for Advanced APMs can or should be categorically waived for PFPMs. We retain the flexibility to consider and test PFPMs that are developed within CMS.

    For the QP determination timeline, we note that under the proposed and final policies in section II.F.5. of this final rule with comment period, participation in a PFPM that is determined to be an Advanced APM for 2017, which is the QP Performance Period for the 2019 APM Incentive Payment, offers the opportunity for participants to become QPs. Because eligible clinicians would have the opportunity to become QPs through participation in PFPMs that are Advanced APMs in the same way they would through participation in other Advanced APMs, we do not believe it would be appropriate to modify the QP determination process and APM Incentive Payment timeframe. Further, as stated in section II.F.5. of this final rule with comment period, the exclusion of QPs from MIPS reporting requirements and payment adjustments dictates an operational timeline that permits us to determine and communicate an eligible clinician's status for a payment year (whether QP, Partial QP, or MIPS eligible clinician) to facilitate timely decisions that impact payment and budget neutrality for the relevant payment year. We do not believe this operational timeline would allow us to retrospectively exclude eligible clinicians from MIPS adjustments already in effect based on later participation in a PFPM that is an Advanced APM.

    Comment: A few commenters requested that the definition of PFPM be expanded to include models that do not include physicians or PGPs but include other clinicians as participants. One commenter recommended that PFPM applicants should be required to document how they will include APRN services and how they will use APRNs to the fullest extent of their training.

    Response: We agree with commenters that non-physician practitioners are appropriate for inclusion in PFPMs, and we believe offering all eligible clinicians who have the potential to qualify as QPs the opportunity to propose PFPMs will benefit stakeholders and us in pursuing new opportunities for APMs. We believe it is appropriate to change the definition of PFPM to include models that include any eligible clinicians that fall under the definition in section 1848(k)(3)(B) of the Act. We appreciate that commenters are concerned that non-physician practitioners should be included in Advanced APMs. The list of eligible clinicians for purposes of the APM incentive is defined in section 1833(z)(3)(B) of the Act and includes: Physicians, physician assistants, nurse practitioners, clinical nurse specialists, certified registered nurse anesthetists, certified nurse-midwives, clinical social workers, clinical psychologists, registered dietitians or nutrition professionals, physical or occupational therapists, qualified speech-language pathologists, and qualified audiologists. We are revising the definition of PFPM to include this group of clinicians because these are eligible clinicians under the APM track of the Quality Payment Program. Proposed PFPMs can include non-physician eligible clinicians as participants or as integral to the model design. We do not believe it is necessary to require, as part of the definition of PFPM or within the PFPM criteria, that a particular specialty or category of clinician be addressed.

    Comment: We received comments on how the definition of PFPM relates to APMs and Advanced APMs. One commenter agreed with the proposal not to limit PFPMs to Advanced APMs and was pleased the definition of PFPMs would mean that any PFPM is an APM if tested by CMS. One commenter stated that CMS has gone too far in restricting the ability of APMs to become Advanced APMs and should ensure a more realistic and attainable pathway allowing physician-developed APMs to be recognized.

    Response: As stated above, to be a PFPM, a model must meet the definition of APM under section 1833(z)(3)(C) of the Act. If it meets the criteria for an Advanced APM under section 1833(z)(3)(D) of the Act, it will be an Advanced APM. We do not believe that our policy in defining Advanced APMs will impair the ability of PFPMs to be tested as Advanced APMs. We do not believe there is a reason that PFPMs, as a type of APM, should not be subject to the same criteria as other APMs in order to be considered Advanced APMs.

    Comment: A couple of commenters supported the decision to permit PFPMs to contain both Medicare and other payers, while prohibiting consideration of PFPMs that contain only third-party payers without also involving Medicare. A few commenters supported efforts to broaden the scope of PFPMs by not limiting the definition to only physicians' services and by permitting models to address payments other than Medicare FFS. One commenter suggested the PTAC focus on PFPMs based on models from private payers that are not currently APMs.

    Response: We are finalizing our policy to define a PFPM as one in which Medicare is a payer, while not precluding the inclusion of other payers in addition to Medicare, such as Medicaid or private payers, in the PFPM's design.

    Comment: One commenter proposed changing the language of the definition that addresses physicians' services to instead target “the quality and costs of physician services that the physicians participating in the payment model deliver, order, or can significantly influence.” One commenter suggested that physicians should not be held accountable for being part of an APM in which they do not have a meaningful role. One commenter asked that CMS ensure that PFPMs place physicians at the nexus of control for managing a patient's care, rely on evidence-based guidelines (for example, to avoid reducing care without regard for quality), and incorporate sufficient safeguards to ensure that beneficiaries have access to the best care (for example, proper risk-adjustment and outlier payments). One commenter supported the PFPM definition and suggested that CMS implement stringent safeguards to ensure that the physician(s) remain in an indisputable position of leadership in these cases, reflecting the goal of this aspect of MACRA as being “physician focused.” Another commenter stated that APM Entities in PFPMs, if not explicitly physician-owned, should provide a means for physicians to influence the policies and goals of the organization.

    Response: We thank commenters for their feedback and agree that PFPMs should meaningfully engage eligible clinicians. We are finalizing the definition of PFPM to specify that eligible clinicians in the PFPM must play a core role in implementing the APM's payment methodology, and the PFPM must target the quality and costs of services that these eligible clinicians provide, order, or can significantly influence. We believe this addresses the need for PFPMs to be driven by eligible clinicians without restricting APM Entities in PFPMs to a particular category.

    Comment: One commenter suggested that CMS allow stakeholders the opportunity to develop and test models that are simple to implement and flexible enough to allow clinicians to provide patient-centered care that yields improved patient outcomes. One commenter stated that the review criteria for the PFPM payment methodology should provide flexibility and encourage innovation.

    Response: We thank commenters for their feedback and agree that flexibility and innovation are important goals for PFPMs. We believe that the definition of PFPM and criteria for PFPMs in this rule provide stakeholders the flexibility to develop PFPM proposals that will fit their specialties and support patient-centered care.

    In response to the comments requesting that we include a broader category of clinicians than physicians in PFPMs and that PFPMs be flexible in their focus on the leadership and decision-making of such clinicians, we have changed the definition of PFPM so that it no longer requires that PGPs or individual physicians be included as APM Entities. Instead, PFPMs must give eligible clinicians a core role in implementing the payment methodology. We are also changing the definition of PFPM to include models that include, and that address the services of, any eligible clinicians that fall under the definition of EP in section 1848(k)(3)(B) of the Act. We are finalizing our proposal not to limit PFPMs to Advanced APMs but instead to include models that, if tested, would be APMs and could potentially be Advanced APMs.

    We are finalizing the definition of PFPM to mean an APM: (1) In which Medicare is a payer; (2) in which eligible clinicians that are EPs as defined in section 1848(k)(3)(B) of the Act are participants and play a core role in implementing the APM's payment methodology; and (3) which targets the quality and costs of services that eligible clinicians participating in the Alternative Payment Model provide, order, or can significantly influence.

    c. Finalized PFPM Criteria

    Section 1868(c)(2)(A) of the Act requires the Secretary to establish criteria for PFPMs, including models for specialist physicians, not later than November 1, 2016. The PFPM criteria would be used by the PTAC in discharging its duties under section 1868(c)(2)(C) of the Act to make comments and recommendations to the Secretary on proposed PFPMs. The proposed PFPM criteria were listed in section II.F.10.c.(1). of the proposed rule and at § 414.1465(b) of the proposed regulatory text. We designed the proposed criteria to be broad enough to encompass all physician specialties and provide stakeholders with flexibility in designing PFPMs.

    We proposed PFPM criteria organized into three categories that are consistent with the Administration's strategic goals for achieving better care, smarter spending, and healthier people: Payment incentives; care delivery; and information availability. First, we proposed a category of criteria that promote payment incentives for higher-value care, including paying for value over volume and providing resources and flexibility necessary for practitioners to deliver high-quality health care.

    To address paying for value over volume, we proposed a criterion that PFPMs should provide incentives to practitioners to deliver high-quality health care, and that these incentives should be specifically expected to lead to high-quality health care. We believe that the correct incentives are necessary to drive change to improve quality of care. Similarly, we believe that it is important for a PFPM to provide sufficient flexibility for practitioners to deliver high-quality health care. Flexibility relates to operational feasibility, the PFPM's ability to adapt to accommodate clinical differences in patient subgroups, and the APM Entity's ability to respond to changes in health care.

    This category of criteria also aligns with the Innovation Center's statutory authority under section 1115A of the Act to test models aimed at improving care, reducing expenditures, or achieving both of these goals; accordingly, we proposed a criterion that assesses the extent to which a PFPM proposal is expected to achieve these goals. We believe that estimates of any cost reduction under the PFPM, made as precisely as possible, would also be useful in addressing this criterion.

    We proposed a criterion that a PFPM proposal must be designed to pay APM Entities under a payment methodology that furthers the PFPM criteria. The proposal must address how the payment methodology differs from current Medicare payment methodologies and why the PFPM cannot be tested under current payment methodologies. We believe it is necessary for PFPM proposals to contain such a payment methodology because the PTAC is tasked with reviewing payment models and therefore cannot evaluate a proposal without knowing the payment methodology.

    We also proposed to include in the first category a criterion that the PFPM must either aim to solve an issue in payment policy not addressed in the CMS APM portfolio at the time it is proposed or include in its design APM Entities who have had limited opportunities to participate in APMs. For a list of models in the CMS APM portfolio, please see https://innovation.cms.gov/initiatives/index.html#views=models. We proposed this criterion to promote participation in APMs by broadening and expanding our portfolio of APMs in areas such as geographic location, specialty, condition, and illness, without overly limiting proposed PFPMs. Because proposed PFPMs may satisfy this criterion either by addressing a new issue or by including a new specialty, we believe the criterion is sufficiently broad to allow stakeholders to submit many proposed PFPMs that could expand the CMS APM portfolio. Physicians and practitioners whose opportunities to participate in other PFPMs with us have been limited to date include, for example, those who have not been able to apply for any other PFPM because one has not been designed that would include physicians and practitioners of their specialty. We proposed that a PFPM that includes multiple specialties might meet the PFPM criteria where at least one of the specialties in the proposed PFPM is not currently being addressed by another APM. We made this proposal to reflect the intent of section 1868(c)(2)(A)(i) of the Act, which specifically directs the Secretary to establish PFPM criteria, including models for specialist physicians.

    We also proposed a criterion that a PFPM proposal must have evaluable goals for the impact of the PFPM on cost and quality. To make the decision to expand an APM under section 1115A(c) of the Act, the Secretary must evaluate its success. This standard informed our proposed criterion not only because it would be important for any APMs that are tested under section 1115A(c) of the Act, but also because any APM must be evaluable in order for its success to be measured. It is the evaluation of an APM that tells us whether the APM is successful in reducing cost and improving the quality of health care.

    Second, we proposed a category of criteria that address care delivery improvements that promote better care. Here we proposed criteria to address integration and care coordination, patient choice, and patient safety.

    Third, we proposed a category of criteria that address information enhancements that improve the availability of information to guide decision-making. We believe that information enhancements, particularly through the use of technology, are important to improving Medicare payment policy and delivering better care. Here we proposed a criterion for encouraging use of health information technology.

    In carrying out its review of PFPM proposals, the PTAC shall assess whether a proposed PFPM meets the criteria sought by the Secretary, as required by section 1868(c)(2)(C) of the Act. We proposed the following PFPM criteria. The Secretary seeks PFPMs that:

    (1) Incentives: Pay for higher-value care.

    Value over volume: Provide incentives to practitioners to deliver high-quality health care.

    Flexibility: Provide the flexibility needed for practitioners to deliver high-quality health care.

    Quality and Cost: Are anticipated to improve health care quality at no additional cost, maintain health care quality while decreasing cost, or both improve health care quality and decrease cost.

    Payment methodology: Pay APM Entities with a payment methodology designed to achieve the goals of the PFPM criteria. Address in detail through this methodology how Medicare and other payers, if applicable, pay APM Entities; how the payment methodology differs from current payment methodologies; and why the Physician-Focused Payment Model cannot be tested under current payment methodologies.

    Scope: Aim to either directly address an issue in payment policy that broadens and expands the CMS APM portfolio or include APM Entities whose opportunities to participate in APMs have been limited.

    Ability to be evaluated: Have evaluable goals for quality of care, cost, and any other goals of the PFPM.

    (2) Care delivery improvements: Promote better care coordination, protect patient safety, and encourage patient engagement.

    Integration and Care Coordination: Encourage greater integration and care coordination among practitioners and across settings where multiple practitioners or settings are relevant to delivering care to the population treated under the PFPM.

    Patient Choice: Encourage greater attention to the health of the population served while also supporting the unique needs and preferences of individual patients.

    Patient Safety: Aim to maintain or improve standards of patient safety.

    (3) Information Enhancements: Improve the availability of information to guide decision-making.

    Health Information Technology: Encourage use of health information technology to inform care.

    d. CMS Consideration of Models

    In the proposed rule, we described “supplemental information elements” that we find particularly useful in our review when we consider potential APMs. The “supplemental information” is meant to increase the transparency of our process and is not included within the PFPM criteria.

    The following is a summary of the comments we received regarding the Secretary's proposed PFPM criteria and the supplemental information.

    Comment: Many commenters were generally in favor of the proposed criteria and enthusiastic about the opportunity for stakeholders to develop PFPMs. While one commenter was in favor of the proposed criteria because they do not limit PFPMs to a particular specialty, many commenters were concerned that the PFPM criteria narrow the field of potential PFPMs and gave recommendations for specific services, practitioners, specialties, and guidelines that should be incorporated into PFPMs. One commenter requested CMS allow flexibility for PFPMs to meet the criteria to promote parity in the availability of specialty-focused models. One commenter requested we incorporate the preamble language regarding the supplemental information into the body of the criteria. One commenter was in favor of the proposed criteria because they did not require specific quality measures.

    Response: We appreciate the feedback from commenters regarding the proposed PFPM criteria. We are finalizing the quality and cost criterion that the PFPM be anticipated to improve health care quality at no additional cost, maintain health care quality while decreasing cost, or both improve health care quality and decrease cost. This criterion establishes the importance of quality measurement in PFPMs while allowing stakeholders flexibility in identifying the most appropriate way to measure quality in different PFPMs. In response to commenters that expressed concern about the role of non-physician clinicians and non-physician services, we are modifying the proposed definition of PFPMs to include models that include a broader group of clinicians and their services.

    Comment: One commenter was concerned that the proposed PFPM criteria are overly burdensome.

    Response: We designed the PFPM criteria to be broad enough to encompass all physician specialties and provide stakeholders with flexibility in designing proposed PFPMs, and to be consistent with the strategic goals for achieving better care, smarter spending, and healthier people. We believe these criteria will attract model proposals that are specifically aligned to achieve these goals.

    Comment: One commenter suggested that each subcategory of the care delivery goal not be an absolute requirement, particularly where it is not applicable to a specialty PFPM.

    Response: We understand that the Integration and Care Coordination criterion within the care delivery improvements category may not apply to all specialty PFPMs, and as proposed, we accounted for this by stating within the criterion that this applies only “where multiple practitioners or settings are relevant.”

    Comment: One commenter stated that providing information about how the PFPM could incorporate CEHRT would be problematic for a pathology PFPM. Another commenter suggested that criteria under the Information Enhancements category be modified to explicitly address improving the availability of information to all members of the care team, including pharmacists, to guide decision-making in order to encourage communication and information sharing. One commenter supported the information that we stated in the preamble would inform the criterion in the Information Enhancements category.

    Response: Information about use of CEHRT might inform this criterion, but the criterion is not restricted to CEHRT. We decline to add more specificity to this criterion in order to preserve flexibility, but we believe information about how the PFPM would improve the availability of information to all members of the care team would inform this criterion as well as the Integration and Care Coordination criterion.

    Comment: One commenter stated that the PFPM criteria should include an evaluation of whether the entity to which payment will be directed is physician-led and if the majority of the governing board(s) is comprised of independent physicians, members of a participating Independent Practice Association, or physicians employed by physician organizations.

    Response: We have not added a criterion requiring that APM Entities in PFPMs be physician-led or requiring a specific composition of governing boards because we do not wish to limit the scope of potential PFPMs.

    Comment: One commenter stated that our direction regarding “high value services” runs counter to a push toward capitated payments.

    Response: We stated that payments for high-value services that we do not currently (or separately) pay for are changes that can be an important part of moving toward value-based delivery system reform, but that adding payment for specific services without any other change does not constitute a sufficient departure from current payment methodologies to meet our proposed PFPM criteria or to be considered an Advanced APM. This does not preclude PFPM proposals from including capitated payments.

    Comment: We received multiple comments emphasizing the importance of patients in the design of PFPMs. Commenters suggested that the PFPM design should strive not to further fragment care delivery and that PFPMs should be approved that support the move of Medicare to a program that is truly patient-centered and available on a constant basis, regardless of where the patient is located at a given time. One commenter suggested that CMS and the PTAC consider a proposal's impact on patient care, quality, and outcomes in addition to costs and believed that applicants may not be able to analyze the full impact a proposed PFPM may have on quality of care and cost. One commenter suggested adding criteria for patient access and experience. One commenter stated that CMS should require PFPMs to document policies and procedures to ensure that they do not employ discriminatory practices that result in the restriction of patient access to services and treatments furnished by any health care provider acting within the scope of their license. One commenter supported the criteria's strong focus on patient choice. One commenter supported CMS' proposed criteria to address integration and care coordination, patient choice, and patient safety, and suggested that PFPM adherence to these criteria should be assessed in the context of the model's proposed quality measures. One commenter recommended that CMS include patient and consumer advocacy in the development of new PFPMs and quality measures, including establishing a separate, independent consumer advisory committee to help bring the consumer perspective to PFPM proposals coming from the PTAC.

    Response: We appreciate feedback from commenters that underscores the importance of PFPMs emphasizing quality and patient-centered care. We believe that our criteria sufficiently require elements related to quality and in particular that the care delivery improvements category of criteria addresses patient experience.

    Comment: We received multiple comments on the incentive section of the PFPM criteria. A commenter supported CMS' proposed criteria promoting payment incentives for higher-value care. Another commenter asked that the PTAC and CMS be cautious in approaching procedural episode-based payments, as the commenter believed it is better to structure episodes involving hospice and palliative medicine as a separate bundle, commencing once the services are necessary, rather than including them in a more general condition-specific bundle. One commenter requested that a specific payment methodology be included in the design of PFPMs. Another commenter encouraged CMS to add questions to the PFPM review criteria related to whether a model submitted to the PTAC considers the inclusion of Hospice and Palliative Medicine providers or, at a minimum, how it will deliver care to patients with serious, life-limiting illness. One commenter stated that there was too much emphasis in the language for the PFPM section on “incentives” and not enough on paying adequately for needed care. This commenter stated that the PFPM incentives were set up to benefit PFPMs that pay adequately for lower volumes of services rather than those that try to incentivize higher quality and included suggested language changes to fix this part of the Quality Payment Program.

    Response: We appreciate the comments on the incentive category of PFPM criteria. These criteria were designed to promote payment incentives for higher-value care, including paying for value over volume and providing resources and flexibility necessary for practitioners to deliver high-quality health care.

    Comment: One commenter suggested that PFPMs should be designed to mitigate the risk of excess spending, perhaps by limiting guaranteed additional payments, or ensuring a balance between guaranteed payment and performance-based payment.

    Response: We agree that these are sound ideas for the payment structure of PFPMs, but we are not requiring the payment methodology criterion be met through a specific payment structure.

    Comment: A commenter suggested that entities should be large enough to detect changes in spending and outcome measures. A commenter recommended that CMS provide more detail on evaluable goals, specifically on evaluation study design and the level of precision the evaluation may reach.

    Response: We agree that a means to assess the impact of a PFPM is an important part of its design and would inform the “ability to be evaluated” criterion. Because the diversity of potential proposed PFPMs will necessitate a variety of evaluation designs, we do not require that a specific evaluation design be utilized. As we do for other APMs, we will evaluate the scope of impact of potential PFPMs, and consider whether the potential outcomes merit the required investments and opportunity costs, and whether the impact of the payment model can be measured to determine if it should be expanded.

    Comment: Two commenters requested that CMS or the PTAC provide formal guidance and clarification on the definition of “supplemental information” and how it impacts a PFPM proposal. One commenter suggested that CMS specify that other items “the PTAC may request or stakeholders may wish to provide” are not essential and will not result in any negative consequences in the PTAC consideration process. One commenter asked CMS to clarify if the entity submitting a proposal will be able to recruit participants after submission of the proposal to the PTAC and/or CMS.

    Response: We thank commenters for their interest in the “supplemental information” discussed in the proposed rule. The “supplemental information” is meant to increase the transparency of our APM review process and is not included within the PFPM criteria. If a PFPM is tested, it will not be necessary for the entity submitting a proposal to recruit applicants to participate in the PFPM.

    Comment: One commenter suggested that it may be particularly helpful to ensure that there are sufficient models addressing vulnerable and underserved beneficiary populations. Another commenter believed that applicants should describe how they will monitor changes in disparities during the model implementation.

    Response: We appreciate the concerns from commenters that PFPMs should address vulnerable populations and monitor changes in disparities during implementation. While we do not have a criterion that requires considerations for any specific population, the scope criterion requires that PFPMs aim to solve an issue in payment policy that broadens and expands the APM portfolio at the time it is tested. We will consider how changes in disparities during model implementation would be monitored as part of our consideration of the scope criterion.

    Comment: We received numerous comments about the scope criterion. A few commenters supported requiring that proposed PFPMs expand the CMS portfolio of APMs. A few commenters recommended that PFPM proposals should focus on physicians who do not have the opportunity to participate in other APMs because they are not available to such physicians' specialties. One commenter stated that PFPMs should not duplicate existing efforts and should harmonize with one another to ensure appropriate care coordination and transitions of care for patients. Commenters expressed concern that the scope criterion as proposed is vague and could be interpreted to mean, for example, that the agency is uninterested in models that address cancer care, because there is already an APM specific to cancer care: The Oncology Care Model (OCM). A few commenters stated that multiple APMs should be available to physicians and recommended development of a policy that the current availability of APMs addressing a disease, condition, or episode should not preclude PFPM proposals on the same disease, condition, or episode(s) within a different APM. A few commenters stated that different designs and approaches for the same disease, condition, or episode should be encouraged and that the approaches should identify decision points and treatment protocols. One commenter suggested physicians that have already participated in a PFPM with CMS be excluded from participation in the proposed PFPMs. One commenter requested that CMS not be overly restrictive, believing that innovation in PFPMs could generate ideas about how to better address issues that are perhaps already somewhat incorporated into existing models. One commenter suggested that we specify that PFPMs should rely on evidence-based information to either directly address an issue in payment policy that broadens and expands the CMS APM portfolio or include APM entities whose opportunities to participate in APMs have been limited.

    Response: In response to comments we agree that the scope criterion should be broadened and clarified. Regarding who may be included in the PFPM's design, we recognize the opportunity the PTAC represents for clinicians who have not already participated in APMs, but at the same time we do not want to unduly limit the scope of proposals we receive through the PTAC by excluding PFPMs from consideration that include clinicians who have had other opportunities to participate in APMs. We understand the desire of clinicians who have not already participated in an APM with CMS to begin participating through a proposal submission to the PTAC. To ensure we do not obstruct proposals that may have significant positive outcomes for patients and CMS, however, we will not limit proposals to eligible clinicians based on their past participation in APMs. Additionally, we recognize that while CMS may already have an APM addressing a specific disease, condition, or episode, there may still be unique, valuable payment approaches to similar conditions. We are finalizing the scope criterion to require that PFPMs aim to broaden or expand the CMS APM portfolio by addressing an issue in payment policy in a new way or including APM Entities whose opportunities to participate in APMs have been limited. We believe that this criterion will further our goal to promote participation in APMs by broadening and expanding our portfolio of APMs in areas such as geographic location, specialty, condition, and illness, without overly limiting proposed PFPMs. This criterion can be met by either addressing an issue in payment policy in a new way or including APM Entities whose opportunities to participate in APMs have been limited, therefore it is broad to allow stakeholders to submit many proposed PFPMs that could expand the CMS APM portfolio.

    We are finalizing our proposed criteria with one modification. We are broadening the proposed scope criterion. The final scope criterion now requires that PFPMs aim to broaden or expand the CMS APM portfolio by addressing an issue in payment policy in a new way or including APM Entities whose opportunities to participate in APMs have been limited. We are finalizing the other PFPM criteria as proposed.

    III. Collection of Information Requirements

    Under the Paperwork Reduction Act of 1995, we are required to provide 30-day notice in the Federal Register and solicit public comment before a collection of information requirement is submitted to the Office of Management and Budget (OMB) for review and approval. In order to fairly evaluate whether an information collection should be approved by OMB, section 3506(c)(2)(A) of the Paperwork Reduction Act of 1995 requires that we solicit comment on the following issues:

    • The need for the information collection and its usefulness in carrying out the proper functions of our agency.

    • The accuracy of our estimate of the information collection burden.

    • The quality, utility, and clarity of the information to be collected.

    • Recommendations to minimize the information collection burden on the affected public, including automated collection techniques.

    In the proposed rule (81 FR 28350 through 28364), we solicited public comment on each of the section 3506(c)(2)(A)-required issues for the following information collection requirements. PRA-related comments were received as indicated below under the relevant information collection requirements (ICRs).

    In response to our request for public comment on our Information Collections, we received several general comments regarding the burden of data collection and the privacy of CMS information collection.

    Comment: Several commenters agreed with CMS's effort to streamline multiple reporting programs under one single program to save both time and cost for healthcare providers in tracking and reporting quality to CMS. Several commenters recommended further streamlining and simplifying data reporting to reduce the burden of reporting.

    Response: In response to public comments, we have further streamlined reporting in the quality, advancing care information, and improvement activities performance categories between the proposal and the final rule with comment period. In part because of this additional streamlining, the total burden estimate has been reduced between the proposal and the final rule with comment period. The gross burden estimate in the proposal was 12,493,654 burden hours and a burden cost of $1,327,177,693 (81 FR 28362). The finalized burden estimates are 10,947,453 burden hours and a burden cost of $1,311,245,806.

    Comment: Several commenters disagreed with the proposed rule and suggested that it be withdrawn. The commenters stated that the proposals were unethical and would jeopardize patient confidentiality through the sharing of patient data with the government.

    Response: Patient confidentiality is very important to us. Please note that we will collect and disclose personally identifiable information (PII) and/or individually identifiable health information only in accordance with applicable privacy and security laws, including, but not limited to, the Privacy Act of 1974 and the HIPAA Privacy Rule.

    In summary, we are finalizing policies that further streamline reporting and reduce burden and have provided additional information to commenters on privacy protections in response to public comments.

    The remainder of this section focuses on the estimated burden of clinicians and groups that submit data in response to information collections established by this final rule with comment period. This estimated burden is expressed in terms of time and labor costs. First, we discuss the wage estimates that are used to calculate the labor costs associated with data submission for all the information collection requirements established by this final rule with comment period. Second, we provide a framework summarizing how the information collection requirements vary by the type of data submitted and the type of respondent submitting the data (individual clinician, group, APM Entity, or APM billing TIN). Third, we provide burden estimate calculations for each of the information collection requirements established by this final rule with comment period. Finally, we calculate the total gross and net burden across all information collection requirements.

    A. Wage Estimates

    To derive wage estimates, we used data from the U.S. Bureau of Labor Statistics' (BLS) May 2015 National Occupational Employment and Wage Estimates. Table 42 presents the mean hourly wage, the cost of fringe benefits and overhead, and the adjusted hourly wages for billing and posting clerks, computer systems analysts, physicians, practice administrators, and licensed practical nurses as derived from this data. We believe these are the primary positions that will be involved in the collection and reporting of information under this regulation. We have adjusted these employee hourly wage estimates by a factor of 100 percent to reflect current HHS department-wide guidance on estimating the cost of fringe benefits and overhead. These are necessarily rough adjustments, both because fringe benefits and overhead costs vary significantly from employer to employer and because methods of estimating these costs vary widely from study to study. Nonetheless, there is no practical alternative and we believe that these are reasonable estimation methods. In addition, to calculate beneficiary time costs, we have used wage estimates for Civilian, all occupations, using the same BLS data discussed above. We have not adjusted these costs for fringe benefits and overhead because direct wage costs represent the “opportunity cost” to beneficiaries themselves for time spent in health care settings.

    Table 42—Adjusted Hourly Wages Used in Burden Estimates

  • Billing and Posting Clerks (occupational code 43-3021): mean hourly wage $17.60/hr.; fringe benefits and overhead $17.60/hr.; adjusted hourly wage $35.20/hr.
  • Computer Systems Analysts (occupational code 15-1121): mean hourly wage $43.36/hr.; fringe benefits and overhead $43.36/hr.; adjusted hourly wage $86.72/hr.
  • Physicians (occupational code 29-1060): mean hourly wage $97.33/hr.; fringe benefits and overhead $97.33/hr.; adjusted hourly wage $194.66/hr.
  • Practice Administrator (occupational code 11-9111): mean hourly wage $50.99/hr.; fringe benefits and overhead $50.99/hr.; adjusted hourly wage $101.98/hr.
  • Licensed Practical Nurse (LPN) (occupational code 29-2061): mean hourly wage $21.17/hr.; fringe benefits and overhead $21.17/hr.; adjusted hourly wage $42.34/hr.
  • Civilian, All Occupations (occupational code not applicable): mean hourly wage $23.23/hr.; fringe benefits and overhead N/A; adjusted hourly wage $23.23/hr.

    Source: “Occupational Employment and Wage Estimates May 2015,” U.S. Department of Labor, Bureau of Labor Statistics. http://www.bls.gov/oes/current/oes_nat.htm.
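    Purely as an illustration, and not as part of the rule or its official methodology, the following Python sketch reproduces the wage adjustment described above: each BLS mean hourly wage is doubled to account for fringe benefits and overhead, yielding the adjusted wages shown in Table 42. The variable names are ours.

        # Illustrative sketch only: reproduce the adjusted hourly wages in Table 42 by applying
        # the 100 percent fringe-benefits-and-overhead adjustment to BLS May 2015 mean wages.
        bls_mean_hourly_wage = {
            "Billing and Posting Clerks": 17.60,
            "Computer Systems Analysts": 43.36,
            "Physicians": 97.33,
            "Practice Administrator": 50.99,
            "Licensed Practical Nurse (LPN)": 21.17,
        }

        for occupation, wage in bls_mean_hourly_wage.items():
            fringe_and_overhead = wage        # the 100 percent adjustment adds an equal amount
            adjusted = wage + fringe_and_overhead
            print(f"{occupation}: ${adjusted:.2f}/hr.")

        # Beneficiary time is valued at the unadjusted civilian mean wage of $23.23/hr.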

    We added occupational titles to the list used in the proposed rule's burden estimates in order to better reflect the skill mix of the staff that we believe will take part in reviewing measure specifications. Specifically, we are adding practice administrator and licensed practical nurse (LPN) to this list. These changes were in response to comments discussed below under section III.C. ICRs Related to Quality Performance Category and Previously Approved under PQRS.

    B. A Framework for Understanding the Burden of MIPS Data Submission

    Because of the wide range of information collection requirements under MIPS, Table 43 presents a framework for understanding how the organizations permitted or required to submit data on behalf of clinicians varies across the types of data, and whether the clinician is a MIPS eligible clinician, MIPS APM participant, or an Advanced APM participant. As shown in the first row of Table 43, MIPS eligible clinicians that are not in MIPS APMs and other clinicians voluntarily submitting data will submit data either as individuals or groups to the quality, advancing care information, and improvement activities performance categories.

    For MIPS APMs, the organizations submitting data on behalf of participating MIPS eligible clinicians will vary across categories of data, and in some instances across APMs. For the performance period in 2017, the quality data submitted by Shared Savings Program Accountable Care Organizations (ACOs) and Next Generation ACOs on behalf of their participants will fulfill both MIPS submission requirements for the quality performance category. For the advancing care information performance category, billing TINs will submit data on behalf of participants who are MIPS eligible clinicians. For the improvement activities performance category, we will assume no reporting burden for MIPS APM participants because CMS will assign the improvement activities performance category score at the MIPS APM level and all APM Entity groups in the same MIPS APM will receive the same score. Advanced APM participants who are determined to be Partial QPs will be required to submit elections as to whether they will participate in MIPS, which is discussed in more detail in section III.I. of this final rule with comment period.

    Table 43—Clinicians or Organizations Submitting MIPS Data on Behalf of Clinicians, by Type of Data and Category of Clinician

    MIPS Eligible Clinicians (not in MIPS APMs) and other clinicians voluntarily submitting data:
  • Quality performance category: As groups or individuals.
  • Advancing care information performance category: As groups or individuals.
  • Improvement activities performance category: As groups or individuals.
  • Partial QP election: Not applicable.

    Eligible Clinicians participating in the Shared Savings Program:
  • Quality performance category: ACOs submit to the CMS Web Interface on behalf of their participating MIPS eligible clinicians.
  • Advancing care information performance category: Each TIN in the APM Entity group reports advancing care information to MIPS.36
  • Improvement activities performance category: CMS will assign the same improvement activities performance category score to each APM Entity group based on the activities involved in participation in the Shared Savings Program.* [The burden estimates assume no improvement activities reporting burden for APM participants.]
  • Partial QP election: Advanced APM Entities will make the election for participating MIPS eligible clinicians.

    Eligible Clinicians in the Next Generation ACO Model:
  • Quality performance category: ACOs submit to the CMS Web Interface on behalf of their participating MIPS eligible clinicians.
  • Advancing care information performance category: Each MIPS eligible clinician in the APM Entity group reports advancing care information to MIPS through either group TIN or individual reporting. [The burden estimates assume TIN-level reporting.]
  • Improvement activities performance category: CMS will assign the same improvement activities performance category score to each APM Entity group based on the activities involved in participation in the Next Generation ACO Model.* [The burden estimates assume no improvement activities reporting burden for APM participants.]
  • Partial QP election: Advanced APM Entities will make the election for participating eligible clinicians.

    Eligible Clinicians participating in MIPS APMs other than the Shared Savings Program or Next Generation ACO Model:
  • Quality performance category: The APM Entity group would not be assessed on quality under MIPS in the first performance period. The APM Entity group would submit to CMS the quality measures required by the APM. [No burden for submitting MIPS quality data.]
  • Advancing care information performance category: Each MIPS eligible clinician in the APM Entity group reports advancing care information to MIPS through either group TIN or individual reporting. [The burden estimates assume TIN-level reporting.]
  • Improvement activities performance category: CMS will assign the same improvement activities performance category score to each APM Entity group based on the activities involved in participation in the MIPS APM.* [The burden estimates assume no improvement activities reporting burden for APM participants.]
  • Partial QP election: Advanced APM Entities will make the election for participating eligible clinicians.

    * APM Entity groups participating in MIPS APMs do not need to report improvement activities data unless the CMS-assigned improvement activities score is below the maximum improvement activities score.

    We did not receive comments on the framework for understanding the burden of MIPS data submission. However, we are updating the framework to reflect changes in reporting requirements for participants in MIPS APMs, as discussed in section II.E.h of this final rule with comment period.

    36 For MIPS APMs other than the Shared Savings Program, both group and individual clinician advancing care information data will be accepted. If both group and individual scores are submitted for the same MIPS APM Entity, CMS would take the higher score for each TIN/NPI. The TIN/NPI scores are then aggregated for the APM Entity score.

    C. ICRs Regarding Quality Performance Category (§ 414.1330 and § 414.1335) and Previously Approved Under PQRS

    We anticipate that two groups of clinicians will submit quality data under MIPS: those who submit as MIPS eligible clinicians and other clinicians who opt to submit data voluntarily but will not be subject to MIPS payment adjustments. Based on 2015 data from the PQRS and other CMS sources,37 we estimate that up to 611,876 (or 88 percent of) MIPS eligible clinicians will submit quality performance category data, including those participating as groups. Historically, the PQRS has never experienced 100 percent participation; the participation rate for 2014 was 63 percent. For purposes of these analyses, we assume that clinicians who participated in the 2015 PQRS will continue to submit quality data under MIPS as either MIPS eligible clinicians or voluntary reporters. We also assume that the number of MIPS eligible clinicians will be the same in the transition year as it was in our estimate based on 2015 data. Similarly, we assume that the population of clinicians excluded from MIPS will be the same size in 2017 as it was in our 2015 data. We anticipate that the professionals submitting data voluntarily will include Medicare clinicians that are ineligible clinician types, clinicians that meet the low-volume threshold, and newly enrolled Medicare clinicians.38 Based on those assumptions, we estimate that an additional 296,776 clinicians, or 44 percent of clinicians excluded from MIPS, will submit MIPS quality data voluntarily.

    37 The other data sources include 2014 VM data, 2015 PECOS data, and Medicare Part B claims data from 2014 and 2015.

    38 The category of 668,090 clinicians permitted to voluntarily submit data includes 199,308 ineligible clinician types, 85,268 newly enrolled Medicare clinicians, and 383,514 low-volume clinicians. See Table 57 in section V.D of this final rule with comment period for additional details on the estimated counts of clinicians excluded from or ineligible for MIPS.

    Our burden estimates for quality data submission combine the burden for MIPS eligible clinicians and other clinicians submitting data voluntarily. We assume clinicians will continue to submit quality data under the same submission mechanisms that they used under the 2015 PQRS. Using the 2015 PQRS counts of individuals and groups submitting through various mechanisms, we assume that 332,729 clinicians will submit as individuals through claims submission mechanisms; 258,993 clinicians will submit as individuals or groups through qualified registry or QCDR submission mechanisms; 105,987 clinicians will submit as individuals or groups through EHR submission mechanisms; and 107,884 clinicians will submit as groups through the CMS Web Interface. We also assume that clinicians that submitted quality data as groups under the 2015 PQRS will continue to do so in the first MIPS performance year. Specifically, we assume that 2,678 groups will submit data via QCDR and registry submission mechanisms on behalf of 139,792 clinicians; 903 groups will submit via EHR submission mechanisms on behalf of 54,460 eligible clinicians; and 299 groups will submit data via the CMS Web Interface on behalf of 107,884 clinicians. For CMS Web Interface submission by Shared Savings Program ACOs and Next Generation ACOs, we assume that the 2017 counts of APM Entities and their participants will be the same as the 2016 counts. Specifically, we assume that 433 Shared Savings Program ACOs will submit on behalf of 140,341 participants and 18 Next Generation ACOs will submit on behalf of 24,144 participants.

    For clinicians or groups, the burden associated with the requirements of the MIPS quality performance category is the time and effort associated with clinicians identifying applicable quality measures, and submission of the measures.

    The burden estimates were revised to reflect differences between the policies established in this final rule with comment period and those proposed in the proposed rule. In addition, the burden estimates were revised in response to public comments about the underlying assumptions, which are discussed at the end of this section. As a result of these revisions, the burden estimates have changed: the gross burden estimate in the proposed rule was 12,493,654 burden hours with an associated burden cost of $1,327,177,693 (81 FR 28362), while the finalized burden estimates are 10,894,214 burden hours with an associated burden cost of $1,311,245,806.

    Several differences between the revised policies set forth in this final rule with comment period and the policies in the proposed rule are reflected in the burden estimates, including the reduction in the number of required advancing care information measures from 11 to five and the reduction in the number of recommended improvement activities from six to four. The burden estimates also reflect a simplification of the data submission requirements for MIPS APM participants. Specifically, this final rule with comment period does not generally require MIPS APM participants to submit improvement activities data, whereas the proposed rule did. For the advancing care information performance category, this final rule with comment period establishes the capability for participants in MIPS APMs other than the Shared Savings Program to submit data at the billing TIN level. In contrast, we had proposed that participants in Shared Savings Program ACOs submit advancing care information data at the billing TIN level and participants in other MIPS APMs submit advancing care information data at the individual clinician level.

    Finally, under the revised policy set forth in this final rule with comment period, Advanced APM participants will be notified about their QP or Partial QP status before the end of the performance period, whereas in the proposed rule, Advanced APM participants would not have been notified of their QP or Partial QP status until after the end of the submission period. Due to the timing of the QP and Partial QP status data, the proposed rule's burden estimates assumed that all Advanced APM Entities would be required to submit Partial QP election data. In the final rule with comment period, we assume the vast majority of Advanced APM participants will not be required to submit Partial QP election data.

    In addition to policy differences between the proposed rule and final rule with comment period, the burden estimates also reflect changes in methods. In response to public comments, we have changed our assumptions about the number of hours and skill mix of labor needed to review quality measure specifications. We have also changed our assumptions to more accurately reflect the efficiency gains from group reporting. In the proposed rule, we assumed that the burden per clinician was the same whether they submitted as an individual or as part of a group. In this final rule with comment period's burden estimates, we calculate the burden at the level of the respondent (group or individual clinician) submitting data, and assume the average burden per respondent is the same.

    These burden estimates have some limitations. We believe it is difficult to quantify the burden accurately because clinicians and groups may have different processes for integrating quality data submission into their practices' work flows. Moreover, the time needed for a clinician to review quality measures and other information, select measures applicable to their patients and the services they furnish, and incorporate the use of quality data codes into the office workflows is expected to vary along with the number of measures that are potentially applicable to a given clinician's practice. Further, the final burden estimates are based on historical rates of participation in the PQRS program, and the rate of participation in MIPS is expected to differ.

    We believe the burden associated with actually submitting the quality measures will vary depending on the submission method selected by the clinician or group. As such, we break down the burden estimates by clinicians and groups according to the submission method used.

    We anticipate that clinicians and groups using claims, QCDR and registry, and EHR submission mechanisms will have the same start-up costs related to reviewing measure specifications. As such, we estimate for clinicians and groups using any of these three submission mechanisms a total of 8 staff hours needed to review the quality measures list, review the various submission options, select the most appropriate submission option, identify the applicable measures or specialty measure sets for which they can report the necessary information, review the measure specifications for the selected measures or measures group, and incorporate submission of the selected measures or specialty measure sets into the office work flows. Building on data in a recent Health Affairs article (Casalino et al., 2016) http://content.healthaffairs.org/content/35/3/401.abstract we assume that a range of expertise is needed to review quality measures: 3 hours of an administrator's time, 2 hours of a clinician's time, 1 hour of a LPN/medical assistant's time, 1 hour of a computer systems analyst's time, and 1 hour of a billing clerk's time.39 We estimate that the start-up cost for a MIPS eligible clinician's practice to review measure specifications is $859.52, including 3 hours of a practice administrator's time (3 hours × $101.98/hour = $305.94), 2 hours of a clinician's time (2 hours × $194.66/hour = $389.32), 1 hour of a LPN/medical assistant's time (1 hour × $42.34/hour = $42.34), 1 hour of a computer systems analyst's time (1 hour × $86.72/hour = $86.72), and 1 hour of a billing clerk's time (1 hour × $35.20/hour = $35.20). These start-up costs pertain to the specific quality submission methods below, and hence appear in the burden estimate tables.40

    39 Our burden estimates are based on prorated versions of the estimates for reviewing measure specifications in Lawrence P. Casalino et al., “US Physician Practices Spend More than $15.4 Billion Annually to Report Quality Measures,” Health Affairs, 35, no. 3 (2016): 401-406. The estimates were annualized to 50 weeks per year, then prorated to reflect that Medicare revenue is 30 percent of all revenue paid by insurers, and then adjusted to reflect the decrease from 9 required quality measures under PQRS to 6 required measures under MIPS.

    40 The one exception is the start-up cost for a billing clerk to submit data is not listed in the CMS Web Interface Reporting Burden because the CMS Web Interface measures are very similar to the GPRO Web Interface measures used in the 2016 PQRS.
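    The start-up cost arithmetic used throughout the burden estimates below can be checked with the following illustrative Python sketch. It is not part of the rule; it simply multiplies the assumed skill mix by the adjusted hourly wages in Table 42.

        # Illustrative sketch only: start-up cost to review quality measure specifications,
        # using the skill mix (hours) and adjusted hourly wages stated above.
        hours_by_role = {
            "practice administrator": 3,
            "physician": 2,
            "LPN/medical assistant": 1,
            "computer systems analyst": 1,
            "billing clerk": 1,
        }
        adjusted_wage = {
            "practice administrator": 101.98,
            "physician": 194.66,
            "LPN/medical assistant": 42.34,
            "computer systems analyst": 86.72,
            "billing clerk": 35.20,
        }

        total_hours = sum(hours_by_role.values())  # 8 staff hours
        total_cost = sum(hours_by_role[role] * adjusted_wage[role] for role in hours_by_role)
        print(f"Start-up review time: {total_hours} hours, cost: ${total_cost:.2f}")  # $859.52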

    For the purposes of our burden estimates for the claims, registry and QCDR, and EHR submission mechanisms, we also assume that, on average, each clinician or group will submit six quality measures. Given the lack of historical data on MIPS, it is difficult to estimate the number of physicians who will voluntarily elect to test this system by submitting fewer than the six measures required for many clinicians. We believe that the number of clinicians and groups that submit fewer than six measures as they gain experience with the new system may be balanced out by the number of clinicians and groups that continue to submit more than six measures because they were required to submit nine measures under the PQRS.

    The revised quality performance requirements and burden estimates were submitted along with all other ICRs listed below under a new OMB control number (0938-NEW). Given that in the first year of implementation CAHPS for MIPS is replacing and using the same questions as CAHPS for the PQRS, the CAHPS for MIPS performance requirements and burden estimates were submitted as a request for continuation of OMB control number (0938-1222), CAHPS for PQRS.

    We received several general comments on the quality performance category burden estimates.

    Comment: Several commenters believed that the burden estimates in the Collection of Information section of the proposed rule were too low because MIPS eligible clinicians would require extensive time to become familiar with the program, including quality data reporting, in the transition year.

    Response: The estimated burden to become familiar with quality measure specifications has been increased from 1 hour in the proposed rule to 2 hours of clinician time for the transition year of the program. In future program years, we anticipate that the burden will be reduced as clinicians become more familiar with the quality measures and submission requirements.

    Comment: Several commenters disagreed with the assumption that a billing clerk could review proposed measures specifications due to their complexity.

    Response: We agree with the commenters, and believe that due to the complexity of measure specifications, a broader range of occupational titles would need to be involved in reviewing measure specifications. In the proposed rule, we assumed that each practice would require 6 hours of a billing clerk's time and 1 hour of a clinician's time to review measure specifications. As noted above, we have revised our burden estimates to include a mix of staff needed to review quality measure specifications using calculations informed by a recent Health Affairs article (http://content.healthaffairs.org/content/35/3/401.abstract). We assume that the skill mix needed to review measure specifications includes 3 hours of practice administrator time, 2 hours of clinician time, 1 hour of LPN/medical assistant time, 1 hour of computer systems analyst time, and 1 hour of billing clerk time.

    Comment: One commenter noted the reduction in the number of quality measures would reduce burden.

    Response: As noted above, the estimated burden to become familiar with quality measure specifications has been increased from 1 hour of clinician time to 2 hours of clinician time. After the transition year, we expect that the burden for quality measures submission will continue to decline in future years as MIPS eligible clinicians become more familiar with quality measures and submission requirements.

    Comment: One commenter requested that CMS provide time and cost estimates for determining which quality measures to report.

    Response: As noted above, our burden estimates factor in 8 hours of staff time to review quality measure specifications, which includes evaluating which quality measures to report. No further changes will be made in the burden estimates.

    In summary, CMS made several changes to the quality performance category data burden estimates in response to comments, including increasing our estimate of the time required to review measure specifications from 7 to 8 hours, and assuming that a broader and more skilled mix of occupational titles would be needed to review measure specifications. In addition, the burden estimates were revised to reflect updated 2015 wage and PQRS data, and to more accurately reflect the burden of group reporting.

    1. Burden for Quality Data Submission by Clinicians: Claims-Based Submission

    As noted above, we assume that 332,729 individual clinicians will submit quality data via claims based on 2015 PQRS data. We anticipate the claims submission process for MIPS will be operationally similar to the way it functioned under the PQRS. Specifically, clinicians will need to gather the required information, select the appropriate quality data codes (QDCs), and include the appropriate QDCs on the claims they submit for payment. Clinicians will collect QDCs as additional (optional) line items on the CMS-1500 claim form or the electronic equivalent HIPAA transaction 837-P, approved by OMB under control number 0938-0999.

    The total estimated burden of claims-based submission will vary along with the volume of claims on which the submission is based. Based on our experience with the PQRS, we estimate that the burden for submission of quality data will range from 0.22 hours to 10.8 hours per clinician. The wide range of estimates for the time required for a clinician to submit quality measures via claims reflects the wide variation in complexity of submission across different clinician quality measures. As shown in Table 44, we also estimate that the cost of quality data submission using claims will range from $19.08 (0.22 hours × $86.72) to $936.58 (10.8 hours × $86.72). The burden will also involve becoming familiar with MIPS data submission requirements. We believe that the start-up cost for a clinician's practice to review measure specifications totals 8 hours, which includes 3 hours of a practice administrator's time (3 hours × $101.98 = $305.94), 2 hours of a clinician's time (2 hours × $194.66/hour = $389.32), 1 hour of a LPN/medical assistant's time (1 hour × $42.34 = $42.34), 1 hour of a computer systems analyst's time (1 hour × $86.72 = $86.72), and 1 hour of a billing clerk's time (1 hour × $35.20/hour = $35.20). These start-up costs pertain to the specific quality submission methods below, and hence appear in the burden estimate tables.

    Considering both data submission and start-up costs, the total estimated burden hours per clinician ranges from a minimum of 8.22 hours (0.22 + 3 + 2 + 1 + 1 + 1) to a maximum of 18.8 hours (10.8 + 3 + 2 + 1 + 1 + 1). The total estimated annual cost per clinician ranges from the minimum estimate of $878.60 ($19.08 + $305.94 + $389.32 + $42.34 + $86.72 + $35.20) to a maximum estimate of $1,796.10 ($936.58 + $305.94 + $389.32 + $42.34 + $86.72 + $35.20). Therefore, total annual burden cost is estimated to range from a minimum burden estimate of $292,335,167 (332,729 × $878.60) to a maximum burden estimate of $597,613,226 (332,729 × $1,796.10).

    Based on the assumptions discussed above, Table 44 summarizes the range of total annual burden associated with clinicians using the claims submission mechanism.

    41 In Tables 44-55, the numbers have been truncated to two decimals for readability.

    Table 44—Burden Estimate for Quality Performance Category: Clinicians Using the Claims Submission Mechanism 41
    (Values shown as minimum burden estimate / median burden estimate / maximum burden estimate.)

  • Estimated # of Participating Clinicians (a): 332,729 / 332,729 / 332,729
  • Burden Hours Per Clinician to Submit Quality Data (b): 0.22 / 1.58 / 10.8
  • Estimated # of Hours Practice Administrator Review Measure Specifications (c): 3 / 3 / 3
  • Estimated # of Hours Computer Systems Analyst Review Measure Specifications (d): 1 / 1 / 1
  • Estimated # of Hours LPN Review Measure Specifications (e): 1 / 1 / 1
  • Estimated # of Hours Billing Clerk Review Measure Specifications (f): 1 / 1 / 1
  • Estimated # of Hours Physician Review Measure Specifications (g): 2 / 2 / 2
  • Estimated Annual Burden Hours Per Clinician (h) = (b) + (c) + (d) + (e) + (f) + (g): 8.22 / 9.58 / 18.8
  • Estimated Total Annual Burden Hours (i) = (a) * (h): 2,735,032 / 3,187,544 / 6,255,305
  • Estimated Cost Per Clinician to Submit Quality Data (@computer systems analyst's labor rate of $86.72/hr.) (j): $19.08 / $137.02 / $936.58
  • Estimated Cost Practice Administrator Review Measure Specifications (@practice administrator's labor rate of $101.98/hr.) (k): $305.94 / $305.94 / $305.94
  • Estimated Cost Computer Systems Analyst Review Measure Specifications (@computer systems analyst's labor rate of $86.72/hr.) (l): $86.72 / $86.72 / $86.72
  • Estimated Cost LPN Review Measure Specifications (@LPN's labor rate of $42.34/hr.) (m): $42.34 / $42.34 / $42.34
  • Estimated Cost Billing Clerk Review Measure Specifications (@clerk's labor rate of $35.20/hr.) (n): $35.20 / $35.20 / $35.20
  • Estimated Cost Physician Review Measure Specifications (@physician's labor rate of $194.66/hr.) (p): $389.32 / $389.32 / $389.32
  • Estimated Total Annual Cost Per Eligible Clinician (q) = (j) + (k) + (l) + (m) + (n) + (p): $878.60 / $996.54 / $1,796.10
  • Estimated Total Annual Burden Cost (r) = (a) * (q): $292,335,167 / $331,576,959 / $597,613,226
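    Purely for illustration, the following Python sketch steps through the per-clinician arithmetic in Table 44. The published total dollar figures in the table were computed from unrounded intermediate values, so products of the rounded per-clinician costs shown here approximate, rather than exactly reproduce, those totals.

        # Illustrative sketch only: claims-based submission burden per clinician (Table 44).
        ANALYST_RATE = 86.72    # $/hr., computer systems analyst
        REVIEW_HOURS = 8        # start-up review of measure specifications
        REVIEW_COST = 859.52    # 3 hr admin + 2 hr physician + 1 hr LPN + 1 hr analyst + 1 hr clerk
        CLINICIANS = 332_729

        for label, submit_hours in [("minimum", 0.22), ("median", 1.58), ("maximum", 10.8)]:
            hours_per_clinician = submit_hours + REVIEW_HOURS
            cost_per_clinician = submit_hours * ANALYST_RATE + REVIEW_COST
            print(f"{label}: {hours_per_clinician:.2f} hr/clinician, "
                  f"${cost_per_clinician:.2f}/clinician, "
                  f"{CLINICIANS * hours_per_clinician:,.0f} total annual burden hours")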

    We did not receive comments specific to the claims-based submission burden. We have updated the numbers to reflect updates based on 2015 data and to reflect new assumptions on the staff time required to review measure specifications, but no other changes were made.

    2. Burden for Quality Data Submission by Clinicians and Groups Using Qualified Registry and QCDR Submissions

    As noted above, we assume that 258,993 clinicians will submit quality data as individuals or groups via qualified registry or QCDR submissions based on 2015 PQRS data. Of these, we expect 119,201 clinicians to submit as individuals and 2,678 groups are expected to submit on behalf of the remaining 139,792 clinicians. Given that the number of measures required is the same for clinicians and groups, we expect the burden to be the same for each respondent submitting data via qualified registry or QCDR, whether the clinician is participating in MIPS as an individual or group.

    We estimate that burdens associated with QCDR submissions are similar to the burdens associated with qualified registry submissions. Therefore, we discuss the burden for both data submissions together below. For qualified registry and QCDR submissions, we estimate an additional time burden for respondents (clinicians and groups) to become familiar with MIPS submission requirements and, in some cases, new specialty measure sets. We estimate that the cost for an individual clinician or group to review measure specifications and submit quality data totals $1,119.68, which includes 3 hours per respondent to submit quality data (3 hours × $86.72/hour = $260.16), 3 hours of a practice administrator's time (3 hours × $101.98/hour = $305.94), 2 hours of a clinician's time (2 hours × $194.66/hour = $389.32), 1 hour of a computer systems analyst's time (1 hour × $86.72/hour = $86.72), 1 hour of a LPN/medical assistant's time (1 hour × $42.34/hour = $42.34), and 1 hour of a billing clerk's time (1 hour × $35.20/hour = $35.20). Clinicians and groups will also need to authorize or instruct the qualified registry or QCDR to submit quality measures' results and numerator and denominator data on quality measures to CMS on their behalf. We estimate that the time and effort associated with authorizing or instructing the qualified registry or QCDR to submit this data will be approximately 5 minutes (0.083 hours) per clinician or group (respondent) for a burden cost of $7.20, at a computer systems analyst's labor rate (0.083 hours × $86.72/hour). Hence, we estimate 11.083 burden hours per respondent, with annual total burden hours of 1,350,785 (11.083 burden hours × 121,879 respondents). The total estimated annual cost per respondent is approximately $1,126.88 ($1,119.68 + $7.20). Therefore, total annual burden cost is estimated to be $137,342,735 (121,879 respondents × $1,126.88). Based on these burden requirements and the number of clinicians and groups historically using the qualified registry and QCDR submissions, we have calculated a burden estimate for these submissions:

    Table 45—Burden Estimate for Quality Performance Category: Clinicians (Participating Individually or as Part of a Group) Using the Qualified Registry/QCDR Submission

  • # of Clinicians submitting via QCDR or registry (a): 258,993
  • # of Clinicians submitting as individuals (b): 119,201
  • # of Groups submitting via QCDR or registry on behalf of individual clinicians (c): 2,678
  • # of Respondents (groups plus clinicians submitting as individuals) (d) = (b) + (c): 121,879
  • Estimated Burden Hours Per Respondent to Submit Quality Data (e): 3
  • Estimated # of Hours Practice Administrator Review Measure Specifications (f): 3
  • Estimated # of Hours Computer Systems Analyst Review Measure Specifications (g): 1
  • Estimated # of Hours LPN Review Measure Specifications (h): 1
  • Estimated # of Hours Billing Clerk Review Measure Specifications (i): 1
  • Estimated # of Hours Physician Review Measure Specifications (j): 2
  • Estimated # of Hours Per Respondent to Authorize Qualified Registry to Report on Respondent's Behalf (k): 0.083
  • Estimated Annual Burden Hours Per Respondent (l) = (e) + (f) + (g) + (h) + (i) + (j) + (k): 11.083
  • Estimated Total Annual Burden Hours (m) = (d) * (l): 1,350,785
  • Estimated Cost Per Respondent to Submit Quality Data (@computer systems analyst's labor rate of $86.72/hr.) (n): $260.16
  • Estimated Cost Practice Administrator Review Measure Specifications (@practice administrator's labor rate of $101.98/hr.) (p): $305.94
  • Estimated Cost Computer Systems Analyst Review Measure Specifications (@computer systems analyst's labor rate of $86.72/hr.) (q): $86.72
  • Estimated Cost LPN Review Measure Specifications (@LPN's labor rate of $42.34/hr.) (r): $42.34
  • Estimated Cost Billing Clerk Review Measure Specifications (@clerk's labor rate of $35.20/hr.) (s): $35.20
  • Estimated Cost Physician Review Measure Specifications (@physician's labor rate of $194.66/hr.) (t): $389.32
  • Estimated Burden for Submission Tool Registration, etc. (@computer systems analyst's labor rate of $86.72/hr.) (u): $7.20
  • Estimated Total Annual Cost Per Respondent (v) = (n) + (p) + (q) + (r) + (s) + (t) + (u): $1,126.88
  • Estimated Total Annual Burden Cost (w) = (d) * (v): $137,342,735
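    The respondent-level arithmetic in Table 45 can be reproduced with the following illustrative Python sketch, which is not part of the rule; the published total annual burden cost reflects unrounded intermediate values, so the product computed here is approximate.

        # Illustrative sketch only: qualified registry/QCDR submission burden (Table 45).
        individuals = 119_201
        groups = 2_678
        respondents = individuals + groups                      # 121,879 respondents

        hours_per_respondent = 3 + 3 + 1 + 1 + 1 + 2 + 0.083    # submission + review skill mix + authorization
        cost_per_respondent = (3 * 86.72      # submit quality data (analyst rate)
                               + 3 * 101.98   # practice administrator review
                               + 2 * 194.66   # physician review
                               + 1 * 86.72    # computer systems analyst review
                               + 1 * 42.34    # LPN/medical assistant review
                               + 1 * 35.20    # billing clerk review
                               + 7.20)        # authorize registry/QCDR (0.083 hr at analyst rate)

        print(f"{respondents:,} respondents")
        print(f"{hours_per_respondent:.3f} hours per respondent")
        print(f"{respondents * hours_per_respondent:,.0f} total annual burden hours")
        print(f"${cost_per_respondent:.2f} per respondent")
        # Multiplying by the respondent count approximates the published total of $137,342,735.
        print(f"${respondents * cost_per_respondent:,.2f} total annual burden cost (approximate)")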

    The following is a summary of the comments we received regarding our burden estimate for the quality performance category using registry or QCDR.

    Comment: One commenter stated that the proposed rule underestimated data submission costs because it did not include the fees paid to registries.

    Response: The potential financial costs of fees paid to registries are discussed in section V.C of this final rule with comment period. Because the burden estimates in this section address time costs, not direct financial costs, no changes were made to the burden estimate for data submission to registries and QCDRs as a result of this comment. In section II.E.9.c(3), we are finalizing our proposal to post QCDRs' self-reported costs for MIPS eligible clinicians or groups to use the QCDR on the CMS Web site, alongside their organizational contact information and the services and measures offered.

    In summary, no changes were made to the registry or QCDR data submission burden estimate in response to comments specific to that section. We have updated the numbers to reflect updates based on 2015 data and to reflect new assumptions on group submission and the staff time required to review measure specifications.

    3. Burden for Quality Data Submission by Clinicians and Groups: EHR Submission

    As noted above, based on 2015 PQRS data, we assume that 105,987 clinicians will submit quality data as individuals or groups via EHR submissions; 51,527 clinicians are expected to submit as individuals; and 903 groups are expected to submit on behalf of 54,460 clinicians. We expect the burden to be the same for each respondent submitting data via the EHR submission mechanism, whether the clinician is participating in MIPS as an individual or group.

    Under the EHR submission mechanism, the individual clinician or group may either submit the quality measures data directly to CMS from their EHR or utilize an EHR data submission vendor to submit the data to CMS on the clinician's or group's behalf.

    Based on our experience with the PQRS, we estimate that the time needed to perform all the steps necessary for clinicians or groups to submit quality performance category data includes 8 hours to prepare for participation by reviewing measure specifications (3 hours of a practice administrator's time, 2 hours of a clinician's time, 1 hour of a computer systems analyst's time, 1 hour of a LPN/medical assistant's time, and 1 hour of a billing clerk's time). The time to prepare for EHR data submission also includes 1 hour for the respondent to obtain an account in the CMS identity management system plus 1 hour for submission of a test data file. The final step, submitting the quality data file via the EHR submission mechanism, requires an additional 2 hours.

    To prepare for the EHR submission mechanism, the clinician or group must review the quality measures on which we will be accepting MIPS data extracted from EHRs, select the appropriate quality measures, extract the necessary clinical data from their EHR, and submit the necessary data to the CMS-designated clinical data warehouse or use a health IT vendor to submit the data on behalf of the clinician or group. We assume the burden for submission of quality measures data via EHR is similar for clinicians and groups who submit their data directly to CMS from their CEHRT and clinicians and groups who use an EHR data submission vendor to submit the data on their behalf. To submit data to CMS directly from their CEHRT, clinicians and groups must have access to a CMS-specified identity management system, which we believe takes less than 1 hour to obtain. Once a clinician or group has an account for this CMS-specified identity management system, they will need to extract the necessary clinical data from their EHR, and submit the necessary data to the CMS-designated clinical data warehouse. We estimate that obtaining an account in the CMS-specified identity management system will require 1 hour per respondent for a cost of $86.72 (1 hour × $86.72/hour), and that submitting a test data file to CMS will also require 1 hour per respondent for a cost of $86.72. With respect to submitting the actual data file, we believe that this will take clinicians or groups no more than 2 hours per respondent for a cost of submission of $173.44 (2 hours × $86.72/hour). The burden will also involve becoming familiar with MIPS submission requirements. We believe that the start-up cost for a clinician or group to review measure specifications totals 8 hours, which includes 3 hours of a practice administrator's time (3 hours × $101.98/hour = $305.94), 2 hours of a clinician's time (2 hours × $194.66/hour = $389.32), 1 hour of a computer systems analyst's time (1 hour × $86.72/hour = $86.72), 1 hour of a LPN/medical assistant's time (1 hour × $42.34/hour = $42.34), and 1 hour of a billing clerk's time (1 hour × $35.20/hour = $35.20). Hence, we estimate 12 total burden hours per respondent, with annual total burden hours of 629,160 (12 burden hours × 52,430 respondents). The total estimated annual cost per respondent is $1,206.40. Therefore, total annual burden cost is estimated to be $63,251,552 (52,430 × $1,206.40).

    Based on these burden requirements and the number of clinicians and groups historically using the EHR submission mechanism, we have calculated a burden estimate for the quality data submission using EHR submission mechanism:

    Table 46—Burden Estimate for Quality Performance Category Clinicians (Submitting Individually or as Part of a Group) Using the EHR Submission Mechanism

  • # of Clinicians submitting via EHR (a): 105,987
  • # of Clinicians submitting as individuals (b): 51,527
  • # of Groups submitting via EHR on behalf of individual clinicians (c): 903
  • # of Respondents (groups plus clinicians submitting as individuals) (d) = (b) + (c): 52,430
  • Estimated Burden Hours Per Respondent to Obtain Account in CMS-Specified Identity Management System (e): 1
  • Estimated Burden Hours Per Respondent to Submit Test Data File to CMS (f): 1
  • Estimated Burden Hours Per Respondent to Submit MIPS Quality Data File to CMS (g): 2
  • Estimated # of Hours Practice Administrator Review Measure Specifications (h): 3
  • Estimated # of Hours Computer Systems Analyst Review Measure Specifications (i): 1
  • Estimated # of Hours LPN Review Measure Specifications (j): 1
  • Estimated # of Hours Billing Clerk Review Measure Specifications (k): 1
  • Estimated # of Hours Physician Review Measure Specifications (l): 2
  • Estimated Annual Burden Hours Per Respondent (m) = (e) + (f) + (g) + (h) + (i) + (j) + (k) + (l): 12
  • Estimated Total Annual Burden Hours (n) = (d) * (m): 629,160
  • Estimated Cost Per Respondent to Obtain Account in CMS-Specified Identity Management System (@computer systems analyst's labor rate of $86.72/hr.) (p): $86.72
  • Estimated Cost Per Respondent to Submit Test Data File to CMS (@computer systems analyst's labor rate of $86.72/hr.) (q): $86.72
  • Estimated Cost Per Respondent to Submit Quality Data (@computer systems analyst's labor rate of $86.72/hr.) (r): $173.44
  • Estimated Cost Practice Administrator Review Measure Specifications (@practice administrator's labor rate of $101.98/hr.) (s): $305.94
  • Estimated Cost Computer Systems Analyst Review Measure Specifications (@computer systems analyst's labor rate of $86.72/hr.) (t): $86.72
  • Estimated Cost LPN Review Measure Specifications (@LPN's labor rate of $42.34/hr.) (u): $42.34
  • Estimated Cost Billing Clerk Review Measure Specifications (@clerk's labor rate of $35.20/hr.) (v): $35.20
  • Estimated Cost Physician Review Measure Specifications (@physician's labor rate of $194.66/hr.) (w): $389.32
  • Estimated Total Annual Cost Per Respondent (x) = (p) + (q) + (r) + (s) + (t) + (u) + (v) + (w): $1,206.40
  • Estimated Total Annual Burden Cost (y) = (d) * (x): $63,251,552
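    As an illustration only, and not as part of the rule, the following Python sketch reproduces the per-respondent and total figures in Table 46 from the assumptions stated above.

        # Illustrative sketch only: EHR submission burden per respondent and in total (Table 46).
        individuals = 51_527
        groups = 903
        respondents = individuals + groups        # 52,430 respondents

        hours_per_respondent = 1 + 1 + 2 + 8      # identity account + test file + data file + 8-hr review
        cost_per_respondent = (1 * 86.72          # obtain identity management account
                               + 1 * 86.72        # submit test data file
                               + 2 * 86.72        # submit MIPS quality data file
                               + 859.52)          # 8-hour measure-specification review

        print(f"{respondents:,} respondents, {hours_per_respondent} hours each")
        print(f"{respondents * hours_per_respondent:,} total annual burden hours")    # 629,160
        print(f"${cost_per_respondent:.2f} per respondent")                           # $1,206.40
        print(f"${respondents * cost_per_respondent:,.2f} total annual burden cost")  # $63,251,552.00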

    We did not receive comments specific to the EHR submission burden. We have updated the numbers to reflect updates based on 2015 data and to reflect new assumptions on the staff time required to review measure specifications, but no other changes were made.

    4. Burden for Quality Data Submission via CMS Web Interface

    Based on 2015 PQRS data and 2016 Shared Savings Program and Next Generation ACO participation data, we assume that 750 organizations will submit quality data via the CMS Web Interface in the 2017 performance period (299 groups, 433 Shared Savings Program ACOs, and 18 Next Generation ACOs). Approximately 272,369 clinicians will be represented (107,884 clinicians not participating in ACOs, 140,341 Shared Savings Program participants, and 24,144 Next Generation ACO participants). Groups interested in participating in MIPS using the CMS Web Interface must complete a registration process, whereas Shared Savings Program ACOs and Next Generation ACOs do not need to complete a separate registration process. We estimate that the registration process for groups under MIPS involves approximately 1 hour of administrative staff time per group. The weighted average of the time required to register for the CMS Web Interface across all organizations is 0.40 hours (1 hour for each of the 299 groups and zero hours for each of the 433 Shared Savings Program ACOs and 18 Next Generation ACOs). We assume that a billing clerk will be responsible for registering the group and that, therefore, this process has an average labor cost of $35.20 per hour. Therefore, averaged across all 750 organizations, we estimate the cost associated with the group registration process to be approximately $14.08 per organization ($35.20 per hour × 0.40 hours per organization).

    The burden associated with the group submission requirements under the CMS Web Interface is the time and effort associated with submitting data on a sample of the organization's beneficiaries that is prepopulated in the CMS Web Interface. Based on experience with the PQRS GPRO Web Interface submission mechanism, we estimate that, on average, it will take each group 79 hours of a computer systems analyst's time to submit quality measures data via the CMS Web Interface at a cost of $86.72 per hour, for a total cost of $6,850.88 (79 hours × $86.72/hour).

    Our estimate of 79 hours for submission includes the time needed for each group to populate data fields in the web interface with information on approximately 248 eligible assigned Medicare beneficiaries and then submit the data (CMS will partially pre-populate the CMS Web Interface with claims data from their Medicare Part A and B beneficiaries). The patient data can either be manually entered or uploaded into the CMS Web Interface via a standard file format, which can be populated by CEHRT. Because each group must provide data on 248 eligible assigned Medicare beneficiaries (or all eligible assigned Medicare beneficiaries if the pool of eligible assigned beneficiaries is less than 248), we are assuming that entering or uploading data for one Medicare beneficiary requires 19 minutes of a computer systems analyst's time (79 hours ÷ 248 patients).

    We also estimate that for each organization (group or ACO) submitting data, a clinician will need to spend 1 hour per year to review quality measure specifications, for a total cost of $194.66. The estimated time for reviewing quality measure specifications is lower than under the other quality submission mechanisms because the CMS Web Interface measures are very similar to the GPRO Web Interface measures used in the 2016 PQRS. As mentioned above, we estimate it will take an average of 0.40 hours for each organization to register to submit through the CMS Web Interface, for an average cost of approximately $14.03 per organization (based on the unrounded average of 299/750 hours × $35.20/hour). The cost of these 1.40 hours is included in the total estimated annual cost per organization of $7,059.57. The total annual burden hours are estimated to be 60,299 (750 organizations × 80.40 annual hours), and the total annual burden cost is estimated to be $5,294,680 (750 organizations × $7,059.57).

    Based on the assumptions discussed above we have calculated the following burden estimate for groups, Shared Savings Program ACOs, and Next Generation ACOs submitting to MIPS with the CMS Web Interface.

    Table 47—Burden Estimate for Quality Performance Category Group Submission via the CMS Web Interface

  • Estimated # of Eligible Group Practices (a): 750
  • Estimated # of Burden Hours Per Group Practice to Self-Nominate to Participate in MIPS Under the Group Reporting Option (b): 0.40
  • Estimated # of Burden Hours Per Group to Report (c): 79
  • Estimated # of Burden Hours for Physician Familiarizing Self with MIPS Measures (d): 1
  • Estimated Total Annual Burden Hours Per Group (e) = (b) + (c) + (d): 80.40
  • Estimated Total Annual Burden Hours (f) = (a) * (e): 60,299
  • Estimated Cost Per Group Practice to Self-Nominate to Participate in MIPS Under the Group Reporting Option (@clerk's labor rate of $35.20/hr.) (g): $14.08
  • Estimated Cost Per Group to Report (@computer systems analyst's labor rate of $86.72/hr.) (h): $6,850.88
  • Estimated Cost for Physician Familiarizing Self with MIPS Measures (@physician's labor rate of $194.66/hr.) (i): $194.66
  • Estimated Total Annual Cost Per Group (j) = (g) + (h) + (i): $7,059.57
  • Estimated Total Annual Burden Cost (k) = (a) * (j): $5,294,680

    By Provider:
  • Estimated # of Participating Eligible Professionals (l): 272,369
  • Average Burden Hours Per Eligible Professional (m) = (f) ÷ (l): 0.22
  • Estimated Cost Per Eligible Professional to Submit Quality Data (n) = (k) ÷ (l): $19
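    The CMS Web Interface arithmetic above can be reproduced with the following illustrative Python sketch (not part of the rule); because registration applies only to the 299 groups, the sketch uses the unrounded average registration time (299/750 hours), which is why its rounded outputs match the published totals in Table 47.

        # Illustrative sketch only: CMS Web Interface submission burden (Table 47).
        groups_registering = 299
        shared_savings_acos = 433
        next_gen_acos = 18
        organizations = groups_registering + shared_savings_acos + next_gen_acos   # 750

        avg_registration_hours = groups_registering / organizations   # ~0.40 hr; only groups register
        submit_hours = 79                                              # populate and submit ~248 beneficiaries
        review_hours = 1                                               # physician reviews Web Interface measures

        minutes_per_beneficiary = submit_hours / 248 * 60              # ~19 minutes per beneficiary
        cost_per_org = (avg_registration_hours * 35.20                 # billing clerk registration
                        + submit_hours * 86.72                         # computer systems analyst submission
                        + review_hours * 194.66)                       # physician review

        print(f"{avg_registration_hours:.2f} average registration hours per organization")
        print(f"{minutes_per_beneficiary:.0f} minutes per beneficiary")
        print(f"${cost_per_org:,.2f} per organization")                                              # $7,059.57
        print(f"{organizations * (avg_registration_hours + submit_hours + review_hours):,.0f} total hours")  # 60,299
        print(f"${organizations * cost_per_org:,.0f} total annual burden cost")                      # $5,294,680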

    We did not receive comments specific to the Web Interface submission reporting burden. We have updated the numbers to reflect updates based on 2015 data, but no other edits were made.

    D. ICRs Regarding Burden for Third Party Reporting and Data Validation (§ 414.1400 and § 414.1390)

    1. Burden for Qualified Registry and QCDR Self-Nomination 42

    42 We do not anticipate any changes in the CEHRT process for health IT vendors as we transition to MIPS. Hence, health IT vendors are not included in the burden estimates for MIPS.

    For CY 2016, 114 qualified registries and 69 QCDRs were qualified to report quality measures data to CMS for purposes of the PQRS, an increase from 98 qualified registries and 49 QCDRs in CY 2015.43 Under MIPS, we believe that the number of QCDRs and qualified registries will continue to increase because (1) many MIPS eligible clinicians will be able to use a qualified registry or QCDR for all MIPS submissions (not just quality submissions) and (2) QCDRs will be able to provide innovative measures that address practice needs. Qualified registries or QCDRs interested in submitting quality measures results and numerator and denominator data on quality measures to CMS on their participants' behalf will need to complete a self-nomination process in order to be considered qualified to submit on behalf of MIPS eligible clinicians or groups, unless the qualified registry or QCDR was qualified to submit on behalf of MIPS eligible clinicians or groups for prior program years and did so successfully.

    43 The full list of qualified registries for 2016 is available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/PQRS/Downloads/2016QualifiedRegistries.pdf and the full list of QCDRs is available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/PQRS/Downloads/2016QCDRPosting.pdf.

    We estimate that the self-nomination process for qualifying additional qualified registries or QCDRs to submit on behalf of MIPS eligible clinicians or groups for MIPS will involve approximately 1 hour per qualified registry or QCDR to complete the online self-nomination process.44

    44 Appendix D of the MIPS Paperwork Reduction Act package is a screen shot of the online self-nomination form for qualified registries and QCDRs.

    Please note that the self-nomination statement will be submitted by email to CMS or, if technically feasible, via an online form that organizations will use to provide information on their business. We estimate that either mechanism will require the same amount of time for respondents.

    In addition to completing a self-nomination statement, qualified registries and QCDRs will need to perform various other functions, such as meeting with CMS officials when additional information is needed. In addition, QCDRs must benchmark and calculate their measure results. The time it takes to perform these functions may vary depending on the sophistication of the entity, but we estimate that a qualified registry or QCDR will spend an additional 9 hours performing various other functions related to being a MIPS qualified registry or QCDR.

    We estimate that the staff involved in the qualified registry or QCDR self-nomination process will mainly be computer systems analysts or their equivalent, who have an average labor cost of $86.72/hour. Assuming the total burden per qualified registry or QCDR associated with the self-nomination process is 10 hours, the total annual burden hours are 1,830 (183 QCDRs or qualified registries × 10 hours). We estimate that the total cost to a qualified registry or QCDR associated with the self-nomination process will be approximately $867.20 ($86.72 per hour × 10 hours per qualified registry or QCDR). We also estimate that 183 new qualified registries or QCDRs will go through the self-nomination process, leading to a total burden of $158,697.60 ($867.20 × 183).

    The burden associated with the qualified registry and QCDR submission requirements in MIPS will be the time and effort associated with calculating quality measure results from the data submitted to the qualified registry or QCDR by its participants and submitting these results, the numerator and denominator data on quality measures, the advancing care information performance category data, and improvement activities data to CMS on behalf of their participants. We expect that the time needed for a qualified registry or QCDR to accomplish these tasks will vary with the number of MIPS eligible clinicians submitting data to it and the number of applicable measures. However, we believe that qualified registries and QCDRs already perform many of these activities for their participants. We believe the estimate above represents the upper bound of QCDR burden, with the potential for less additional MIPS burden if the QCDR already provides similar data submission services.

    Based on the assumptions previously discussed, we provide an estimate of total annual burden hours and total annual cost burden associated with a qualified registry or QCDR self-nominating to be considered “qualified” for the purpose of submitting quality measures results and numerator and denominator data on MIPS eligible clinicians.

    Table 48—Burden Estimate for QCDR and Registry Self-Nomination

  • Estimated # of Qualified Registries or QCDRs Self-Nominating for MIPS (a): 183
  • Estimated Total Annual Burden Hours Per Qualified Registry or QCDR (b): 10
  • Estimated Total Annual Burden Hours for Qualified Registries or QCDRs (c) = (a) * (b): 1,830
  • Estimated Cost Per Qualified Registry or QCDR (@ computer systems analyst's labor rate of $86.72/hr.) (d): $867.20
  • Estimated Total Annual Burden Cost for Qualified Registries or QCDRs (e) = (a) * (d): $158,698
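    The self-nomination figures in Table 48 follow directly from the assumptions above, as the short illustrative calculation below shows (variable names are invented; this is not part of the rule's methodology).

```python
# Illustrative recomputation of Table 48 (QCDR/qualified registry self-nomination burden).
entities = 183                 # (a) qualified registries or QCDRs expected to self-nominate
hours_per_entity = 10          # (b) 1 hour for the online form plus 9 hours of related functions
analyst_rate = 86.72           # computer systems analyst hourly labor cost

total_hours = entities * hours_per_entity           # (c): 1,830 hours
cost_per_entity = hours_per_entity * analyst_rate   # (d): $867.20
total_cost = entities * cost_per_entity             # (e): $158,697.60, shown as $158,698

print(total_hours, round(cost_per_entity, 2), round(total_cost, 2))
```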

    With regard to the QCDR and registry self-nomination and data submission, we did not receive any public comments regarding the proposed requirements or burden and are adopting them without change.

    2. Burden for MIPS Data Validation Survey

    Under MIPS, a CMS contractor will conduct the MIPS Data Validation Survey in order to identify and address problems with data handling, data accuracy, and incorrect payments. The survey will be part of a broader MIPS strategy to combine our past program integrity processes, including the data validation process used in PQRS and the auditing process used in the Medicare EHR Incentive Program, into one set of requirements for MIPS eligible clinicians and groups, which we refer to as “data validation and auditing”.

    Because the data submitted to CMS by, or on behalf of, MIPS eligible clinicians will be used to calculate payment adjustments, it is critical that these data be accurate. Additionally, the data will be used to generate performance feedback for MIPS eligible clinicians and groups and, in some cases, will be posted publicly on the CMS Web site. This further supports the need for accurate and complete data. The CMS data validation contractor will conduct surveys of groups, registries, QCDRs, health IT vendors, and MIPS eligible clinicians in support of evaluating the data submitted for MIPS. The survey will be similar to the PQRS Data Validation Survey, which uses a series of approximately 30 questions, arranged by category, to gather information about data handling practices, training, quality assurance, and the challenges that stakeholders face as part of PQRS participation. Under MIPS, the survey's topics will be expanded beyond validation of quality measures to include improvement activities and potentially advancing care information performance category data.

    The MIPS Data Validation Survey for performance period 2017 will be conducted in late 2018 for data reported in early 2018. Because the MIPS verification process is still under development, the precise sample size for respondents has not yet been determined. We anticipate that at most 500 organizations would be contacted for MIPS data verification for performance period 2017. Based on the most recent year of the PQRS Data Validation Survey, we assume that the response rate will be 86 percent. Hence, we estimate the total number of respondents for performance period 2017 will be 430 (500 organizations contacted × 86 percent response rate).

    We estimate the total annual burden for the ongoing MIPS data validation survey will be up to 645 hours each performance period (430 responses × 1.5 hours). We assume responses will be completed at a billing clerk's labor rate of $35.20 per hour, for a total burden cost of $22,704 ($35.20/hour × 1.5 hours × 430 responses).

    Table 49—Total Estimated Burden for MIPS Data Validation Survey

  • Respondents: 430
  • Responses: 430
  • Burden per response (hours): 1.5
  • Total annual burden (hours): 645
  • Hourly labor cost ($): $35.20
  • Total burden cost ($): $22,704
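    The data validation figures in Table 49 can be reproduced as follows (an illustrative calculation only; variable names are invented).

```python
# Illustrative recomputation of Table 49 (MIPS Data Validation Survey burden).
contacted = 500            # organizations contacted for data verification
response_rate = 0.86       # assumed response rate from the most recent PQRS Data Validation Survey
hours_per_response = 1.5
clerk_rate = 35.20         # billing clerk hourly labor cost

respondents = round(contacted * response_rate)   # 430 responses
total_hours = respondents * hours_per_response   # 645 hours
total_cost = total_hours * clerk_rate            # $22,704

print(respondents, total_hours, round(total_cost, 2))
```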

    With regard to the MIPS data validation survey, we did not receive any public comments regarding the proposed requirements or burden and are adopting them without change.

    E. Burden for Quality Data Submission via CAHPS for MIPS Survey

    Under MIPS, groups of two or more clinicians can elect to contract with a CMS-approved survey vendor and use the CAHPS for MIPS survey as one of their six required quality measures. Beneficiaries will experience burden under the CAHPS for MIPS Survey.

    The usual practice in estimating the burden on public respondents to surveys such as CAHPS is to assume that respondent time is valued, on average, at civilian wage rates. As previously explained, the BLS data show the average hourly wage for civilians in all occupations to be $23.23. Although most Medicare beneficiaries are retired, we believe that the value of their time is unlikely to depart significantly from their prior earnings experience, and we have used the average hourly wage to compute the dollar cost estimate for these burden hours.

    Under the first performance period of MIPS, we assume that 461 groups will elect to report on the CAHPS for MIPS survey, which is equal to the number of groups reporting via CAHPS for the PQRS in 2014. Table 50 shows the estimated annualized burden for beneficiaries to participate in the CAHPS for MIPS Survey. Based on historical information on the numbers of CAHPS for PQRS survey respondents, we assume that an average of 287 beneficiaries will respond per group. Therefore, the CAHPS for MIPS survey will be administered to approximately 132,307 beneficiaries per year (461 groups × an average of 287 beneficiaries per group responding). The survey contains 81 items and is estimated to require an average administration time of 18.0 minutes in English (at a pace of 4.5 items per minute) and 21.6 minutes in Spanish (assuming 20 percent more words in the Spanish translation), for an average response time of 19.8 minutes, or 0.33 hours. These burden and pace estimates are based on CMS's experience with surveys of similar length that were fielded with Medicare beneficiaries. Given that we expect approximately 132,307 respondents per year, the annual total burden hours are estimated to be 43,661 hours (132,307 respondents × 0.33 burden hours per respondent). The estimated total annual burden cost is $1,014,252 (43,661 total burden hours × $23.23 per hour).

    Table 50—Burden Estimate for Beneficiary Participation in CAHPS for MIPS Survey

  • Estimated # of Eligible Group Practices Administering CAHPS for MIPS Survey (a): 461
  • Estimated # of Beneficiaries Per Group Responding to Survey (b): 287
  • Estimated # of Total Respondents Completing Survey (c): 132,307
  • Estimated # of Burden Hours Per Respondent to Report (d): 0.33
  • Estimated Cost Per Beneficiary Reporting (@ labor rate of $23.23/hr.) (e): $7.67
  • Estimated Total Annual Burden Hours (f) = (c) * (d): 43,661
  • Estimated Total Annual Burden Cost for Beneficiaries Responding to CAHPS for MIPS (g) = (c) * (e): $1,014,252
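    The CAHPS for MIPS beneficiary burden in Table 50 follows from the assumptions above, as illustrated below. This is an illustrative calculation only; the cost figure is computed from unrounded burden hours, which is how the rule arrives at $1,014,252.

```python
# Illustrative recomputation of Table 50 (beneficiary burden for the CAHPS for MIPS survey).
groups = 461                       # groups expected to administer the survey
respondents_per_group = 287        # average beneficiary respondents per group
items, items_per_minute = 81, 4.5  # survey length and assumed English completion pace
civilian_wage = 23.23              # average hourly wage for civilians in all occupations (BLS)

english_minutes = items / items_per_minute                  # 18.0 minutes
spanish_minutes = english_minutes * 1.2                     # 21.6 minutes (20 percent more words assumed)
avg_hours = (english_minutes + spanish_minutes) / 2 / 60    # 19.8 minutes, or 0.33 hours

respondents = groups * respondents_per_group      # 132,307 beneficiaries
total_hours = respondents * avg_hours             # ~43,661 hours
total_cost = total_hours * civilian_wage          # ~$1,014,252

print(respondents, round(total_hours), round(total_cost))
```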

    With regard to the CAHPS for MIPS Survey, we did not receive any public comments regarding the proposed requirements or burden. We are updating the CAHPS burden estimates to reflect 2015 data, but no further changes were made.

    F. ICRs Regarding Burden Estimate for Advancing Care Information Data (§ 414.1375)

    During the transition year, clinicians and groups can submit advancing care information data through qualified registry, QCDR, EHR, CMS Web Interface, and attestation data submission methods. Also, we have streamlined the submission requirements for advancing care information under the MIPS. Compared to the reporting requirements in the 2015 Medicare EHR Incentive Program Final Rule, two objectives and their associated measures (Clinical Decision Support and Computerized Provider Order Entry) will no longer be required for submission purposes. We have also worked to align the advancing care information performance category with other MIPS performance categories, such as submitting eCQMs to the quality category, which will streamline submission requirements and reduce MIPS eligible clinician confusion. In addition, as part of our efforts to align and streamline submission requirements, we are providing a group reporting option (which did not exist under the Medicare EHR Incentive Program). Hence, a MIPS eligible clinician's estimated burden for the advancing care information performance category is lower than the estimated 7 hours per MIPS eligible clinician in the Medicare EHR Incentive Program —Stage 3 PRA (OMB control number 0938-1278) currently under review at OMB. We are requesting that effective January 1, 2017, the MIPS Collection of Information Requirements replace those for eligible clinicians in the Medicare EHR Incentive Program Stage 3 PRA.45

    45 We do not anticipate any changes in the CEHRT process for EHR vendors as we transition to MIPS. Hence, EHR vendors are not included in these burden estimates.

    As noted above in section B, billing TINs may report advancing care information performance category data on behalf of MIPS eligible clinicians in MIPS APMs, or, except for participants in the Shared Savings Program, MIPS eligible clinicians in MIPS APMs may report advancing care information performance category data individually. Because billing TINs in APM Entities will report advancing care information performance category data to fulfill the requirements of submitting to MIPS, we have included MIPS APMs in our burden estimate for the advancing care information performance category. Consistent with the proposed list of APMs that are MIPS APMs in the proposed rule, we assume that three MIPS APMs that do not also qualify as Advanced APMs will operate in the first performance period: Track 1 of the Shared Savings Program, CEC (one-sided risk arrangement), and OCM (one-sided risk arrangement).

    Table 51—Estimated Numbers of Organizations Submitting Advancing Care Information Performance Category Data on Behalf of Eligible Clinicians

  • MIPS Eligible Clinicians (not in APMs). Available mechanisms for submission: as groups or individuals. Estimated number of organizations submitting data: 503,457 clinicians submitting as individuals; 3,880 groups submitting on behalf of 194,192 clinicians.
  • MIPS Eligible Clinicians participating in the Shared Savings Program. Available mechanisms for submission: each TIN in the APM Entity group reports advancing care information to MIPS through group TIN reporting. Estimated number of organizations submitting data: 14,384 billing TINs representing 140,341 participants in 433 Shared Savings Program ACOs.
  • MIPS Eligible Clinicians participating in MIPS APMs other than the Shared Savings Program. Available mechanisms for submission: each MIPS eligible clinician in the APM Entity group reports advancing care information to MIPS through either group TIN or individual reporting [the burden estimates assume TIN-level reporting]. Estimated number of organizations submitting data: 33 billing TINs representing 1 APM Entity in CEC (non-LDO arrangement); 6,478 billing TINs representing 195 APM Entities in OCM (one-sided risk arrangement).
  • Total Number of Organizations and Individuals Submitting Data: 528,231 respondents.

    Because performance year 2017 will be the first year for clinicians to report advancing care information performance category data as groups, there is considerable uncertainty about the number of clinicians that will report as part of a group. Given the limitations of historical 2015 EHR Incentive Program data, some of our burden estimate's assumptions are based on 2015 PQRS data. Specifically, we assume that the number of individual clinicians and groups submitting advancing care information data will be the same as the number of individual clinicians and groups submitting data under the 2015 PQRS. Hence, we assume 503,457 clinicians will submit as individuals and that 3,880 groups will submit data on behalf of 194,192 clinicians. Further, we anticipate that the 433 Shared Savings Program ACOs will submit data at the ACO participant billing TIN level, for a total of 14,384 billing TINs representing 140,341 participants. We anticipate that the APM Entity in the CEC model one-sided risk arrangement (at the time of publication, there is only one APM Entity in this track) will submit data at the billing TIN level, for an estimated total of 33 billing TINs submitting data. Finally, we anticipate that the 195 APM Entities in the OCM one-sided risk arrangement will submit at the billing TIN level, for an estimated 6,478 billing TINs submitting data. Hence, as shown in Table 51, we estimate that up to approximately 528,231 respondents will be submitting data under the advancing care information performance category (503,457 MIPS eligible clinicians + 3,880 groups submitting on behalf of clinicians + 14,384 billing TINs within the Shared Savings Program ACOs + 33 billing TINs within the APM Entity participating in the CEC one-sided risk arrangement + 6,478 billing TINs within the OCM one-sided risk arrangement). The total burden hours for a clinician or group to report on the specified Advancing Care Information Objectives and Measures will be 3 hours. The total estimated burden hours are 1,584,694 (528,231 responses × 3 hours). At a clinician's hourly rate, the total burden cost is $308,476,511 (1,584,694 hours × $194.66/hour).

    Table 52—Total Estimated Burden for Advancing Care Information Performance Category Data Submission

  • Respondents: 528,231
  • Responses: 528,231
  • Burden per response (hours): 3
  • Total annual burden (hours): 1,584,694
  • Hourly labor cost ($): $194.66
  • Total burden cost ($): $308,476,511
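    The advancing care information totals in Tables 51 and 52 can be approximated from the respondent counts above, as in the illustrative calculation below. The published totals reflect rounding in the underlying data, so the recomputed figures differ slightly from Table 52.

```python
# Illustrative recomputation of the advancing care information burden (Tables 51 and 52).
individuals = 503_457   # clinicians submitting as individuals
groups = 3_880          # groups submitting on behalf of clinicians
ssp_tins = 14_384       # billing TINs in Shared Savings Program ACOs
cec_tins = 33           # billing TINs in the CEC one-sided risk arrangement
ocm_tins = 6_478        # billing TINs in the OCM one-sided risk arrangement
hours_per_respondent = 3
physician_rate = 194.66

respondents = individuals + groups + ssp_tins + cec_tins + ocm_tins  # 528,232; Table 52 shows 528,231
total_hours = respondents * hours_per_respondent                     # ~1.58 million hours; Table 52 shows 1,584,694
total_cost = total_hours * physician_rate                            # ~$308.5 million; Table 52 shows $308,476,511

print(respondents, total_hours, round(total_cost))
```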

    The following is a summary of the comments we received regarding our burden estimate for the advancing care information performance category.

    Comment: Two commenters noted that group reporting under advancing care information and other categories would reduce reporting burden.

    Response: As noted above, there is considerable uncertainty about the number of MIPS eligible clinicians who will report as part of a group, and no historical data on group reporting for the EHR Incentive Program. We have revised our burden estimate to more appropriately reflect the reduction in burden due to group reporting by assuming that groups that submitted quality data to the 2015 PQRS would also do so under the advancing care information performance category. We assume that the burden of advancing care information data submission is the same for each respondent, whether that respondent is a group, individual clinician, or billing TIN in a MIPS APM. In the proposed rule, we assumed that all MIPS eligible clinicians not in MIPS APMs would report as individuals. Due to the change in our assumptions about group reporting, our estimated burden for advancing care information is lower than in the proposed rule.

    Comment: Two commenters noted that the removal of redundant eCQMs in the advancing care information category would reduce burden.

    Response: As noted above, our efforts to align the advancing care information performance category with other MIPS performance categories, such as submitting eCQMs to the quality category, will streamline submission requirements and reduce confusion for MIPS eligible clinicians. Consistent with the reduction in measures, we have reduced our burden estimates for the advancing care information performance category from the proposed 4 hours to 3 hours per respondent. Note that the estimated burden of 3 hours is lower than the estimated 7 hours per clinician in the Medicare EHR Incentive Program—Stage 3 Paperwork Reduction Act Package.46 After the transition year, we anticipate a further reduction in the burden of submitting advancing care information measures as MIPS eligible clinicians and organizations submitting data on their behalf become more familiar with and have adapted to the measure specifications.

    46 The Medicare EHR Incentive Program—Stage 3 Paperwork Reduction Act (PRA) package is available at https://www.cms.gov/Regulations-and-Guidance/Legislation/PaperworkReductionActof1995/PRA-Listing.html.

    Comment: Several commenters stated that the burden estimates in the Collection of Information section of the proposed rule were too low because MIPS eligible clinicians would require extensive time to become familiar with the program, including the advancing care information performance category, in the transition year.

    Response: In response to public comments on the advancing care information performance category, we have reduced the number of required measures from 11 to five. Accordingly, we have reduced our burden estimates for the advancing care information performance category from the proposed 4 hours to 3 hours per respondent. After the transition year, we anticipate a reduction in the burden of reporting advancing care information measures as MIPS eligible clinicians and organizations reporting on their behalf become more familiar with and have adapted to the measure specifications.

    In summary, we have modified our advancing care information data submission requirements in response to public comment, and reduced the corresponding burden estimates as compared to the proposal. In response to public comments, we have also adjusted our estimates to more accurately reflect the burden due to group reporting. Further, the burden estimates have been revised to reflect changes in advancing care information data submission requirements for APM Entities under the APM scoring standard between the proposal and the final rule, and to incorporate updated data on wages, PQRS, and counts of ACOs and their participants.

    G. ICRs Regarding Burden for Improvement Activities Submission (§§ 414.1355 and 414.1365)

    Requirements for submitting improvement activities are new, and we do not have historical data that is directly relevant. As noted in section II.E.F of this final rule with comment period, a variety of organizations and, in some cases, individual clinicians will report improvement activities performance category data. For clinicians who are not part of APMs, we assume that the number of clinicians submitting improvement activities as part of a group will be approximately the same as the number of clinicians submitting PQRS data as part of a group through the QCDR and registry, EHR, and GPRO Web Interface submission mechanisms in 2015. As noted above, MIPS eligible clinicians participating in MIPS APMs do not need to report improvement activities data unless the CMS-assigned improvement activities score is below the maximum improvement activities score. We estimate that there could be as many as 503,457 clinicians submitting improvement activities performance category data as individuals, which is equal to the number of clinicians submitting as individuals using the claims, QCDR or qualified registry, or EHR submission mechanisms under the 2015 PQRS.47 We estimate that approximately 194,192 clinicians comprising 3,880 groups may submit at the group level. The burden estimates assume no improvement activities reporting burden for MIPS APM participants. CMS will assign the improvement activities performance category score at the APM level; each APM Entity within the same MIPS APM will be assigned the same score.

    47 Because of the lack of historical data on improvement activities submission, our estimate of 595,100 eligible clinicians submitting improvement activities data is based on 2014 PQRS historical data (595,100 eligible clinicians = 299,169 eligible clinicians submitting quality data through claims + 214,590 eligible clinicians submitting quality data through QCDR or qualified registry + 77,241 eligible clinicians submitting quality data through EHR).

    Table 53—Estimated Numbers of Organizations Submitting Improvement Activities Performance Category Data on Behalf of Eligible Clinicians

  • MIPS Eligible Clinicians (not in APMs). Available mechanisms for submission: as groups or individuals. Estimated number of entities submitting data: 3,880 groups representing 302,076 eligible clinicians; 503,457 eligible clinicians submitting individually.
  • MIPS APM participants. Available mechanisms for submission: no reporting burden. Estimated number of entities submitting data: 0.

    During the transition year, clinicians and groups can submit data via qualified registry, QCDR, EHR, CMS Web Interface, or attestation data submission mechanisms. In addition to collecting necessary supporting documentation, each clinician and group will provide a yes/no attestation, submitted during the data submission period, for successfully completed improvement activities. We estimate that up to approximately 507,337 groups or individuals (3,880 groups + 503,457 individual clinicians) will be submitting data for improvement activities. We estimate it will take no longer than 2 hours per group or individual to submit data for the improvement activities performance category. The total estimated burden is 1,014,674 hours (507,337 groups or individuals × 2 hours each). At a physician's hourly rate, the total estimated burden cost is $197,516,441 (1,014,674 hours × $194.66).

    Table 54—Total Estimated Burden for Improvement Activities Submission

  • Respondents: 507,337
  • Responses: 507,337
  • Burden per response (hours): 2
  • Total annual burden (hours): 1,014,674
  • Hourly labor cost ($): $194.66
  • Total burden cost ($): $197,516,441
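    The improvement activities totals in Table 54 follow from the group and individual counts discussed above, as the illustrative calculation below shows (not part of the rule's methodology).

```python
# Illustrative recomputation of Table 54 (improvement activities submission burden).
groups = 3_880               # groups submitting on behalf of clinicians
individuals = 503_457        # clinicians submitting as individuals
hours_per_respondent = 2     # yes/no attestation plus supporting documentation
physician_rate = 194.66

respondents = groups + individuals                 # 507,337 groups or individuals
total_hours = respondents * hours_per_respondent   # 1,014,674 hours
total_cost = total_hours * physician_rate          # ~$197,516,441

print(respondents, total_hours, round(total_cost))
```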

    We received comments regarding the improvement activities submission burden estimates.

    Comment: Several commenters believed that the burden estimates were too low because MIPS eligible clinicians would require extensive time to become familiar with the program in the transition year.

    Response: In response to public comments on the improvement activities performance category, we have reduced the number of recommended improvement activities from six to four. Consistent with that reduction, we have reduced our estimate of the data submission burden in this final rule with comment period to 2 hours from the 3 hours estimated in the proposed rule.

    We have also simplified the improvement activities data submission requirements for MIPS APM participants. The proposal was to require individual MIPS eligible clinicians participating in MIPS APMs to submit improvement activities data. Under the policies finalized in this final rule with comment period, MIPS APM participants will not be required to submit improvement activities data because CMS will assign the score at the MIPS APM level. As noted above, APM Entities in MIPS APMs may submit improvement activities data if the CMS-assigned improvement activities score is below the maximum improvement activities score.

    In summary, we have simplified the improvement activities submission requirements in response to public comments. We have updated the improvement activities burden estimate to reflect the updated data submission requirements and 2015 data, and to more accurately reflect the proportion of clinicians that will submit data as groups.

    H. ICRs Regarding Burden for Cost (§ 414.1350)

    The cost performance category relies on administrative claims data. For claims-based submission, the Medicare Parts A and B claims submission process is used to collect data on resource measures from MIPS eligible clinicians. MIPS eligible clinicians are not asked to provide any documentation by CD or hard copy. Therefore, under the cost performance category, we do not anticipate any new or additional submission requirements for MIPS eligible clinicians.

    I. ICR Regarding Partial QP Elections for Advanced APMs

    In the proposed rule, we discussed the MIPS-related submission requirements for participants in MIPS APMs. Advanced APM Entities may face an additional submission requirement under MIPS related to Partial QP elections. The final rule has changed the timing of when eligible clinicians in Advanced APMs receive notification about their Partial QP status, which reduced the burden estimates. Under the revised policy set forth in this final rule with comment period, Advanced APM participants will be notified about their QP or Partial QP status before the end of the performance period, whereas in the proposed rule, Advanced APM participants would not have been notified of their QP or Partial QP status until after the end of the submission period. If an Advanced APM Entity is notified its eligible clinicians are determined as a group to be Partial QPs, a representative from the Advanced APM Entity will log into the MIPS portal to indicate whether MIPS eligible clinicians determined to be Partial QPs wish to participate in MIPS.48 Our analyses of 2014 data indicate that nearly all Advanced APM participants would meet the QP threshold, and that no participants would be determined as a group to be Partial QPs. Hence, we assume that no Advanced APM Entities will face the data submission requirement in the 2017 performance period.

    48 If the Advanced APM Entity or CJR model participant chooses not to make the election, the default is for the clinicians meeting the partial QP threshold to opt out of MIPS.

    In addition, Affiliated Practitioners participating as gainsharers in the CJR model and assessed individually for purposes of the QP determination may face a data submission requirement for Partial QP elections. Under the proposed rule, we did not discuss the CJR model as potentially contributing to the burden for Partial QP elections. However, CMS has recently proposed changes to the CJR model in the proposed Advancing Care Coordination Through Episode Payment Models rule (81 FR 50794 through 28364) that, if finalized, would allow the CJR model to meet the Advanced APM criteria. Because CMS will assess Affiliated Practitioners in the CJR model individually, Affiliated Practitioners must make a Partial QP election at the individual eligible clinician level if they are determined to be Partial QPs. We also estimate that CJR participants are much more likely to be Partial QPs than participants in other Advanced APMs. We therefore estimate that up to 12,800 individual participants in the CJR model may submit partial QP election data.

    We estimate it will take each Advanced APM Entity representative or CJR model participant 15 minutes to make this election, and an additional 15 minutes to register for the MIPS Portal. As noted above, we assume that 12,800 participants in the CJR model, and no Advanced APM Entities, will make this election on the MIPS Portal, for a total burden estimate of 6,400 hours (12,800 participants × 0.5 hours). At a computer systems analyst's hourly labor cost, the total burden cost of these elections is collectively estimated to be $555,008 (6,400 hours × $86.72/hour).

    We did not receive any comments on the Partial QP election burden estimates. As noted above, we are adopting changes in the Partial QP burden estimates that reflect policy changes between the proposed rule and this final rule with comment period, as well as the Advancing Care Coordination Through Episode Payment Models proposed rule that, if finalized, would create a new Advanced APM.

    Table 55—Total Estimated Burden for Partial QP Election

  • Respondents: 12,800
  • Responses: 12,800
  • Burden per response (hours): 0.5
  • Total annual burden (hours): 6,400
  • Hourly labor cost ($): $86.72
  • Total burden cost ($): $555,008
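    The Partial QP election figures in Table 55 can be reproduced as follows (an illustrative calculation only; variable names are invented).

```python
# Illustrative recomputation of Table 55 (Partial QP election burden).
cjr_participants = 12_800    # CJR model participants assumed to make the election
hours_each = 0.25 + 0.25     # 15 minutes to register for the MIPS Portal plus 15 minutes to elect
analyst_rate = 86.72         # computer systems analyst hourly labor cost

total_hours = cjr_participants * hours_each   # 6,400 hours
total_cost = total_hours * analyst_rate       # $555,008

print(total_hours, round(total_cost, 2))
```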
    J. Summary of Annual Burden Estimates

    The total gross burden estimate includes the total burden of recordkeeping and data submission under MIPS. Table 56 provides an estimate of the total annual burden of MIPS of 10,947,453 hours and a total labor cost of reporting of $1,311,245,806. Some of the information collection burden under MIPS does not represent an additional burden to the public, but replaces information collection burden that existed under two of its predecessor programs, the PQRS and the Medicare EHR Incentive Program. The estimated total existing burden approved for information collections related to the PQRS and the Medicare EHR Incentive Program (for EPs) was 11,954,112 hours, for a total labor cost of reporting of $1,318,689,857. The net burden estimate reflects only the incremental burden associated with this rule, and excludes the burden of existing recordkeeping and data submission under the PQRS, the Medicare EHR Incentive Program, CAHPS for PQRS, and PQRS Data Validation.49 Mindful of the combined data submission burden of MIPS, we have sought to avoid duplication of data submission efforts and have simplified data submission structures within the unified program. The streamlining and simplification of data submission structures is reflected in our net burden estimates, which show a net change of −1,006,658 burden hours and −$7,444,051 in labor cost of reporting compared to the existing information collections.

    49 The OMB control numbers for the previously approved data collections were as follows: PQRS (OCN 0938-1059), CAHPS for PQRS (OCN 0938-1222), PQRS Data Validation (OCN 0938-1255), and the Objectives/Measures (EP) ICR in the EHR Incentive Program Stage 3 PRA under review at OMB (OCN 0938-1278).

    Table 56—Proposed Annual Recordkeeping and Reporting Requirements

    [Columns: Section(s) in title 42 of the CFR and section of rule; Respondents; Responses; Burden per response (hours); Total annual burden (hours); Labor cost of reporting ($); Total annual burden cost ($)]

  • § 414.1330 and § 414.1335 (Quality Performance Category), Claims Submission Mechanism: Respondents: 332,729; Responses: 332,729; Burden per response: 18.8 hours; Total annual burden: 6,255,305 hours; Labor cost of reporting: varies (see Table 44); Total annual burden cost: $597,613,226.
  • § 414.1330 and § 414.1335 (Quality Performance Category), Qualified Registry or QCDR Submission Mechanisms: Respondents: 121,879; Responses: 121,879; Burden per response: 11.1 hours; Total annual burden: 1,350,785 hours; Labor cost of reporting: varies (see Table 45); Total annual burden cost: $137,342,735.
  • § 414.1330 and § 414.1335 (Quality Performance Category), EHR Submission Mechanism: Respondents: 52,430; Responses: 52,430; Burden per response: 12.0 hours; Total annual burden: 629,160 hours; Labor cost of reporting: varies (see Table 46); Total annual burden cost: $63,251,552.
  • § 414.1330 and § 414.1335 (Quality Performance Category), CMS Web Interface Submission Mechanism: Respondents: 750; Responses: 750; Burden per response: 80.4 hours; Total annual burden: 60,299 hours; Labor cost of reporting: varies (see Table 47); Total annual burden cost: $5,294,680.
  • § 414.1400 (QCDR and Registries), QCDR and qualified registry self-nomination: Respondents: 183; Responses: 183; Burden per response: 10.0 hours; Total annual burden: 1,830 hours; Labor cost of reporting: $86.72; Total annual burden cost: $158,698.
  • § 414.1390 (Data Validation and Auditing): Respondents: 430; Responses: 430; Burden per response: 1.5 hours; Total annual burden: 645 hours; Labor cost of reporting: $35.20; Total annual burden cost: $22,704.
  • § 414.1375 (Advancing Care Information Performance Category): Respondents: 528,231; Responses: 528,231; Burden per response: 3.0 hours; Total annual burden: 1,584,694 hours; Labor cost of reporting: $194.66; Total annual burden cost: $308,476,511.
  • § 414.1360 (Improvement Activities): Respondents: 507,337; Responses: 507,337; Burden per response: 2.0 hours; Total annual burden: 1,014,674 hours; Labor cost of reporting: $194.66; Total annual burden cost: $197,516,441.
  • § 414.1430 (Partial Qualifying APM Participant (QP) election): Respondents: 12,800; Responses: 12,800; Burden per response: 0.5 hours; Total annual burden: 6,400 hours; Labor cost of reporting: $86.72; Total annual burden cost: $555,008.
  • § 414.1400 (Quality Performance Category), CAHPS for MIPS: Respondents: 132,307; Responses: 132,307; Burden per response: 0.3 hours; Total annual burden: 43,661 hours; Labor cost of reporting: $23.23; Total annual burden cost: $1,014,252.
  • Total Gross Burden: Respondents/Responses: 1,689,076; Total annual burden: 10,947,453 hours; Total annual burden cost: $1,311,245,806.
  • Total Approved Burden Under Previous Programs: Respondents/Responses: 1,338,865; Total annual burden: 11,954,112 hours; Total annual burden cost: $1,318,689,857.
  • Total Net Burden: Respondents/Responses: 350,211; Total annual burden: −1,006,658 hours; Total annual burden cost: −$7,444,051.
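    The gross and net totals in Table 56 are sums of its rows and of the previously approved collections, as the illustrative calculation below shows; one-unit differences from the published totals reflect rounding within the individual estimates.

```python
# Illustrative recomputation of the Table 56 totals from its rows (not CMS methodology).
# Each tuple is (respondents, total annual burden hours, total annual burden cost in dollars).
rows = [
    (332_729, 6_255_305, 597_613_226),   # Quality: claims
    (121_879, 1_350_785, 137_342_735),   # Quality: qualified registry or QCDR
    (52_430,    629_160,  63_251_552),   # Quality: EHR
    (750,        60_299,   5_294_680),   # Quality: CMS Web Interface
    (183,         1_830,     158_698),   # QCDR and qualified registry self-nomination
    (430,           645,      22_704),   # Data validation and auditing
    (528_231, 1_584_694, 308_476_511),   # Advancing care information
    (507_337, 1_014_674, 197_516_441),   # Improvement activities
    (12_800,      6_400,     555_008),   # Partial QP election
    (132_307,    43_661,   1_014_252),   # CAHPS for MIPS
]
gross_respondents = sum(r[0] for r in rows)   # 1,689,076
gross_hours = sum(r[1] for r in rows)         # 10,947,453
gross_cost = sum(r[2] for r in rows)          # 1,311,245,807 (Table 56 shows 1,311,245,806)

prior_hours, prior_cost = 11_954_112, 1_318_689_857   # previously approved PQRS and EHR Incentive Program burden
net_hours = gross_hours - prior_hours                 # -1,006,659 (Table 56 shows -1,006,658)
net_cost = gross_cost - prior_cost                    # -7,444,050 (Table 56 shows -7,444,051)

print(gross_respondents, gross_hours, gross_cost, net_hours, net_cost)
```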

    We received one general comment regarding our calculations for the burden of the data submission requirements.

    Comment: One commenter requested that CMS provide time and cost estimates for reading educational materials and attending educational sessions, learning which of the new reporting requirements apply to each practice, and the costs for practices with CEHRT vs. those without CEHRT.

    Response: We agree that clinicians will need to review educational materials and attend outreach sessions to become familiar with the rule. We will use our extensive outreach efforts to improve clinician understanding to the greatest extent we can. The Regulatory Impact Analysis includes a general discussion of the potential costs to clinicians of meeting MIPS requirements. Because the Collection of Information section, by statute, discusses only the costs for submitting data, the costs of learning general information about the new requirements are not included. Hence, no changes were made to the burden estimate as a result of this comment.

    In summary, no changes were made to the rule as a result of general comments on the burden estimates.

    K. Submission of PRA-Related Comments

    We have submitted a copy of this rule's information collection and recordkeeping requirements to OMB for review and approval. The requirements are not effective until they have been approved by the OMB.

    To obtain copies of the supporting statement and any related forms for the proposed collections discussed above, please visit CMS's Web site at www.cms.hhs.gov/PaperworkReductionActof1995, or call the Reports Clearance Office at 410-786-1326.

    We invite public comments on these potential information collection requirements. If you wish to comment, please identify the rule (CMS-5517-FC) and submit your comments to the OMB desk officer via one of the following transmissions: Mail: OMB, Office of Information and Regulatory Affairs, Attention: CMS Desk Officer, Fax Number: 202-395-5806 OR, Email: [email protected]. ICR-related comments must be received on/by November 18, 2016.

    IV. Regulatory Impact Analysis

    A. Statement of Need

    This final rule with comment period is necessary to make payment and policy changes under the PFS and to make statutorily-required changes under the MACRA. The MACRA's enactment consolidated certain aspects of physician quality data submission and performance programs into the new Merit-based Incentive Payment System (MIPS), including using certified EHR technology (section 1848(o) of the Act), the PQRS (sections 1848(k) and (m) of the Act), and the VM (section 1848(p) of the Act). These programs have been developed and most recently implemented by us as the Medicare EHR Incentive Program (80 FR 62761), the PQRS (80 FR 71135), and the VM (80 FR 71273). The MACRA's enactment altered the Medicare EHR Incentive Program such that the existing Medicare payment adjustment for EPs under section 1848(a)(7)(A) of the Act will end in CY 2018. Similarly, the MACRA ends the separate PQRS in CY 2018 and provides for the inclusion of various aspects of PQRS in MIPS, and sunsets the VM, ending it in CY 2018 and establishing certain aspects of the VM as a component of MIPS in CY 2019. Finally, the MACRA introduces incentive payment to eligible clinicians who become Qualifying APM Participants (QPs) through participation in Advanced APMs.

    This consolidated program for MIPS eligible clinicians represents a new approach to the delivery of health care in this care setting aimed at reducing burden on Medicare-enrolled eligible clinicians, improving population health, lowering growth in overall health care costs, and providing clear incentives for the provision of the best quality care for Medicare beneficiaries. MIPS provides payment adjustments for MIPS eligible clinicians for providing value-driven health care services to their patients, and APMs offer a variety of opportunities that substantially alter the methods of payment for health care and enable clinicians to make fundamental changes to their day-to-day operations to improve the quality and reduce the cost of health care.

    B. Overall Impact

    We have examined the impact of this rule as required by Executive Order 12866 on Regulatory Planning and Review (September 30, 1993), Executive Order 13563 on Improving Regulation and Regulatory Review (January 18, 2011), the Regulatory Flexibility Act (RFA) (September 19, 1980, Pub. L. 96-354), section 1102(b) of the Act, section 202 of the Unfunded Mandates Reform Act of 1995 (March 22, 1995; Pub. L. 104-4), Executive Order 13132 on Federalism (August 4, 1999) and the Congressional Review Act (5 U.S.C. 804(2)).

    Executive Orders 12866 and 13563 direct agencies to assess all costs and benefits of available regulatory alternatives and, if regulation is necessary, to select regulatory approaches that maximize net benefits (including potential economic, environmental, public health and safety effects, distributive impacts, and equity). A regulatory impact analysis (RIA) must be prepared for major rules with economically significant effects ($100 million or more in any 1 year). We estimate, as discussed below in this section, that the PFS provisions included in this final rule with comment period will redistribute more than $199 million in budget neutral payments in the initial performance year. In addition, this final rule with comment period will increase government outlays for the exceptional performance payments under MIPS ($500 million), and incentive payments to QPs (approximately $333-$571 million). Therefore, we estimate that this rulemaking is “economically significant” as measured by the $100 million threshold, and hence also a major rule under the Congressional Review Act. Accordingly, we have prepared a RIA that, to the best of our ability, presents the costs and benefits of the rulemaking. The RFA requires agencies to analyze options for regulatory relief of small entities. For purposes of the RFA, small entities include small businesses, nonprofit organizations, and small governmental jurisdictions. Most hospitals, practitioners, and most other providers and suppliers are small entities, either by nonprofit status or by having annual revenues that qualify for small business status under the Small Business Administration (SBA) standards. (For details, see the SBA's Web site at http://www.sba.gov/content/table-smallbusiness-size-standards (refer to the 620000 series)). Individuals and States are not included in the definition of a small entity.

    The RFA requires that we analyze regulatory options for small businesses and other entities. We prepare a regulatory flexibility analysis unless we certify that a rule would not have a “significant economic impact on a substantial number of small entities.” The analysis must include a justification concerning the reason action is being taken, the kinds and number of small entities the rule affects, and an explanation of any meaningful options that achieve the objectives with less significant adverse economic impact on the small entities.

    There are over 1 million physicians, other practitioners, and medical suppliers that receive Medicare payment under the PFS. Approximately 95 percent of practitioners, other providers and suppliers are considered to be small entities, based upon the SBA standards. As shown later in this analysis, however, potential losses to MIPS eligible clinicians under the MIPS are a small percentage of their total Medicare Part B PFS revenue—4 percent in the initial payment year—though rising to as high as 9 percent in subsequent years. On average, clinicians' Medicare billings are only about 23 percent of total revenue,50 so even those MIPS eligible clinicians adversely affected by MIPS would rarely face losses in excess of 3 percent of revenues, the HHS standard for determining whether an economic effect is “significant.” (In order to determine whether a rule meets the RFA threshold of “significant” impact HHS has for many years used as a standard adverse effects that exceed 3 percent of either revenues or costs.) However, because there are so many affected MIPS eligible clinicians, even if only a small proportion is significantly adversely affected, the number could be “substantial.” Therefore, we are unable to conclude that an Initial Regulatory Flexibility Analysis (IRFA) is not required. Accordingly, the analysis and discussion provided in this section, as well as elsewhere in this final rule with comment period, together meet the requirements for an IRFA. We note that whether or not a particular MIPS eligible clinician or other eligible clinician is adversely affected would depend in large part on the performance of that MIPS eligible clinician or other eligible clinician and that CMS will offer significant technical assistance to MIPS eligible clinicians and other eligible clinicians in meeting the new standards.

    50 Based on National Health Expenditure Data, Physicians and Clinical Services Expenditures, https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsProjected.html.
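    The reasoning above about the HHS 3 percent significance standard can be illustrated with the following calculation, which combines the maximum negative MIPS adjustments with the average Medicare share of clinician revenue cited in this section. This is an illustration only, not an estimate of any particular clinician's exposure.

```python
# Illustration of the revenue-share reasoning in the RFA discussion above.
max_adjustment_initial_year = 0.04   # maximum negative MIPS adjustment in the initial payment year
max_adjustment_later_years = 0.09    # maximum negative adjustment in subsequent years
medicare_share_of_revenue = 0.23     # average share of clinician revenue from Medicare billings
hhs_significance_threshold = 0.03    # HHS standard for a "significant" economic impact

for adjustment in (max_adjustment_initial_year, max_adjustment_later_years):
    loss_share = adjustment * medicare_share_of_revenue   # worst-case loss as a share of total revenue
    print(f"{adjustment:.0%} adjustment -> {loss_share:.1%} of total revenue "
          f"(threshold: {hhs_significance_threshold:.0%})")
```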

    In addition, section 1102(b) of the Act requires us to prepare an RIA if a rule may have a significant impact on the operations of a substantial number of small rural hospitals. This analysis must conform to the provisions of section 604 of the RFA. For purposes of section 1102(b) of the Act, we define a small rural hospital as a hospital that is located outside of a Metropolitan Statistical Area for Medicare payment regulations and has fewer than 100 beds. We are not preparing an analysis for section 1102(b) of the Act because we have determined, and the Secretary certifies, that this final rule with comment period would not have a significant impact on the operations of a substantial number of small rural hospitals.

    Section 202 of the Unfunded Mandates Reform Act of 1995 (UMRA) also requires that agencies assess anticipated costs and benefits on state, local, or tribal governments or on the private sector before issuing any rule whose mandates require spending in any 1 year of $100 million in 1995 dollars, updated annually for inflation. In 2016, that threshold is approximately $146 million. This final rule with comment period would impose no mandates on state, local, or tribal governments or on the private sector because participation in Medicare is voluntary and because physicians and other clinicians have multiple options as to how they will participate under MIPS and discretion over their performance. Moreover, HHS interprets UMRA as applying only to “unfunded” mandates. We do not interpret Medicare payment rules as being “unfunded mandates,” but simply as conditions for the receipt of payments from the federal government for providing services that meet federal standards. This interpretation applies whether the facilities or providers are private, state, local, or tribal.

    Executive Order 13132 establishes certain requirements that an agency must meet when it promulgates a proposed rule (and subsequent final rule) that imposes substantial direct requirement costs on state and local governments, preempts state law, or otherwise has Federalism implications. Since this regulation does not impose any costs on state or local governments, the requirements of Executive Order 13132 are not applicable.

    We have prepared the following analysis, which together with the information provided in the rest of this final rule with comment period, meets all assessment requirements. The analysis explains the rationale for and purposes of this final rule with comment period; details the costs and benefits of the rule; analyzes alternatives; and presents the measures we would use to minimize the burden on small entities. As indicated elsewhere in this final rule with comment period, we are implementing a variety of changes to our regulations, payments, or payment policies to implement statutory provisions. We provide information for each of the policy changes in the relevant sections of this final rule with comment period. We are unaware of any relevant federal rules that duplicate, overlap, or conflict with this final rule with comment period. The relevant sections of this final rule with comment period contain a description of significant alternatives if applicable.

    C. Changes in Medicare Payments

    Section 101 of the MACRA (1) repeals the SGR formula for physician payment updates in Medicare, and (2) requires that we establish MIPS for eligible clinicians, under which the Secretary must use a MIPS eligible clinician's final score to determine and apply a MIPS payment adjustment factor to the clinician for a year.

    Repealing the SGR formula eliminated significant and immediate problems with Medicare's annual PFS payment updates, including implausible payment reductions (such as the 21.2 percent decrease that was scheduled for April 1, 2015). The Office of the Actuary estimated that avoiding those payment reductions results in a budgetary cost of $150.5 billion for fiscal years 2015 through 2025 compared to the prior law baseline. However, that cost is partially offset by other MACRA provisions that are estimated to have a net reduction in federal expenditures of $47.7 billion, bringing the net cost of the legislation to $102.8 billion.51 52 The largest component of the MACRA costs is its replacement of scheduled reductions in physician payments with payment rates first frozen at 2015 levels and then increasing at a rate of 0.5 percent a year during CYs 2016 through 2019. The estimates in this RIA take those legislated rates as the baseline for the estimates we make as to the costs, benefits, and transfer effects of the regulation, with some data collection provisions taking effect in 2017 and substantial payment reforms first taking effect in 2019.

    51 Based on National Health Expenditure Data, Physicians and Clinical Services Expenditures, https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsProjected.html.

    52 Estimated Financial Effects of the Medicare Access and CHIP Reauthorization Act of 2015 (H.R. 2), CMS Office of the Actuary, https://www.cms.gov/research-statistics-data-and-systems/research/actuarialstudies/downloads/2015hr2a.pdf.

    As required by the MACRA, overall payment rates for services for which payment is made under the PFS would remain at the 2019 level through 2025, but starting in 2019, the amounts paid to individual MIPS eligible clinicians and other eligible clinicians would be subject to adjustment through one of two mechanisms, depending on whether the MIPS eligible clinician or other eligible clinician meets the threshold for participation in Advanced APMs to be considered a Qualifying APM Participant (QP) or Partial QP, or is instead evaluated under MIPS.

    1. Estimated Incentive Payments to QPs in Advanced APMs

    For APMs, from 2019 through 2024, eligible clinicians receiving a substantial portion of their revenue through Advanced APMs and meeting other applicable requirements to become QPs would receive a lump-sum APM Incentive Payment equal to 5 percent of their estimated aggregate payment amounts for Medicare covered professional services in the preceding year. The APM Incentive Payment is separate from, and in addition to, the payment for services furnished by an eligible clinician during that year. Eligible clinicians who become QPs would not receive a MIPS payment adjustment under the PFS. Eligible clinicians who do not become QPs, but meet a slightly lower threshold, would be deemed Partial QPs for that year, and may elect to report to and be scored under MIPS but do not receive the APM Incentive Payment. In the 2017 QP Performance Period, we define Partial QPs to be eligible clinicians in Advanced APMs who have at least 20 percent, but less than 25 percent, of their payments for Part B covered professional services through an Advanced APM Entity, or furnish Part B covered professional services to at least 10 percent, but less than 20 percent, of their Medicare beneficiaries through an Advanced APM Entity. If the Partial QP elects to be scored under MIPS, they would be subject to all MIPS requirements and would receive a MIPS payment adjustment. This adjustment may be positive or negative. If an eligible clinician does not meet either of those QP standards, the eligible clinician would be subject to MIPS and would report to MIPS and receive the corresponding MIPS payment adjustment.

    Beginning in 2026, payment rates for clinicians who achieve QP status for a year would be increased each year by 0.75 percent, while payment rates for clinicians who do not achieve QP status would be increased each year by 0.25 percent. In addition, MIPS eligible clinicians would receive positive, neutral, or negative MIPS payment adjustments to their Part B payments in a payment year based on performance during a prior performance period. Although the legislation establishes overall payment rate and procedure parameters until 2026 and beyond, this impact analysis covers only the initial payment year (2019) in detail. After 2019, while overall payment levels will be partially bounded, we have also acknowledged in the preamble that the Department will likely revise its quality and other payment measures and overall payment thresholds and other parameters as clinicians' behavior changes.

    2. Estimated Numbers of Clinicians Eligible for MIPS

    As discussed further in this final rule with comment period, we are finalizing requirements for MIPS that may result in the exclusion of certain clinicians for various reasons. For example, the MACRA requires us to restrict eligibility for the 2019 and 2020 MIPS payment year to selected clinician types as described in section II.E.1 of this final rule with comment period. Additionally, we are excluding eligible clinicians that do not exceed the low volume threshold as defined in section II.E.3 of this rule: Those with $30,000 or less in Part B allowed charges or 100 or fewer Medicare patients as measured at the TIN/NPI level for individual reporting, the TIN level for group reporting, and the APM Entity level for reporting under the APM scoring standard. We also exclude those who are newly enrolled to Medicare and those eligible clinicians who are QPs.

    We projected the number of clinicians that would be excluded from MIPS due to their being QPs using several sources of information. First, the projections are anchored in the most recently available public information on Advanced APMs. The projections reflect APMs operating in 2017 that we indicated in the proposed rule would be Advanced APMs under proposed policies, including the Next Generation ACO Model, Comprehensive Primary Care (CPC) Plus, Comprehensive ESRD Care (CEC) Model, and the Shared Savings Program Tracks 2 and 3. We also factored in information about potential new Advanced APM opportunities including the Advanced APM criteria finalized in § 414.1415 of this final rule with comment period and the updates to the CJR model that were proposed in the Advancing Care Coordination Through Episode Payment Models proposed rule (81 FR 50794 through 28364). We also projected Advanced APM participation based on applicant counts and estimated acceptance rates to Advanced APMs that had open application periods as of September 2016. Finally, we used historical data to examine the extent to which Advanced APM participants would meet the QP thresholds of having at least 25 percent of their Part B covered professional services or at least 20 percent of their Medicare beneficiaries furnished Part B covered professional services through the Advanced APM Entity. We followed the methodologies for group determination of QP status outlined in section II.F.5 of this final rule with comment period, and we determined that all participants in the Advanced APMs that were in operation in 2014 and 2015 would have met the QP thresholds. Based on that information, we assumed that during the first QP Performance Period, the vast majority of eligible clinicians participating in Advanced APM would be QPs.53

    53 To estimate the percent of Advanced APM participants that meet the QP threshold using historical data, we identified APM Entities that participated in APMs that have similar design characteristics to those finalized for Advanced APMs in § 414.1415. In 2014, those models included the Pioneer ACO Model (which will end in 2016), and Comprehensive Primary Care Initiative (CPC). We also included the CEC model, which began in 2015. Further, we assigned Shared Savings Program ACOs that existed in 2014 their 2016 track assignments because several ACOs have since transitioned to higher risk tracks. Next, we analyzed 2014 claims data to identify the APM Entities within each of those APMs to determine which of those APM Entities met the criteria for having at least 25 percent of their Part B covered professional services or 20 percent of their beneficiaries furnished Part B covered professional services through the APM Entity.

    Using those procedures, we estimated that between approximately 70,000 and 120,000 clinicians would become QPs in the transition year with total Part B allowed charges of approximately $6,666 to $11,428 million. We estimated that the total incentive payment of 5 percent of Part B allowed charges would be between approximately $333 and $571 million. In this regard, it is longstanding HHS policy not to attempt to predict the effects of future rulemakings in order to maximize future Secretarial discretion over whether, and if so how, payment or other rules would be changed.
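    The incentive payment range cited above is 5 percent of the projected QPs' estimated Part B allowed charges, as the illustrative calculation below shows.

```python
# Illustration of the estimated APM incentive payment range (5 percent of Part B allowed charges).
allowed_charges_millions = (6_666, 11_428)   # lower and upper bounds for projected QPs, in $ millions
incentive_rate = 0.05                        # lump-sum APM Incentive Payment rate

low, high = (round(x * incentive_rate) for x in allowed_charges_millions)
print(f"Estimated APM incentive payments: ${low} million to ${high} million")   # $333 million to $571 million
```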

    To estimate the number of clinicians that are not in MIPS because their clinician type is not eligible, or because they are excluded as low-volume or newly enrolled eligible clinicians, we began with a list of the clinicians participating in Medicare Part B in 2015.54 We note that we have used the most recent data available (2015 data) for these analyses where possible. In the instances where 2015 data are unavailable, we have used 2014 data from the VM and other sources. We refined the number of eligible clinicians by restricting the sample to doctors of medicine, doctors of osteopathy, chiropractors, dentists, optometrists, podiatrists, nurse practitioners, physician assistants, certified registered nurse anesthetists, and clinical nurse specialists, since those are the practitioner types that can be MIPS eligible clinicians for CY 2017 in accordance with section 1848(q)(1)(C) of the Act.

    54 We identified the clinicians (at the TIN-NPI level) that had positive Part B allowed charges, a positive number of beneficiaries, and a reported specialty in NPPES data. Exception: for CAH-II only providers, we included providers with CAH-II PFS allowed charges greater than zero and a specialty record; we did not have any beneficiary data or non-PFS charges for CAH-II only providers.

    We estimated the number of excluded clinicians by identifying and counting the clinicians on this list who in 2015 (a) did not exceed the low-volume threshold; or (b) were assumed to be newly enrolled in Medicare by virtue of having PFS charges in 2015 but not 2014. We have estimated the effects of these various exclusions in Table 57. More than half (53-57 percent) of the 1,380,209 Medicare clinicians billing Part B will be ineligible for or excluded from MIPS. The excluded or ineligible clinicians represent approximately one-fourth (22-27 percent) of allowed Medicare Part B charges.
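    The sequential exclusion logic summarized in Table 57 can be sketched as follows. This is an illustrative outline, not the actual analytic code, and the field names are hypothetical; the low-volume parameters ($30,000 in allowed charges or 100 Medicare patients) and the newly-enrolled test (PFS charges in 2015 but not in 2014) are taken from the footnotes to Table 57.

    # Illustrative sketch of the exclusion logic; field names are hypothetical.
    MIPS_ELIGIBLE_TYPES = {
        "doctor of medicine", "doctor of osteopathy", "chiropractor", "dentist",
        "optometrist", "podiatrist", "nurse practitioner", "physician assistant",
        "certified registered nurse anesthetist", "clinical nurse specialist",
    }

    def exclusion_reason(clinician):
        if clinician["is_qp"]:
            return "Qualifying APM Participant (QP)"
        if clinician["specialty"] not in MIPS_ELIGIBLE_TYPES:
            return "ineligible clinician type"
        if clinician["pfs_charges_2014"] == 0 and clinician["pfs_charges_2015"] > 0:
            return "newly enrolled"
        if (clinician["allowed_charges_2015"] <= 30_000
                or clinician["beneficiaries_2015"] <= 100):
            return "low-volume"
        return None  # remains a MIPS eligible clinician for CY 2017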

    According to National Health Expenditure data,55 in 2014, payments for physician and other clinician services totaled $603.7 billion from all sources. Medicare paid $138.4 billion of that amount. Based on the lower bound total in Table 57 of $23,314 million in allowed charges for clinicians excluded from MIPS, we estimate that less than 17 percent of clinicians' Medicare Part B spending for services covered under the PFS will be excluded from MIPS, and less than 4 percent of all clinicians' spending from all sources will be excluded.

    55 Physicians and Clinical Services Expenditures, https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsProjected.html.
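    As a rough check on the percentages cited above (an arithmetic illustration only, using the lower bound of excluded allowed charges from Table 57):

    \[
    \frac{\$23{,}314\text{ million}}{\$138{,}400\text{ million}} \approx 16.8\% < 17\%,
    \qquad
    \frac{\$23{,}314\text{ million}}{\$603{,}700\text{ million}} \approx 3.9\% < 4\%.
    \]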

    Table 57—Projected Number of Clinicians Ineligible for or Excluded From MIPS in CY 2017, by Reason *

    [Table columns: Reason for exclusion; Medicare clinicians (TIN/NPIs) excluded; count of Medicare clinicians (TIN/NPIs) remaining after exclusion; Part B allowed charges excluded ($ in millions); Part B allowed charges remaining after exclusion ($ in millions).]

    ALL MEDICARE CLINICIANS BILLING PART B: 1,380,209 clinicians; $104,674 in Part B allowed charges.
    Qualifying APM Participants (QPs): ** 70,000 (lower bound) to 120,000 (upper bound) excluded; 1,260,209-1,310,209 remaining; $6,666-$11,428 excluded; $93,246-$98,008 remaining.
    Ineligible Clinician Types: *** 199,308 excluded; 1,060,901-1,110,901 remaining; $10,614 excluded; $82,632-$87,394 remaining.
    Newly-enrolled clinicians: **** 85,268 excluded; 975,633-1,025,633 remaining; $1,283 excluded; $81,349-$86,111 remaining.
    Low-volume clinicians: ***** 383,514 excluded; 592,119-642,119 remaining; $4,751 excluded; $76,598-$81,360 remaining.
    TOTAL EXCLUDED MEDICARE CLINICIANS: 738,090-788,090 clinicians excluded; $23,314-$28,076 in Part B allowed charges excluded.
    PERCENT EXCLUDED: 53-57 percent of clinicians; 22-27 percent of Part B allowed charges.

    * Allowed charges for covered services of the clinician under Part B. 2015 data used to estimate 2017 performance. Payments estimated using 2015 dollars.
    ** QPs have at least 25 percent of their Medicare Part B covered professional services or at least 20 percent of their Medicare beneficiaries furnished Part B covered professional services through an Advanced APM. The upper bound estimate for QPs also reflects that a small number of Advanced APM participants may be Partial Qualifying APM Participants (Partial QPs) that opt to be excluded from MIPS. For MIPS Year 1, Partial QPs are APM participants that have at least 20 percent, but less than 25 percent, of their Medicare Part B covered professional services through an Advanced APM Entity, or at least 10 percent, but less than 20 percent, of their Medicare beneficiaries furnished Part B covered professional services through an Advanced APM Entity.
    *** Section 1848(q)(1)(C) of the Act defines a MIPS eligible clinician for payment years 1 and 2 as a physician, physician assistant, nurse practitioner, clinical nurse specialist, or certified registered nurse anesthetist, or a group that includes such clinicians. (See section II.E.1 for further details.) Our estimates of ineligible clinician types count clinician types who received Part B payments but are not listed as eligible clinicians in the Act for payment year 1 or 2.
    **** Newly enrolled Medicare clinicians in our data had allowed PFS charges in CY 2015 but the NPI did not have allowed PFS charges in CY 2014.
    ***** Low-volume clinicians have less than or equal to $30,000 in allowed Medicare Part B charges or less than or equal to 100 Medicare patients.

    We have estimated the number of clinicians that we believe will be excluded from MIPS in CY 2017 by specialty. Our estimates follow in Table 58. The estimates in Table 58 are based on clinicians in eligible specialties that were excluded because they were newly enrolled, QPs, or met the proposed low-volume exclusion. However, due to data limitations, the estimates in Table 58 include only a portion of the 70,000-120,000 QPs that are listed in Table 57.56

    56 The QP estimates in Table 58 are counts of eligible clinicians that participated in the two APMs that were in effect in 2015 and meet the criteria for Advanced APMs, that is, the CPC initiative and the Pioneer ACO Model. (In our 2015 data, the Pioneer ACO Model serves as a proxy for its successor, the Next Generation ACO Model; similarly, the CPC initiative serves as a proxy for its successor, CPC+). Due to data limitations, the QP estimates in Table 58 do not count Shared Savings Program Tracks 2 and 3 participants or participants in Advanced APMs that were implemented after 2015, including CEC, Comprehensive Primary Care Plus, and the changes to the CJR model proposed in the Advancing Care Coordination Through Episode Payment Models proposed rule (81 FR 50794). In contrast, the QP estimate in Table 57 includes publicly announced APMs that will be implemented in 2016 or 2017.

    Among eligible clinicians, Table 58 shows that the percent excluded from MIPS varies widely across specialties, ranging from a low of 16.8 percent in gastroenterology to a high of 90.2 percent for chiropractors.

    We have also estimated the numbers of eligible clinicians that will be excluded from MIPS in CY 2017 by practice size as shown in Table 59. Eligible clinicians in small practices are much more likely to be excluded from MIPS than those in larger practices. For example, more than half (51.6 percent) of eligible clinicians in practices of 1-9 clinicians will be excluded from MIPS, whereas about one-fourth (27.3 percent) of eligible clinicians in practices of 100 or more clinicians will be excluded.

    BILLING CODE 4120-01-P
    [Tables 58 and 59 appear as graphics ER04NO16.017 through ER04NO16.020.]
    BILLING CODE 4120-01-C
    3. Estimated Impacts on Payments to MIPS Eligible Clinicians

    Based on the estimates of excluded clinicians in Table 57, we estimate that between approximately 592,119 and 642,119 eligible clinicians will be required to submit MIPS data to CMS in year 1.57 They are clinicians with eligible clinician types that (a) are not QPs participating in Advanced APMs, (b) exceeded the low-volume threshold, and (c) have been enrolled in Medicare for more than 1 year.

    57 Because our model assigned final scores using data from the quality performance category, it did not assign final scores to 21,764 eligible clinicians who are eligible for MIPS but reported through the measures-groups reporting mechanism, which is not continued under MIPS. However, these eligible clinicians may be scored on advancing care information and improvement activities, and those two performance categories could not be modeled at this time given limited historical data.

    Payment impacts in this final rule with comment period reflect averages by specialty and practice size based on Medicare utilization. The payment impact for a MIPS eligible clinician could vary from the average and would depend on the mix of services that the MIPS eligible clinician furnishes. The average percentage change in total revenues would be less than the impact displayed here because MIPS eligible clinicians generally furnish services to both Medicare and non-Medicare patients. In addition, MIPS eligible clinicians may receive substantial Medicare revenues for services under other Medicare payment systems that would not be affected by MIPS payment adjustment factors.

    In order to estimate the impact of MIPS on clinicians required to report, we used the most recently available data, including 2015 PQRS data, NPPES data, and other available data, to model the scoring provisions described in this regulation. First, we arithmetically calculated a hypothetical final score for each MIPS eligible clinician based on quality performance. Because the cost performance category has a zero percent weight for the initial payment year, we did not include any cost measures in the final score. Because of the lack of historical data for the advancing care information and improvement activities measures, the model does not estimate scores for the advancing care information and improvement activities performance categories either.
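    A minimal sketch of the modeling approach described in this paragraph follows, assuming (as stated above) that cost carries zero weight for the 2019 MIPS payment year and that the advancing care information and improvement activities categories cannot be modeled from historical data, so that the quality performance category effectively determines the modeled hypothetical final score. The weights shown are the transition-year category weights finalized elsewhere in this rule; the function and variable names are hypothetical.

    # Transition-year performance category weights (quality 60 percent, cost
    # 0 percent, improvement activities 15 percent, advancing care information
    # 25 percent). Only quality could be modeled from 2015 data.
    CATEGORY_WEIGHTS = {
        "quality": 0.60,
        "cost": 0.00,
        "improvement_activities": 0.15,
        "advancing_care_information": 0.25,
    }

    def modeled_final_score(quality_score):
        # quality_score is a category score from 0 to 100. For this sketch, the
        # weight of the unmodeled categories is treated as redistributed to
        # quality, so the modeled final score simply tracks the quality score.
        modeled = {"quality"}
        weight_of_modeled = sum(CATEGORY_WEIGHTS[c] for c in modeled)
        return quality_score * CATEGORY_WEIGHTS["quality"] / weight_of_modeled

    print(modeled_final_score(75.0))  # 75.0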

    Then, we implemented an exchange function based on the provisions of this final rule with comment period to translate the hypothetical final score into a negative MIPS payment adjustment or positive MIPS payment adjustment. This entailed modifying parameters of the exchange function iteratively in order to achieve distributions of MIPS payment adjustments that meet requirements related to budget neutrality and aggregate exceptional performance payment amounts, using a performance threshold of 3 points and an additional performance threshold of 70 points.
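    A minimal sketch of that calibration follows. It assumes a linear sliding-scale exchange function around the 3-point performance threshold and a single multiplicative scaling factor on positive adjustments, chosen so that estimated positive and negative adjustments to Part B allowed charges approximately offset each other. The actual exchange function and calibration procedure used for the estimates are not reproduced here, and the function names are hypothetical; the additional MIPS payment adjustment for exceptional performance (scores at or above 70 points, funded from the separate $500 million pool) is omitted from this sketch.

    # Illustrative sketch only; not the actual exchange function used for the estimates.
    def payment_adjustment(final_score, scaling_factor,
                           performance_threshold=3.0, applicable_percent=0.04):
        # applicable_percent of 0.04 reflects the 4 percent applicable percent for 2019.
        if final_score < performance_threshold:
            # Linear negative adjustment, reaching -4 percent at a final score of 0
            # (a simplifying assumption of this sketch).
            return -applicable_percent * (1.0 - final_score / performance_threshold)
        # Neutral at the threshold; linear positive adjustment above it,
        # multiplied by the budget-neutrality scaling factor.
        slope = (final_score - performance_threshold) / (100.0 - performance_threshold)
        return scaling_factor * applicable_percent * slope

    def calibrate_scaling_factor(clinicians, tolerance=1e-6):
        # clinicians: list of (final_score, part_b_allowed_charges) pairs.
        # Bisect on the scaling factor (capped at 3.0) until aggregate positive
        # and negative adjustments approximately offset (budget neutrality).
        low, high = 0.0, 3.0
        while high - low > tolerance:
            mid = (low + high) / 2.0
            net = sum(charges * payment_adjustment(score, mid)
                      for score, charges in clinicians)
            low, high = (mid, high) if net < 0 else (low, mid)
        return (low + high) / 2.0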

    Given the wide diversity of clinical practices, the initial development period of the Quality Payment Program implementation was designed to allow physicians to pick their pace of participation for the first performance period that begins January 1, 2017. Eligible clinicians will have three flexible options to submit data to MIPS and a fourth option to join Advanced APMs in order to become QPs, all of which would ensure they do not receive a negative payment adjustment in 2019. With the extensive changes to policy and flexibility, estimating impacts of this final rule with comment period using only historic 2015 quality submission data significantly overestimates the impact on clinicians, particularly on clinicians in practices with 1-9 clinicians, which have traditionally had lower participation rates. In order to assess the sensitivity of the impact to the participation rate, we have prepared two sets of analyses.

    The first analysis, which we label as “standard participation assumptions,” relies on the assumption that policy goals are designed to encourage a minimum of 90 percent of MIPS eligible clinicians to participate, regardless of practice size. Therefore, we assumed that, on average, the categories of practices with 1-9 clinicians and practices with 10-24 clinicians would have 90 percent participation. This assumption represents an increase over historical participation rates. PQRS participation rates have increased steadily since the program began; the 2014 PQRS experience report showed an increase in the participation rate from 15 percent in 2007 to 62 percent in 2014.58 In 2015, among those eligible for MIPS, 87.2 percent participated in the PQRS. In 2015, MIPS eligible practices of fewer than 10 clinicians participated in the PQRS at a rate of 58.2 percent, and MIPS eligible practices of 10-24 clinicians participated in the PQRS at a rate of 83.7 percent. Because practices of 25-99 clinicians have a 92.6 percent participation rate based on historical data and practices of 100+ clinicians have a 98.5 percent participation rate, we assumed the average participation rates of those categories of clinicians would be the same as under the 2015 PQRS. Our assumption of 90 percent average participation for the categories of practices with 1-9 or 10-24 clinicians reflects our belief that small and solo practices will respond to this final rule with comment period's flexibility, reduced data submission burden, financial incentives, and the support they will receive through technical assistance by participating at a rate close to that of other practice sizes, thereby enhancing the existing upward trend in quality data submission rates. Therefore, we assume that the quality scores assigned to new participants reflect the distribution of MIPS quality scores.

    58 2014 PQRS Experience Report at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/PQRS/Downloads/2014_PQRS_Experience_Rpt.pdf.

    The second analysis, which we label as “alternative participation assumptions,” assumes a minimum participation rate of 80 percent. Because the 2015 PQRS participation rates for practices of 10 or more clinicians are greater than 80 percent, this analysis assumes increased participation only for practices of 1-9 clinicians. Practices of 10 or more clinicians are included in the model at their historic participation rates.

    Table 60 summarizes the impact on Part B services of MIPS eligible clinicians by specialty for the standard participation assumptions. Table 61 summarizes the impact on Part B services of MIPS eligible clinicians by specialty under the alternative participation assumptions.

    Tables 62 and 63 summarize the impact on Part B services of MIPS eligible clinicians by practice size for the standard participation assumptions (Table 62) and the alternative participation assumptions (Table 63).

    Tables 60 and 62 show that under our standard participation assumptions, the vast majority (94.7 percent) of MIPS eligible clinicians are anticipated to receive positive or neutral payment adjustments for the 2019 MIPS payment year, with only 5.3 percent receiving negative MIPS payment adjustments. Using the alternative participation assumptions, Table 63 shows that 91.9 percent of MIPS eligible clinicians are expected to receive positive or neutral payment adjustments. Due to limitations of modeling the new payment policies using historic data, it is not possible to differentiate between positive and neutral adjustment expectations. However, in both the standard and alternative assumptions, participating practices of all sizes are expected to experience a neutral or small net positive impact in the 2019 MIPS payment year.

    The distribution of funds reflects this final rule with comment period's emphasis on increasing participation of MIPS eligible clinicians for the transition year of MIPS, which creates a ramp to more robust participation in future MIPS performance years.

    The following policy changes were made between the proposed and final rule with comment period to support that emphasis: modifying the low-volume threshold to exclude more clinicians, modifying the performance threshold to 3 for the initial payment year, and adding a performance floor on quality measure benchmarks.

    BILLING CODE 4120-01-P
    [Tables 60 through 63 appear as graphics ER04NO16.021 through ER04NO16.028.]
    BILLING CODE 4120-01-C

    We received several comments about the data used in the RIA.

    Comment: Several commenters noted that the 2014 data used in the RIA was not representative of the 2019 MIPS payment year. One commenter requested that CMS use 2015 data for its RIA estimates.

    Response: The RIA has been updated as requested, to the extent feasible, with 2015 data, which is the most recently available data. The claims-based readmission measures are still based on 2014 data. The identification of newly enrolled Medicare clinicians is based on both 2014 and 2015 data, and the estimated number of QPs and their allowed charges is based on 2014, 2015, and more recent data.

    In summary, in response to comments, the RIA was updated with more recent data where feasible.

    4. Potential Impact of Advancing Care Information Score

    As noted earlier, our impact analysis does not include either the advancing care information or the improvement activities performance categories. The proposed rule discussed preliminary data on potential advancing care information scores (81 FR 28370). While we estimate the final score using only the quality performance category score, we recognize that actual final scores for the 2019 MIPS payment year will also reflect advancing care information and improvement activities data.

    The costs of implementing and complying with the advancing care information performance category requirements could potentially lead to higher operational expenses for MIPS eligible clinicians. However, we believe that the combination of MIPS payment adjustments and long-term overall gains in efficiency will likely offset the initial expenditures. Because section II.E.5.g of this final rule with comment period establishes a policy to reweight the advancing care information performance category scores for MIPS eligible clinicians that were exempt from the Medicare EHR Incentive Program or received hardship exemptions (see 81 FR 28232), the final rule with comment period would not impose additional requirements for EHR adoption during the transition year. Health IT vendors may face additional costs in the transition year of MIPS if they choose to develop additional capabilities in their systems in order to submit advancing care information and improvement activities performance category data on behalf of MIPS eligible clinicians.

    Additionally, we believe a majority of MIPS eligible clinicians who are able to report the advancing care information performance category of MIPS have already adopted an EHR during Stage 1 and 2 of the prior Medicare EHR Incentive Program. As we have stated with respect to the Medicare EHR Incentive Program, we believe that future retrospective studies on the costs to implement an EHR and the return on investment (ROI) will demonstrate efficiency improvements that offset the actual costs incurred by MIPS eligible clinicians participating in MIPS and specifically in the advancing care information performance category, but we are unable to quantify those costs and benefits at this time.

    At present, evidence on EHR benefits in either improving quality of care or reducing health care costs is mixed. This is not surprising, since the adoption of EHRs as a fully functioning part of medical practice is still progressing, with numerous areas of adoption, use, and sophistication in need of improvement. Even physicians and hospitals that can meet Medicare EHR Incentive Program standards have not necessarily fully implemented all the functionality of their systems or fully exploited the diagnostic, prescribing, and coordination of care capabilities that these systems promise. Moreover, many of the most important benefits of EHRs depend on interoperability among systems, and this functionality is still lacking in many EHR systems. A recent RAND report prepared for the ONC reviewed 236 recent studies that related the use of health IT to quality, safety, and efficiency in ambulatory and non-ambulatory care settings and found that—

    A majority of studies that evaluated the effects of health IT on healthcare quality, safety, and efficiency reported findings that were at least partially positive. These studies evaluated several forms of health IT: metrics of satisfaction, care process, and cost and health outcomes across many different care settings . . . Our findings agree with previous [research] suggesting that health IT, particularly those functionalities included in the Medicare EHR Incentive Program regulation, can improve healthcare quality and safety. The relationship between health IT and [health care] efficiency is complex and remains poorly documented or understood, particularly in terms of healthcare costs, which are highly dependent upon the care delivery and financial context in which the technology is implemented.59

    59 Paul G. Shekelle, et al. Health Information Technology: An Updated Systematic Review with a Focus on Meaningful Use Functionalities. RAND Corporation. 2014.

    Other recent studies have not found definitive quantitative evidence of benefits.60 The proposed rule requested comments providing better evidence concerning EHR benefits in reducing the costs or increasing the value of EHR-supported health care. No commenters provided evidence concerning EHR benefits in reducing the costs or increasing the value of EHR-supported health care.

    60 See, for example, Saurabh Rahurkar, et al., “Despite the Spread of Health Information Exchange, There Is Little Evidence of Its Impact On Cost, Use, And Quality of Care,” Health Affairs, March 2015; and Hemant K. Bhargava and Abhay Nath Mishra, “Electronic Medical Records and Physician Productivity: Evidence from Panel Data Analysis,” Management Science, July 2014.

    Similarly, the costs of implementing and complying with the improvement activities performance category requirements could potentially lead to higher expenses for MIPS eligible clinicians. The cost of improvement activities per full-time equivalent primary care clinician will vary across practices, whether measured as total cost, as incremental cost per encounter, or as estimated cost per member per month, and may also vary based on panel size, location of practice, and other variables. For example, Magill (2015) conducted a study of certified patient-centered medical home practices in two states.61 That study found that the cost per full-time equivalent primary care clinician in certified patient-centered medical home practices varied across practices, averaging $7,691 per month in Utah practices and $9,658 per month in Colorado practices. The corresponding incremental costs per encounter were $32.71 in Utah and $36.68 in Colorado, and the average estimated cost per member per month, for an assumed panel of 2,000 patients, was $3.85 in Utah and $4.83 in Colorado. However, given the lack of comprehensive historical data for proposed improvement activities, we are unable to quantify those costs in detail at this time. The proposed rule requested public comments on the costs associated with improvement activities from practices that have implemented clinical practice improvements in the past. No commenters provided specific cost estimates of improvement activities.

    61 Magill et al. “The Cost of Sustaining a Patient-Centered Medical Home: Experience from 2 States.” Annals of Family Medicine, 2015; 13:429-435.
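    As a rough arithmetic check on the per-member-per-month figures above, assuming (as an interpretation of those figures) that the monthly cost per full-time equivalent clinician is spread over the assumed panel of 2,000 patients:

    \[
    \frac{\$7{,}691}{2{,}000} \approx \$3.85 \text{ per member per month (Utah)},
    \qquad
    \frac{\$9{,}658}{2{,}000} \approx \$4.83 \text{ per member per month (Colorado)}.
    \]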

    D. Impact on Beneficiaries

    There are a number of changes in this final rule with comment period that would have an effect on beneficiaries. In general, we believe that the changes will have a positive impact and improve the quality and value of care provided to Medicare beneficiaries.

    More broadly, we expect that over time clinician engagement in the Quality Payment Program will increasingly result in improved quality of care, resulting in lower morbidity and mortality, and in reduced spending, as physicians respond to the incentives offered by MIPS and APMs and adjust their clinical practices in order to maximize their performance on specified quality measures and activities. The various shared savings initiatives already operating have demonstrated that all three outcomes are possible. For example, in August of 2015, we issued 2014 quality and financial performance results showing that Medicare ACOs continue to improve the quality of care for Medicare beneficiaries while generating net savings to the Medicare trust fund as well as shared savings to some model participants.62 In 2014, the 20 ACOs in the Pioneer ACO Model and 333 Shared Savings Program ACOs generated more than $411 million in total savings, which includes all ACOs' savings and losses. Additionally, in their first years of implementation, both Pioneer and Shared Savings Program ACOs had higher quality care than Medicare FFS providers on measures for which comparable data were available. Shared Savings Program patients with multiple chronic conditions and with high predicted Medicare spending received better quality care than comparable FFS patients.63 Between the first and third performance periods, Pioneer ACOs improved their average quality score from 73 percent to 87 percent. The Shared Savings Program ACOs yielded $465 million in savings to the Medicare Trust Funds in 2014.64

    62https://www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2015-Fact-sheets-items/2015-08-25.html.

    63 J.M. McWilliams et al., “Changes in Patients' Experiences in Medicare Accountable Care Organizations.” New England Journal of Medicine 2014; 371:1715-1724, DOI: 10.1056/NEJMsa1406552.

    64 The cost savings were for the second year of Shared Savings Program implementation and the third year of Pioneer ACO implementation. https://www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2015-Fact-sheets-items/2015-08-25.html.

    Results from the first year of the CPC Initiative indicate that it has generated nearly enough savings in Medicare health care expenditures to offset care management fees paid by CMS. The primary sources of the savings were reduced rates of hospital admissions and ED visits. The bulk of the savings was generated by patients in the highest-risk quartile, but favorable results were also seen in other patients. Over 90 percent of practices successfully met all first-year transformation requirements. The expenditure impact estimates differ across the seven regions. Additional time and data are needed to assess impact on care quality. The results from the first year of the CPC Initiative should be interpreted cautiously as effects are emerging earlier than anticipated, and additional research is needed to assess how the initiative affects cost and quality of care beyond the first year. Because the effects of the CPC Initiative are likely to be larger in subsequent years, these early results suggest it is likely the model will eventually break even or generate savings.65

    65https://blog.cms.gov/2015/01/23/moving-forward-on-primary-care-transformation/. For more detail see https://innovation.cms.gov/files/reports/cpci-evalrpt1.pdf.

    Basing payment in part on performance metrics is still an evolving art and, as discussed throughout this preamble, there are multiple variables and as yet no definitive answers as to what combinations of measures, benchmarks, and other variables will achieve the best results over time. Accordingly, we are unable at this time to provide specific dollar estimates of these benefits and cost reductions.

    E. Impact on Other Health Care Programs and Providers

    The MIPS is aimed at Medicare FFS physicians and other clinicians paid under the PFS. These physicians and other clinicians are almost all engaged in serving patients covered by other payers as well. Because Medicare covers only about one in seven persons (though a considerably higher share of total healthcare spending, since older persons incur far higher expenses on average than younger persons), for most of those services that will be subject to MIPS payment adjustments, Medicare provides only a fraction of practice revenues. Moreover, it is unlikely that many insurance payers will adopt MIPS or MIPS-like payment models in the short run. Hence, MIPS incentives are necessarily attenuated. On the other hand, changing practices for one group of patients will possibly lead to changes for other patients (for example, EHR systems are almost always used for all patients served by a physician). Physicians and other clinicians may find it simpler and more efficient to adopt clinical practice improvements for all patients, regardless of payer, in response to the Quality Payment Program's incentives, through the use of both MIPS measures and activities and APMs. Furthermore, since the Quality Payment Program eventually rewards participation in APMs based on services furnished to patients beyond those in Medicare, other payers may start to develop more models in which clinicians and patients can participate. Hence, there are likely to be beneficial effects on a far broader range of patients in the health care system than simply Medicare patients, and we believe those effects would include improved health care quality and lower costs over time. However, we have no basis at this time for quantifying such effects.

    We note that large proportions of the Medicare and Medicaid programs are already delivered through capitated insurance payments to HMOs, PPOs, and related organizations. The Medicare Advantage Plans and related State programs therefore already have substantial incentives to improve quality and reduce costs. MIPS does not affect provider payments under those programs directly, which have their own reimbursement mechanisms for physicians and other clinicians. In many but not all cases, those insurance carriers do use incentive mechanisms that are similar in purpose and design to the kinds of APMs that we expect will arise under the new payment adjustments. We would not expect major near-term changes in HMO and PPO payment arrangements, or performance, from any MIPS or APM spillover effects. Regardless, we have no basis at this time for quantifying any such effects.

    There are other potentially affected provider entities, including hospitals, skilled nursing facilities, CAHs (largely small rural hospitals), and providers serving unique populations, such as providers of tribal health care services. In none of these cases do we believe that MIPS would have significant effects on substantial numbers of providers. But to the extent that MIPS and increasing participation in APMs over time succeed in improving quality and reducing costs, there may be some beneficial effects not only on patients but also on some providers.

    As noted previously in this section of the final rule with comment period, and as discussed in this subsection, we have concluded that financial effects on either directly or indirectly affected small entities, including rural hospitals, will be minimal. We welcomed comments on these conclusions.

    The following is a summary of the comments we received regarding the financial effects on either directly or indirectly affected small entities, including rural hospitals.

    Comment: Many commenters expressed concerns that the proposed rule would have negative financial consequences on small or solo practices, practices in rural and medically underserved areas, small hospital systems, primary care practices, and practices treating medically complex patients. Several commenters recommended policies to address the disparate effect on small and solo practices.

    Response: As noted above, in response to many public comments, we implemented several policy changes that reduced the impact of this final rule with comment period on small and solo practices, including modifying the low-volume threshold to reduce the burden for small and solo practices. Further, this final rule with comment period's scoring provisions are designed to encourage participation, incentivize continuous improvement, and move participants on a glide path to improved health care delivery in the Quality Payment Program. The RIA has been modified to reflect these policy changes, and shows that this final rule with comment period does not have disparate effects on small and solo practices that participate in reporting.

    Comment: Several commenters were concerned that the negative effects on small/solo practices would be discriminatory against racial/ethnic minority physicians or racial/ethnic minority patients. One commenter noted that Hispanic physicians were more likely to be in small and solo practices, and Hispanic and non-English speaking patients were more likely to be treated by small and solo practices, and recommended that small and solo practices be protected for their diversity value. Further, one commenter stated the rule would have a negative impact on inner city clinics and lower-income patients.

    Response: As noted above, we implemented several policy changes designed to address the commenters' concerns. The RIA has been modified to reflect these policy changes, and our modeling shows that the rule does not have disparate effects on small and solo practices that participate in reporting.

    Comment: Many commenters believed that the rule was administratively complex and confusing, and increased administrative burden for clinicians, especially small and solo practices, and rural practices, including hospitals.

    Response: We have taken numerous steps to simplify the Quality Payment Program, particularly for the transition. For example, as discussed in II.E.5.f.(3) and II.E.5.g.(6) of this final rule with comment period, the advancing care information and improvement activities performance category reporting requirements have been simplified between the proposed and final rule with comment period. We do not believe, however, that our general discussion of the potential costs of implementing the rule need further modification.

    Comment: Several commenters expressed concerns about the increased administrative costs and potential detrimental effects to IHS and Tribal providers. Several commenters requested clarification on the extent to which the proposed rule requirements would affect IHS and tribal providers. Two commenters requested clarification on the extent to which the RIA tables included IHS and tribal providers, and requested further analyses if they were not included. Several commenters recommended provisions to limit any detrimental impact to IHS and tribal providers including: Excluding them from MIPS, accepting the measures they report to other programs, technical assistance, separate benchmarks, and non-punitive approaches to compliance.

    Response: As we stated in the proposed rule, we continue to believe that the Quality Payment Program will not have significant effects on substantial numbers of providers of tribal health care services. Because the data used for our scoring model does not identify tribal and IHS providers, we are unable to estimate the amount of MIPS payments for those providers. While tribal and IHS-provided care is not itself covered under MACRA, physicians working in tribal or IHS health facilities may be subject to MACRA if they treat Medicare beneficiaries. However, we believe the number of IHS and tribal providers that will be covered under MACRA will be small, especially because many of those clinicians will not exceed the low-volume threshold. We will consider whether we should adopt any policies aimed specifically at IHS and other tribal providers in the future.

    Comment: Many commenters noted the high cost of implementing EHRs and Health IT, particularly for small, solo, or rural clinicians. Several commenters noted that Medicare does not reimburse physicians for the time required to implement or use EHRs and other Health IT. Several commenters noted their practices had spent large amounts of money to comply with the previous EHR Incentive Program requirements, and expressed frustration that they needed to spend additional funds to comply with the new requirements.

    Response: As we noted in the CY 2012 PFS final rule (76 FR 73464), we believe some eligible clinicians will incur costs associated with purchasing an EHR product if they have not purchased them already. However, we do not believe that the majority of eligible clinicians will purchase an EHR solely for the purpose of participating in MIPS.

    We understand commenters' concerns about the costs of purchasing and implementing EHR systems in order to participate in MIPS. In response to public comments, we have further simplified the advancing care information reporting requirement and modified the low-volume threshold to exclude more small and solo clinicians with few encounters and low Medicare allowed costs.

    In summary, we received many comments about the potential impacts of the rule on small, solo, or rural clinicians. In response to many comments, we modified the low-volume threshold to increase the number of small, solo, and rural clinicians exempt from MIPS requirements. Further, as a result of comments, we have decided to finalize policies throughout the rule, which will focus the Quality Payment Program in its transition year on encouraging participation and educating clinicians while minimizing the risks for negative MIPS payment adjustment. The transition year policies will create a ramp to more robust participation in future years. The policy changes are reflected in the RIA estimates, which show that the risk for negative MIPS payment adjustment is minimal for MIPS eligible clinicians, including small and solo practices that participate.

    F. Alternatives Considered

    This final rule with comment period contains a range of policies, including many provisions related to specific statutory provisions. The preceding preamble provides descriptions of the statutory provisions that are addressed, identifies those policies where discretion has been exercised, presents our rationale for our finalized policies and, where relevant, analyzes alternatives that we considered. Although it is hard to single out any one alternative for public comment, the proposed rule particularly called attention to and requested comments on the performance threshold and the level at which it is set for scoring purposes under MIPS.

    As described previously, under section 1848(q)(6)(D)(i) of the Act, for each year of MIPS, the Secretary shall compute a performance threshold with respect to which the final scores of MIPS eligible clinicians are compared for purposes of determining the MIPS payment adjustment factors under section 1848(q)(6)(A) of the Act for a year. The performance threshold for a year must be either the mean or median (as selected by the Secretary, which may be reassessed every 3 years) of the final scores for all MIPS eligible clinicians for a prior period specified by the Secretary. Section 1848(q)(6)(D)(iii) of the Act outlines a special rule for the initial 2 years of MIPS, which requires the Secretary, prior to the performance period for such years, to establish a performance threshold for purposes of determining the MIPS payment adjustment factors under paragraph (A) and an additional performance threshold for purposes of determining the additional MIPS payment adjustment factors under paragraph (C), each of which shall be based on a period prior to the performance periods and take into account data available with respect to performance on measures and activities that may be used under the performance categories and other factors determined appropriate by the Secretary.

    Depending on where the threshold is set within those parameters, the proportions and distributions of MIPS eligible clinicians receiving payment reductions versus positive payment adjustments can change dramatically from our estimates. For example, in Table 60, we estimated (based on available data) that 3.7 percent of Colon/Rectal Surgery specialists will receive a negative payment adjustment under MIPS. Setting the performance threshold at a lower level would enable more Colon/Rectal Surgery specialists to avoid negative MIPS payment adjustments and potentially qualify for more positive MIPS payment adjustments. Conversely, we estimated above that 96.7 percent of Interventional Radiology specialists would receive a positive MIPS payment adjustment under the current proposal. Setting the performance threshold at a higher level would result in fewer Interventional Radiology specialists qualifying for positive MIPS payment adjustments, and potentially more of them receiving negative MIPS payment adjustments. But any payment changes resulting from changes to the performance threshold policy will depend primarily on changes to practices and other responses from MIPS eligible clinicians.

    The proposed rule requested comment on these alternatives, on all previous estimates of effects, and on any other issues or options that might improve the substantive effects of the proposed rule, or our estimates of those effects. We were particularly interested in comments on any aspects of the proposed rule that might inadvertently or unintentionally create adverse effects on the delivery of high quality and high value health care, and on options that might reduce such effects.

    Comments on the alternatives to the proposed performance threshold are discussed in section II.E.7.c. of this final rule with comment period.

    G. Assumptions and Limitations

    We note several limitations to the analyses described above, which estimated MIPS eligible clinicians' eligibility, negative MIPS payment adjustments, and positive MIPS payment adjustments for the first MIPS performance period (2017) based primarily on 2015 data:

    The scoring model cannot fully reflect MIPS eligible clinicians' behavioral responses to MIPS. The scoring model assumes higher participation in MIPS quality reporting than under the PQRS. Other potential behavioral responses are not addressed in our scoring model. The scoring model assumes that the quality measures submitted, and the distribution of scores on those measures, would be similar in the transition year to what they were under the 2015 PQRS program.

    Limited historical data for two performance categories. Because we have limited historical data for the proposed advancing care information and improvement activities performance categories, the modeled scoring estimates were based solely on quality measures. Our scoring model estimates do not include advancing care information or improvement activities performance category scores.

    The scoring model does not reflect the growth in Advanced APM participation between 2015 and 2017. Due to data limitations, the scoring model could only identify clinicians that participated in APMs operating in 2015 that would have been determined to be Advanced APMs under the policies finalized in this rule. Several new APMs that we anticipate will be Advanced APMs have been implemented or will be implemented between 2015 and 2017. Further, some eligible clinicians will join the successors of APMs already in existence in 2015. In contrast to the scoring model, the CMS Innovation Center's QP estimates use methods that do reflect projected growth in APM participation between 2015 and 2017.

    There are additional limitations to our estimates. To the extent that there are year-to-year changes in data submission and in the volume and mix of services provided by MIPS eligible clinicians, the actual impact on total Medicare revenues will differ from that shown in Tables 60-63. Due to the limitations above, there is considerable uncertainty around our estimates that is difficult to quantify in detail.

    H. Accounting Statement

    As required by OMB Circular A-4 (available at http://www.whitehouse.gov/omb/circulars/a004/a-4.pdf), in Table 64 (Accounting Statement), we have prepared an accounting statement.

    We have not attempted to quantify the benefits of this rule because of the many uncertainties as to both clinician behaviors and resulting effects on patient health and cost reductions. For example, the applicable percentage for MIPS incentives changes over time, increasing from 4 percent in 2019 to 9 percent in 2022 and subsequent years, and we are unable to estimate precisely how physicians will respond to the increasing incentives. As noted above, in CY 2019, we estimate that we will distribute approximately $199 million in payment adjustments on a budget-neutral basis, which reflects the applicable percent for 2019 required under section 1848(q)(6)(B)(i) of the Act and excludes $500 million in exceptional performance payments. In 2020, section 1848(q)(6)(B)(ii) of the Act specifies that the applicable percent will be 5 percent, which we estimate would mean that we will distribute approximately $249 million in payment adjustments on a budget-neutral basis, ignoring changes in clinical practice, volume growth, inflation, or other changes that may affect Medicare physician payments, effects of changes in data submission practices, advancing care information scores, and improvement activities scores, as well as the $500 million in exceptional performance payments. Finally, in 2021, section 1848(q)(6)(B)(iii) of the Act specifies that the applicable percent will be 7 percent, which we estimate would mean that we will distribute approximately $435 million in payment adjustments on a budget-neutral basis, again ignoring changes in clinical practice, volume growth, inflation, or other changes that may affect Medicare physician payments, as well as the $500 million in exceptional performance payments.
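    For reference, the schedule of applicable percents described above can be written as a small lookup. This is an illustrative reference only; the percents come from section 1848(q)(6)(B) of the Act as described in this paragraph, and the function name is hypothetical.

    # Applicable percents under section 1848(q)(6)(B) of the Act, as described above.
    APPLICABLE_PERCENT = {2019: 0.04, 2020: 0.05, 2021: 0.07}

    def applicable_percent(payment_year):
        if payment_year < 2019:
            raise ValueError("MIPS payment adjustments begin with the 2019 payment year")
        return APPLICABLE_PERCENT.get(payment_year, 0.09)  # 9 percent for 2022 and later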

    Further, the addition of new Advanced APMs and growth in Advanced APM participation over time will affect the pool of MIPS eligible clinicians, and for those that are MIPS eligible clinicians, may change their relative performance. The $500 million available for exceptional performance and the 5 percent APM Incentive Payment for QPs are only available from 2019 through 2024. Beginning in 2026, payment for services furnished by QPs will receive a higher update than for services furnished by non-QPs. However, we are unable to estimate the number of QPs in those years, as we cannot project the number or types of Advanced APMs that will be made available in those years through future CMS initiatives proposed and implemented in those years, nor the number of QPs for those future Advanced APMs.

    The percentage of the final score attributable to each performance category will change over time, and we will incorporate improvement scoring in future years. The Improvement activities category represents an entirely new category for measuring MIPS eligible clinicians' performance. We may also propose policy changes in future years as we continue implementing MIPS and as MIPS eligible clinicians accumulate experience with the new system. Moreover, there are interactions between the MIPS and APM incentive programs and other shared savings and incentive programs that we cannot model or project. Nonetheless, even if ultimate savings and health benefits represent only low fractions of current experience, benefits are likely to be substantial in overall magnitude.

    Table 64 includes our estimate for MIPS payment adjustments ($199 million), the exceptional performance payments under MIPS ($500 million), and incentive payments to QPs (using the range described in the preceding analysis, approximately $333-$571 million). However, of these three elements, only the negative MIPS payment adjustments are shown as estimated decreases.
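    The $1,032 to $1,270 million range of estimated increases shown in Table 64 is consistent with summing these three elements (an arithmetic check only):

    \[
    \$199\text{M} + \$500\text{M} + \$333\text{M} = \$1{,}032\text{M},
    \qquad
    \$199\text{M} + \$500\text{M} + \$571\text{M} = \$1{,}270\text{M}.
    \]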

    66 A range of estimates is provided due to uncertainty about the number of Advanced APM participants that will meet the QP threshold in 2016.

    Table 64—Accounting Statement

    Category: Transfers.
    CY 2019 Annualized Monetized Transfers: Estimated increase of between $1,032 and $1,270 million in payments for higher performance under MIPS and to QPs.66
    From Whom to Whom? Increased Federal Government payments to physicians, other practitioners and suppliers who receive payment under the Medicare Physician Fee Schedule.
    CY 2019 Annualized Monetized Transfers: Estimated decrease of $199 million for lower performance under MIPS.
    From Whom to Whom? Reduced Federal Government payments to physicians, other practitioners and suppliers who receive payment under the Medicare Physician Fee Schedule.
    Note: These estimates are identical under both a 7 percent and 3 percent discount rate.

    We received three comments in response to the estimated federal costs of implementing the rule in Table 64.

    Based on National Health Expenditure data,67 total Medicare expenditures for physician and clinical services in 2014 reached $138.4 billion. Expenditures for physician and clinical services from all sources reached $603.7 billion. Table 60 shows that the aggregate negative MIPS payment adjustment for all MIPS eligible clinicians under MIPS is estimated at $199 million, which represents less than 0.2 percent of Medicare payments for physician and clinical services and less than 0.1 percent of payments for physician and clinical services from all sources. Table 60 also shows that the aggregate positive payment adjustment for MIPS eligible clinicians under MIPS is estimated at $699 million (including additional MIPS payment adjustments for exceptional performance), which represents less than 1 percent of Medicare expenditures for physician and clinical services and less than 0.2 percent of expenditures from all sources for physician and clinical services.

    67 Physicians and Clinical Services Expenditures, https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsProjected.html.
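    As an arithmetic check on the shares cited above (an illustration only):

    \[
    \frac{\$199\text{M}}{\$138{,}400\text{M}} \approx 0.14\% < 0.2\%,
    \qquad
    \frac{\$199\text{M}}{\$603{,}700\text{M}} \approx 0.03\% < 0.1\%,
    \]
    \[
    \frac{\$699\text{M}}{\$138{,}400\text{M}} \approx 0.5\% < 1\%,
    \qquad
    \frac{\$699\text{M}}{\$603{,}700\text{M}} \approx 0.12\% < 0.2\%.
    \]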

    Comment: One commenter requested that the government provide an estimate of its costs to implement the rule.

    Response: Supporting Statement A of this rule's Paperwork Reduction Act package includes a discussion of the cost to the government of implementing the rule. Hence, no revisions were made to the accounting table as a result of this comment.

    Comment: Two commenters noted the administrative complexity of meeting both federal Medicare and state Medicaid administrative requirements for dually eligible beneficiaries. Those commenters requested that CMS factor dually eligible beneficiaries into its thinking about the timing of MIPS and requested that CMS provide guidance to states on implementing the Quality Payment Program.

    Response: We intend to work with the states during MIPS implementation, and will consider commenters' suggestions about policies with respect to dually eligible beneficiaries in the future. No revisions were made to the accounting table as a result of this comment.

    In summary, after considering comments on government costs, no changes were made to the accounting table.

    List of Subjects
    42 CFR Part 414

    Administrative practice and procedure, Biologics, Drugs, Health facilities, Health professions, Kidney diseases, Medicare, Reporting and recordkeeping requirements.

    42 CFR Part 495

    Administrative practice and procedure, Health facilities, Health maintenance organizations (HMO), Health professions, Health records, Medicaid, Medicare, Penalties, Reporting and recordkeeping requirements.

    For the reasons set forth in the preamble, the Centers for Medicare & Medicaid Services amends 42 CFR chapter IV as set forth below:

    PART 414—PAYMENT FOR PART B MEDICAL AND OTHER HEALTH SERVICES
    1. The authority citation for part 414 continues to read as follows:
    Authority:

    Secs. 1102, 1871, and 1881(b)(1) of the Social Security Act (42 U.S.C. 1302, 1395hh, and 1395rr(b)(1)).

    § 414.90 [Amended]
    2. In § 414.90—
    a. Amend paragraph (e) introductory text by removing the phrase “and subsequent years” and adding in its place the phrase “through 2018”; and
    b. Amend paragraph (e)(1)(ii) by removing the phrase “and each subsequent year” and adding in its place the phrase “through 2018”.
    3. Subpart O is added to part 414 to read as follows:
    Subpart O—Merit-Based Incentive Payment System and Alternative Payment Model Incentive
    Sec.
    414.1300 Basis and scope.
    414.1305 Definitions.
    414.1310 Applicability.
    414.1315 [Reserved]
    414.1320 MIPS performance period.
    414.1325 Data submission requirements.
    414.1330 Quality performance category.
    414.1335 Data submission criteria for the quality performance category.
    414.1340 Data completeness criteria for the quality performance category.
    414.1350 Cost performance category.
    414.1355 Improvement activities performance category.
    414.1360 Data submission criteria for the improvement activities performance category.
    414.1365 Subcategories for the improvement activities performance category.
    414.1370 APM scoring standard under MIPS.
    414.1375 Advancing care information performance category.
    414.1380 Scoring.
    414.1385 Targeted review and review limitations.
    414.1390 Data validation and auditing.
    414.1395 Public reporting.
    414.1400 Third party data submission.
    414.1405 Payment.
    414.1410 Advanced APM determination.
    414.1415 Advanced APM criteria.
    414.1420 Other payer advanced APMs.
    414.1425 Qualifying APM participant determination: In general.
    414.1430 Qualifying APM participant determination: QP and partial QP thresholds.
    414.1435 Qualifying APM participant determination: Medicare option.
    414.1440 Qualifying APM participant determination: All-payer combination option.
    414.1445 Identification of other payer advanced APMs.
    414.1450 APM incentive payment.
    414.1455 Limitation on review.
    414.1460 Monitoring and program integrity.
    414.1465 Physician-focused payment models.
    Subpart O—Merit-Based Incentive Payment System and Alternative Payment Model Incentive
    § 414.1300 Basis and scope.

    (a) Basis. This subpart implements the following provisions of the Act:

    (1) Section 1833(z)—Incentive Payments for Participation in Eligible Alternative Payment Models.

    (2) Section 1848(a)—Payment for Physicians' Services Based on Fee Schedule.

    (3) Section 1848(k)—Quality Reporting System.

    (4) Section 1848(q)—Merit-based Incentive Payment System.

    (b) Scope. This subpart sets forth the following:

    (1) The circumstances under which eligible clinicians are not considered MIPS eligible clinicians with respect to a year.

    (2) How individual MIPS eligible clinicians can have their performance assessed as a group.

    (3) The data submission methods and data submission criteria for each of the MIPS performance categories.

    (4) Methods for calculating a performance category score for each of the MIPS performance categories.

    (5) Methods for calculating a MIPS final score and applying the MIPS payment adjustment to MIPS eligible clinicians.

    (6) Requirements for an APM to be designated an “Advanced APM.”

    (7) Methods for eligible clinicians and entities participating in Advanced APMs to meet the participation thresholds to become Qualifying APM Participants (QPs) and Partial QPs.

    (8) Methods and processes for counting participation in Other Payer Advanced APMs in making QP and Partial QP determinations.

    (9) Methods for calculating and paying the APM Incentive Payment to QPs.

    (10) Criteria for Physician-Focused Payment Models (PFPMs).

    § 414.1305 Definitions.

    As used in this subpart, unless otherwise indicated—

    Additional performance threshold means the numerical threshold for a MIPS payment year against which the final scores of MIPS eligible clinicians are compared to determine the additional MIPS payment adjustment factors for exceptional performance.

    Advanced Alternative Payment Model (Advanced APM) means an APM that CMS determines meets the criteria set forth in § 414.1415.

    Advanced APM Entity means an APM Entity that participates in an Advanced APM or Other Payer Advanced APM.

    Affiliated practitioner means an eligible clinician identified by a unique APM participant identifier on a CMS-maintained list who has a contractual relationship with the Advanced APM Entity for the purposes of supporting the Advanced APM Entity's quality or cost goals under the Advanced APM.

    Affiliated practitioner list means the list of Affiliated Practitioners of an APM Entity that is compiled from a CMS-maintained list.

    Alternative Payment Model (APM) means any of the following:

    (1) A model under section 1115A of the Act (other than a health care innovation award).

    (2) The shared savings program under section 1899 of the Act.

    (3) A demonstration under section 1866C of the Act.

    (4) A demonstration required by Federal law.

    APM Entity means an entity that participates in an APM or payment arrangement with a non-Medicare payer through a direct agreement or through Federal or State law or regulation.

    APM Entity group means the group of eligible clinicians participating in an APM Entity, as identified by a combination of the APM identifier, APM Entity identifier, Taxpayer Identification Number (TIN), and National Provider Identifier (NPI) for each participating eligible clinician.

    APM Incentive Payment means the lump sum incentive payment for a year paid to an eligible clinician who is a QP for the year from 2019 through 2024.

    Attestation means a secure mechanism, specified by CMS, with respect to a particular performance period, whereby a MIPS eligible clinician or group may submit the required data for the advancing care information or the improvement activities performance categories of MIPS in a manner specified by CMS.

    Attributed beneficiary means a beneficiary attributed to the Advanced APM Entity under the terms of the Advanced APM or Other Payer Advanced APM and listed as an attributed beneficiary on the latest available list of attributed beneficiaries at the time of a QP determination.

    Attribution-eligible beneficiary means a beneficiary who during the QP Performance Period:

    (1) Is not enrolled in Medicare Advantage or a Medicare cost plan;

    (2) Does not have Medicare as a secondary payer;

    (3) Is enrolled in both Medicare Parts A and B;

    (4) Is at least 18 years of age;

    (5) Is a United States resident; and

    (6) Has a minimum of one claim for evaluation and management services furnished by an eligible clinician who is in the APM Entity for any period during the QP Performance Period or, for an Advanced APM that does not base attribution on evaluation and management services and for which attributed beneficiaries are not a subset of the attribution-eligible beneficiary population based on the requirement to have at least one claim for evaluation and management services furnished by an eligible clinician who is in the APM Entity for any period during the QP Performance Period, the attribution basis determined by CMS based upon the methodology the Advanced APM uses for attribution, which may include a combination of evaluation and management and/or other services.

    Certified Electronic Health Record Technology (CEHRT) means the following:

    (1) For any calendar year before 2018, EHR technology (which could include multiple technologies) certified under the ONC Health IT Certification Program that meets one of the following:

    (i) The 2014 Edition Base EHR definition (as defined at 45 CFR 170.102) and that has been certified to the certification criteria that are necessary to report on applicable objectives and measures specified for the MIPS advancing care information performance category, including the applicable measure calculation certification criterion at 45 CFR 170.314(g)(1) or (2) for all certification criteria that support an objective with a percentage-based measure.

    (ii) Certification to—

    (A) The following certification criteria:

    (1) CPOE at—

    (i) 45 CFR 170.314(a)(1), (18), (19) or (20); or

    (ii) 45 CFR 170.315(a)(1), (2) or (3).

    (2)(i) Record demographics at 45 CFR 170.314(a)(3); or

    (ii) 45 CFR 170.315(a)(5).

    (3)(i) Problem list at 45 CFR 170.314(a)(5); or

    (ii) 45 CFR 170.315(a)(6).

    (4)(i) Medication list at 45 CFR 170.314(a)(6); or

    (ii) 45 CFR 170.315(a)(7).

    (5)(i) Medication allergy list at 45 CFR 170.314(a)(7); or

    (ii) 45 CFR 170.315(a)(8).

    (6)(i) Clinical decision support at 45 CFR 170.314(a)(8); or

    (ii) 45 CFR 170.315(a)(9).

    (7) Health information exchange at transitions of care at one of the following:

    (i) 45 CFR 170.314(b)(1) and (2).

    (ii) 45 CFR 170.314(b)(1), (b)(2), and (h)(1).

    (iii) 45 CFR 170.314(b)(1), (b)(2), and (b)(8).

    (iv) 45 CFR 170.314(b)(1), (b)(2), (b)(8), and (h)(1).

    (v) 45 CFR 170.314(b)(8) and (h)(1).

    (vi) 45 CFR 170.314(b)(1), (b)(2), and 170.315(h)(2).

    (vii) 45 CFR 170.314(b)(1), (b)(2), (h)(1), and 170.315(h)(2).

    (viii) 45 CFR 170.314(b)(1), (b)(2), (b)(8), and 170.315(h)(2).

    (ix) 45 CFR 170.314(b)(1), (b)(2), (b)(8), (h)(1), and 170.315(h)(2).

    (x) 45 CFR 170.314(b)(8), (h)(1), and 170.315(h)(2).

    (xi) 45 CFR 170.314(b)(1), (b)(2), and 170.315(b)(1).

    (xii) 45 CFR 170.314(b)(1), (b)(2), (h)(1), and 170.315(b)(1).

    (xiii) 45 CFR 170.314(b)(1), (b)(2), (b)(8), and 170.315(b)(1).

    (xiv) 45 CFR 170.314(b)(1), (b)(2), (b)(8), (h)(1), and 170.315(b)(1).

    (xv) 45 CFR 170.314(b)(8), (h)(1), and 170.315(b)(1).

    (xvi) 45 CFR 170.314(b)(1), (b)(2), (b)(8), (h)(1), 170.315(b)(1), and 170.315(h)(1).

    (xvii) 45 CFR 170.314(b)(1), (b)(2), (b)(8), (h)(1), 170.315(b)(1), and 170.315(h)(2).

    (xviii) 45 CFR 170.314(h)(1) and 170.315(b)(1).

    (xix) 45 CFR 170.315(b)(1) and (h)(1).

    (xx) 45 CFR 170.315(b)(1) and (h)(2).

    (xxi) 45 CFR 170.315(b)(1), (h)(1), and (h)(2); and

    (B) Clinical quality measures at—

    (1) 45 CFR 170.314(c)(1) or 170.315(c)(1);

    (2) 45 CFR 170.314(c)(2) or 170.315(c)(2);

    (3) Clinical quality measure certification criteria that support the calculation and reporting of clinical quality measures at 45 CFR 170.314(c)(2) and (3) and optionally (4); or 45 CFR 170.315(c)(3)(i) and (ii) and optionally (c)(4); and can be electronically accepted by CMS if the data is submitted electronically.

    (C) Privacy and security at—

    (1) 45 CFR 170.314(d)(1) or 170.315(d)(1);

    (2) 45 CFR 170.314(d)(2) or 170.315(d)(2);

    (3) 45 CFR 170.314(d)(3) or 170.315(d)(3);

    (4) 45 CFR 170.314(d)(4) or 170.315(d)(4);

    (5) 45 CFR 170.314(d)(5) or 170.315(d)(5);

    (6) 45 CFR 170.314(d)(6) or 170.315(d)(6);

    (7) 45 CFR 170.314(d)(7) or 170.315(d)(7);

    (8) 45 CFR 170.314(d)(8) or 170.315(d)(8); and

    (D) The certification criteria that are necessary to report on applicable objectives and measures specified for the MIPS advancing care information performance category, including the applicable measure calculation certification criterion at 45 CFR 170.314(g)(1) or (2) or 45 CFR 170.315(g)(1) or (2) for all certification criteria that support an objective with a percentage-based measure.

    (iii) The definition for 2018 and subsequent years specified in paragraph (2) of this definition.

    (2) For 2018 and subsequent years, EHR technology (which could include multiple technologies) certified under the ONC Health IT Certification Program that meets the 2015 Edition Base EHR definition (as defined at 45 CFR 170.102) and has been certified to the 2015 Edition health IT certification criteria—

    (i) At 45 CFR 170.315(a)(12) (family health history) and 45 CFR 170.315(e)(3) (patient health information capture); and

    (ii) Necessary to report on applicable objectives and measures specified for the MIPS advancing care information performance category including the following:

    (A) The applicable measure calculation certification criterion at 45 CFR 170.315(g)(1) or (2) for all certification criteria that support an objective with a percentage-based measure.

    (B) Clinical quality measure certification criteria that support the calculation and reporting of clinical quality measures at 45 CFR 170.315(c)(2) and (c)(3)(i) and (ii) and optionally (c)(4), and can be electronically accepted by CMS.

    CMS-approved survey vendor means a survey vendor that is approved by CMS for a particular performance period to administer the CAHPS for MIPS survey and to transmit survey measures data to CMS.

    CMS Web Interface means a web product developed by CMS that is used by groups that have elected to utilize the CMS Web Interface to submit data on the MIPS measures and activities.

    Covered professional services has the meaning given by section 1848(k)(3)(A) of the Act.

    Eligible clinician means “eligible professional” as defined in section 1848(k)(3) of the Act, as identified by a unique TIN and NPI combination, and includes any of the following:

    (1) A physician.

    (2) A practitioner described in section 1842(b)(18)(C) of the Act.

    (3) A physical or occupational therapist or a qualified speech-language pathologist.

    (4) A qualified audiologist (as defined in section 1861(ll)(3)(B) of the Act).

    Episode payment model means an APM or other payer arrangement designed to improve the efficiency and quality of care for an episode of care by bundling payment for services furnished to an individual over a defined period of time for a specific clinical condition or conditions.

    Estimated aggregate payment amounts means the total payments to a QP for Medicare Part B covered professional services for the incentive payment base period, estimated by CMS as described in § 414.1450(b).

    Final score means a composite assessment (using a scoring scale of 0 to 100) for each MIPS eligible clinician for a performance period determined using the methodology for assessing the total performance of a MIPS eligible clinician according to performance standards for applicable measures and activities for each performance category. The final score is the sum of each of the products of each performance category score and each performance category's assigned weight, multiplied by 100.
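
    A minimal Python sketch of the final score arithmetic described in this definition, using the 2019 MIPS payment year category weights specified elsewhere in this subpart (quality 60 percent, cost 0 percent, improvement activities 15 percent, advancing care information 25 percent); the function name and the example category scores are hypothetical and this is an editorial illustration, not regulation text.

        # Final score = 100 x sum over categories of (category score x category weight),
        # with each performance category score expressed on a 0-1 scale.
        def final_score(category_scores, category_weights):
            return 100 * sum(category_scores[c] * category_weights[c] for c in category_weights)

        # 2019 MIPS payment year weights; the category scores below are hypothetical.
        weights = {"quality": 0.60, "cost": 0.00, "improvement_activities": 0.15, "advancing_care_information": 0.25}
        scores = {"quality": 0.80, "cost": 0.00, "improvement_activities": 1.00, "advancing_care_information": 0.90}
        print(final_score(scores, weights))  # 0.80*60 + 0 + 1.00*15 + 0.90*25 = 85.5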

    Group means a single TIN with two or more eligible clinicians (including at least one MIPS eligible clinician), as identified by their individual NPI, who have reassigned their billing rights to the TIN.

    Health Professional Shortage Areas (HPSA) means areas as designated under section 332(a)(1)(A) of the Public Health Service Act.

    High priority measure means an outcome, appropriate use, patient safety, efficiency, patient experience, or care coordination quality measure.

    Hospital-based MIPS eligible clinician means a MIPS eligible clinician who furnishes 75 percent or more of his or her covered professional services in sites of service identified by the Place of Service codes used in the HIPAA standard transaction as an inpatient hospital, on-campus outpatient hospital, or emergency room setting, based on claims for a period prior to the performance period as specified by CMS.

    Improvement activities means an activity that relevant MIPS eligible clinician organizations and other relevant stakeholders identify as improving clinical practice or care delivery and that the Secretary determines, when effectively executed, is likely to result in improved outcomes.

    Incentive payment base period means the calendar year prior to the year in which CMS disburses the APM Incentive Payment.

    Low-volume threshold means an individual MIPS eligible clinician or group who, during the low-volume threshold determination period, has Medicare Part B allowed charges less than or equal to $30,000 or provides care for 100 or fewer Part B-enrolled Medicare beneficiaries.
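
    A minimal Python sketch of the low-volume threshold test described in this definition; the function and variable names are hypothetical, and the inputs are assumed to be the allowed charges (in dollars) and the beneficiary count for the low-volume threshold determination period.

        # A clinician or group does not exceed the low-volume threshold if allowed
        # charges are at or below $30,000 OR 100 or fewer beneficiaries are seen.
        def at_or_below_low_volume_threshold(part_b_allowed_charges, part_b_beneficiaries):
            return part_b_allowed_charges <= 30_000 or part_b_beneficiaries <= 100

        print(at_or_below_low_volume_threshold(25_000, 350))  # True  (charges prong)
        print(at_or_below_low_volume_threshold(90_000, 80))   # True  (beneficiary prong)
        print(at_or_below_low_volume_threshold(90_000, 350))  # False (exceeds both prongs)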

    Meaningful EHR user for MIPS means a MIPS eligible clinician who possesses CEHRT, uses the functionality of CEHRT, and reports on applicable objectives and measures specified for the advancing care information performance category for a performance period in the form and manner specified by CMS, supports information exchange and the prevention of health information blocking, and engages in activities related to supporting providers with the performance of CEHRT.

    Measure benchmark means the level of performance that the MIPS eligible clinician is assessed on for a specific performance period at the measures and activities level.

    Medicaid APM means a payment arrangement authorized by a State Medicaid program that meets the criteria for an Other Payer Advanced APM under § 414.1420(a).

    Medical Home Model means an APM under section 1115A of the Act that is determined by CMS to have the following characteristics:

    (1) The APM has a primary care focus with participants that primarily include primary care practices or multispecialty practices that include primary care physicians and practitioners and offer primary care services. For the purposes of this provision, primary care focus means the inclusion of specific design elements related to eligible clinicians practicing under one or more of the following Physician Specialty Codes: 01 General Practice; 08 Family Medicine; 11 Internal Medicine; 16 Obstetrics and Gynecology; 37 Pediatric Medicine; 38 Geriatric Medicine; 50 Nurse Practitioner; 89 Clinical Nurse Specialist; and 97 Physician Assistant;

    (2) Empanelment of each patient to a primary clinician; and

    (3) At least four of the following:

    (i) Planned coordination of chronic and preventive care.

    (ii) Patient access and continuity of care.

    (iii) Risk-stratified care management.

    (iv) Coordination of care across the medical neighborhood.

    (v) Patient and caregiver engagement.

    (vi) Shared decision-making.

    (vii) Payment arrangements in addition to, or substituting for, fee-for-service payments (for example, shared savings or population-based payments).

    Medicaid Medical Home Model means a payment arrangement under title XIX that CMS determines to have the following characteristics:

    (1) The payment arrangement has a primary care focus with participants that primarily include primary care practices or multispecialty practices that include primary care physicians and practitioners and offer primary care services. For the purposes of this provision, primary care focus means the inclusion of specific design elements related to eligible clinicians practicing under one or more of the following Physician Specialty Codes: 01 General Practice; 08 Family Medicine; 11 Internal Medicine; 16 Obstetrics and Gynecology; 37 Pediatric Medicine; 38 Geriatric Medicine; 50 Nurse Practitioner; 89 Clinical Nurse Specialist; and 97 Physician Assistant;

    (2) Empanelment of each patient to a primary clinician; and

    (3) At least four of the following:

    (i) Planned coordination of chronic and preventive care.

    (ii) Patient access and continuity.

    (iii) Risk-stratified care management.

    (iv) Coordination of care across the medical neighborhood.

    (v) Patient and caregiver engagement.

    (vi) Shared decision-making.

    (vii) Payment arrangements in addition to, or substituting for, fee-for-service payments (for example, shared savings or population-based payments).

    Merit-based Incentive Payment System (MIPS) means the program required by section 1848(q) of the Act.

    MIPS APM means an APM that meets the criteria specified under § 414.1370(b).

    MIPS eligible clinician, as identified by a unique billing TIN and NPI combination used to assess performance, means any of the following (excluding those identified at § 414.1310(b)):

    (1) A physician as defined in section 1861(r) of the Act.

    (2) A physician assistant, a nurse practitioner, and clinical nurse specialist as such terms are defined in section 1861(aa)(5) of the Act.

    (3) A certified registered nurse anesthetist as defined in section 1861(bb)(2) of the Act.

    (4) A group that includes such clinicians.

    MIPS payment year means a calendar year in which the MIPS payment adjustment factor, and if applicable the additional MIPS payment adjustment factor, are applied to Medicare Part B payments.

    New Medicare-Enrolled MIPS eligible clinician means an eligible clinician who first becomes a Medicare-enrolled eligible clinician within the Provider Enrollment, Chain and Ownership System (PECOS) during the performance period for a year and had not previously submitted claims under Medicare as an individual, an entity, or a part of a physician group or under a different billing number or tax identifier.

    Non-patient facing MIPS eligible clinician means an individual MIPS eligible clinician that bills 100 or fewer patient facing encounters (including Medicare telehealth services defined in section 1834(m) of the Act) during the non-patient facing determination period, and a group provided that more than 75 percent of the NPIs billing under the group's TIN meet the definition of a non-patient facing individual MIPS eligible clinician during the non-patient facing determination period.
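
    A minimal Python sketch of the non-patient facing determination described in this definition; the names are hypothetical, and patient-facing encounter counts per NPI during the non-patient facing determination period are assumed to be available.

        # Individual: 100 or fewer patient-facing encounters during the determination period.
        def individual_non_patient_facing(patient_facing_encounters):
            return patient_facing_encounters <= 100

        # Group: more than 75 percent of the NPIs billing under the group's TIN
        # meet the individual definition.
        def group_non_patient_facing(encounters_by_npi):
            counts = list(encounters_by_npi.values())
            qualifying = sum(1 for n in counts if individual_non_patient_facing(n))
            return qualifying / len(counts) > 0.75

        # Hypothetical group of five NPIs: four of five (80 percent) qualify.
        print(group_non_patient_facing({"npi1": 40, "npi2": 90, "npi3": 70, "npi4": 80, "npi5": 500}))  # True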

    Other Payer Advanced APM means a payment arrangement that meets the criteria set forth in § 414.1420.

    Other payer arrangement means a payment arrangement with any payer that is not an APM.

    Partial Qualifying APM Participant (Partial QP) means an eligible clinician determined by CMS to have met the relevant Partial QP threshold under § 414.1430(a)(2) and (4) and (b)(2) and (4) for a year.

    Partial QP patient count threshold means the minimum threshold score specified in § 414.1430(a)(4) and (b)(4) that an eligible clinician must attain through a patient count methodology described in §§ 414.1435(b) and 414.1440(c) to become a Partial QP for a year.

    Partial QP payment amount threshold means the minimum threshold score specified in § 414.1430(a)(2) and (b)(2) that an eligible clinician must attain through a payment amount methodology described in §§ 414.1435(a) and 414.1440(b) to become a Partial QP for a year.

    Participation List means the list of participants in an APM Entity that is compiled from a CMS-maintained list.

    Performance category score means the assessment of each MIPS eligible clinician's performance on the applicable measures and activities for a performance category for a performance period based on the performance standards for those measures and activities.

    Performance standards means the level of performance and methodology that the MIPS eligible clinician is assessed on for a MIPS performance period at the measures and activities level for all MIPS performance categories.

    Performance threshold means the numerical threshold for a MIPS payment year against which the final scores of MIPS eligible clinicians are compared to determine the MIPS payment adjustment factors.

    QP patient count threshold means the minimum threshold score specified in § 414.1430(a)(3) and (b)(3) that an eligible clinician must attain through a patient count methodology described in §§ 414.1435(b) and 414.1440(c) to become a QP for a year.

    QP payment amount threshold means the minimum threshold score specified in § 414.1430(a)(1) and (b)(1) that an eligible clinician must attain through the payment amount methodology described in §§ 414.1435(a) and 414.1440(b) to become a QP for a year.

    QP Performance Period means the time period that CMS will use to assess the level of participation by an eligible clinician in Advanced APMs and Other Payer Advanced APMs for purposes of making a QP determination for the eligible clinician for the year as specified in § 414.1425. The QP Performance Period begins on January 1 and ends on August 31 of the calendar year that is 2 years prior to the payment year.

    Qualified Clinical Data Registry (QCDR) means a CMS-approved entity that has self-nominated and successfully completed a qualification process to determine whether the entity may collect medical or clinical data for the purpose of patient and disease tracking to foster improvement in the quality of care provided to patients.

    Qualified registry means a medical registry, a maintenance of certification program operated by a specialty body of the American Board of Medical Specialties or other data intermediary that, with respect to a particular performance period, has self-nominated and successfully completed a vetting process (as specified by CMS) to demonstrate its compliance with the MIPS qualification requirements specified by CMS for that performance period. The registry must have the requisite legal authority to submit MIPS data (as specified by CMS) on behalf of a MIPS eligible clinician or group to CMS.

    Qualifying APM Participant (QP) means an eligible clinician determined by CMS to have met or exceeded the relevant QP payment amount or QP patient count threshold under § 414.1430(a)(1), (a)(3), (b)(1), or (b)(3) for a year based on participation in an Advanced APM Entity.

    Rural areas means zip codes designated as rural, using the most recent HRSA Area Health Resource File data set available.

    Small practices means practices consisting of 15 or fewer clinicians and solo practitioners.

    Threshold Score means the percentage value that CMS determines for an eligible clinician based on the calculations described in § 414.1435 or § 414.1440.

    Topped out non-process measure means a measure where the Truncated Coefficient of Variation is less than 0.10 and the 75th and 90th percentiles are within 2 standard errors.

    Topped out process measure means a measure with a median performance rate of 95 percent or higher.

    § 414.1310 Applicability.

    (a) Program Implementation. Except as specified in paragraph (b) of this section, MIPS applies to payments for items and services furnished by MIPS eligible clinicians on or after January 1, 2019.

    (b) Exclusions. (1) For a year, a MIPS eligible clinician does not include an eligible clinician who:

    (i) Is a Qualifying APM Participant (as defined at § 414.1305);

    (ii) Is a Partial Qualifying APM Participant (as defined at § 414.1305) and does not report on applicable measures and activities that are required to be reported under MIPS for any given performance period in a year; or

    (iii) For the performance period with respect to a year, does not exceed the low-volume threshold as defined at § 414.1305.

    (2) Eligible clinicians, as defined at § 414.1305, who are not MIPS eligible clinicians, as defined at § 414.1305, have the option to voluntarily report measures and activities for MIPS.

    (c) Treatment of new Medicare-enrolled eligible clinicians. A new Medicare-enrolled eligible clinician, as defined at § 414.1305, will not be treated as a MIPS eligible clinician until the subsequent year and the performance period for such subsequent year.

    (d) Clarification. In no case will a MIPS payment adjustment apply to the items and services furnished during a year by individual eligible clinicians, as described in paragraphs (b) and (c) of this section, who are not MIPS eligible clinicians, including eligible clinicians who voluntarily report on applicable measures and activities specified under MIPS.

    (e) Requirements for groups. (1) Individual eligible clinicians and individual MIPS eligible clinicians may have their performance assessed as a group in the following way:

    (i) As part of a single TIN associated with two or more eligible clinicians (including at least one MIPS eligible clinician), as identified by a NPI, that have their Medicare billing rights reassigned to the TIN.

    (ii) [Reserved]

    (2) A group must meet the definition of a group at all times during the performance period for the MIPS payment year in order to have its performance assessed as a group.

    (3) Eligible clinicians and MIPS eligible clinicians within a group must aggregate their performance data across the TIN in order for their performance to be assessed as a group.

    (4) A group that elects to have its performance assessed as a group will be assessed as a group across all four MIPS performance categories.

    (5) A group must adhere to an election process established and required by CMS.

    § 414.1315 [Reserved]
    § 414.1320 MIPS performance period.

    (a) For purposes of the 2019 MIPS payment year, the performance period for all performance categories and submission mechanisms except for the cost performance category and data for the quality performance category reported through the CMS Web Interface, for the CAHPS for MIPS survey, and for the all-cause hospital readmission measure, is a minimum of a continuous 90-day period within CY 2017, up to and including the full CY 2017 (January 1, 2017 through December 31, 2017). For purposes of the 2019 MIPS payment year, for data reported through the CMS Web Interface or the CAHPS for MIPS survey and administrative claims-based cost and quality measures, the performance period under MIPS is CY 2017 (January 1, 2017 through December 31, 2017).

    (b) For purposes of the 2020 MIPS payment year, the performance period for:

    (1) The quality and cost performance categories is CY 2018 (January 1, 2018 through December 31, 2018).

    (2) The advancing care information and improvement activities performance categories is a minimum of a continuous 90-day period within CY 2018, up to and including the full CY 2018 (January 1, 2018 through December 31, 2018).

    § 414.1325 Data submission requirements.

    (a) Data submission performance categories. MIPS eligible clinicians and groups must submit measures, objectives, and activities for the quality, improvement activities, and advancing care information performance categories.

    (b) Data submission mechanisms for individual eligible clinicians. An individual MIPS eligible clinician may elect to submit their MIPS data using:

    (1) A qualified registry for the quality, improvement activities, or advancing care information performance categories;

    (2) The EHR submission mechanism (which includes submission of data by health IT vendors or other authorized providers on behalf of MIPS eligible clinicians) for the quality, improvement activities, or advancing care information performance categories;

    (3) A QCDR for the quality, improvement activities, or advancing care information performance categories;

    (4) Medicare Part B claims for the quality performance category; or

    (5) Attestation for the improvement activities and advancing care information performance categories.

    (c) Data submission mechanisms for groups that are not reporting through an APM. Groups may submit their MIPS data using:

    (1) A qualified registry for the quality, improvement activities, or advancing care information performance categories;

    (2) The EHR submission mechanism (which includes the submission of data by health IT vendors on behalf of groups) for the quality, improvement activities, or advancing care information performance categories;

    (3) A QCDR for the quality, improvement activities, or advancing care information performance categories;

    (4) A CMS Web Interface (for groups comprised of at least 25 MIPS eligible clinicians) for the quality, improvement activities, and advancing care information performance categories;

    (5) Attestation for the improvement activities and advancing care information performance categories; or

    (6) A CMS-approved survey vendor for groups that elect to include the CAHPS for MIPS survey as a quality measure. Groups that elect to include the CAHPS for MIPS survey as a quality measure must select one of the above data submission mechanisms to submit their other quality information.

    (d) Requirement to use only one submission mechanism per performance category. Except as described in paragraph (c)(6) of this section, MIPS eligible clinicians and groups may elect to submit information via multiple mechanisms; however, they must use the same identifier for all performance categories and they may only use one submission mechanism per performance category.

    (e) No data submission requirements for the cost performance category and certain quality measures. There are no data submission requirements for the cost performance category and for certain quality measures used to assess performance in the quality performance category. CMS will calculate performance on these measures using administrative claims data.

    (f) Data submission deadlines for all submission mechanisms for individual eligible clinicians and groups for all performance categories. The submission deadlines are:

    (1) For the qualified registry, QCDR, EHR, and attestation submission mechanisms, the deadline is March 31 following the close of the performance period.

    (2) For Medicare Part B claims, data must be submitted on claims with dates of service during the performance period that must be processed no later than 60 days following the close of the performance period.

    (3) For the CMS Web Interface, data must be submitted during an 8-week period following the close of the performance period. The period must begin no earlier than January 2 and end no later than March 31.

    § 414.1330 Quality performance category.

    (a) For purposes of assessing performance of MIPS eligible clinicians on the quality performance category, CMS will use:

    (1) Quality measures included in the MIPS final list of quality measures.

    (2) Quality measures used by QCDRs.

    (b) Subject to CMS's authority to reweight performance category weights under section 1848(q)(5)(F) of the Act, performance in the quality performance category will comprise:

    (1) 60 percent of a MIPS eligible clinician's final score for MIPS payment year 2019.

    (2) 50 percent of a MIPS eligible clinician's final score for MIPS payment year 2020.

    (3) 30 percent of a MIPS eligible clinician's final score for each MIPS payment year thereafter.

    § 414.1335 Data submission criteria for the quality performance category.

    (a) Criteria. A MIPS eligible clinician or group must submit data on MIPS quality measures in one of the following manners, as applicable:

    (1) Via claims, qualified registry, EHR or QCDR submission mechanism. For the performance period—

    (i) Submit data on at least six measures including at least one outcome measure. If an applicable outcome measure is not available, report one other high priority measure (appropriate use, patient safety, efficiency, patient experience, and care coordination measures). If fewer than six measures apply to the MIPS eligible clinician or group, report on each measure that is applicable.

    (ii) Subject to paragraph (a)(1)(i) of this section, MIPS eligible clinicians and groups may select their measures either from the complete MIPS final list of quality measures or from a subset of that list, the MIPS specialty-specific measure sets, as designated by CMS.

    (2) Via the CMS Web Interface—for groups only. For the 12-month performance period-

    (i) For a group of 25 or more MIPS eligible clinicians, report on all measures included in the CMS Web Interface. The group must report on the first 248 consecutively ranked beneficiaries in the sample for each measure or module.

    (ii) If the sample of eligible assigned beneficiaries is less than 248, then the group must report on 100 percent of assigned beneficiaries. In some instances, the sampling methodology will not be able to assign at least 248 patients on which a group may report, particularly for groups on the smaller end of the range of 25-99 MIPS eligible clinicians.

    (iii) The group is required to report on at least one measure for which there is Medicare patient data.

    (iv) Groups reporting via the CMS Web Interface are required to report on all of the measures in the set.

    (3) Via CMS-approved survey vendor for CAHPS for MIPS survey- for groups only. (i) For the 12-month performance period, a group that wishes to voluntarily elect to participate in the CAHPS for MIPS survey measures must use a survey vendor that is approved by CMS for a particular performance period to transmit survey measures data to CMS.

    (A) The CAHPS for MIPS survey counts as one measure toward the MIPS quality performance category and, as a patient experience measure, also fulfills the requirement to report at least one high priority measure in the absence of an applicable outcome measure.

    (B) Groups that elect this data submission mechanism must select an additional group data submission mechanism in order to meet the data submission criteria for the MIPS quality performance category.

    (ii) [Reserved]

    (b) [Reserved]
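
    A simplified Python sketch of the submission criteria in paragraph (a)(1)(i) of this section (at least six measures including an outcome measure, a substitute high priority measure when no outcome measure applies, or every applicable measure when fewer than six apply); the function name, the flag names, and the count-based check for the fewer-than-six case are illustrative assumptions, not regulation text.

        def meets_quality_submission_criteria(submitted, applicable):
            """Each measure is a dict with boolean 'outcome' and 'high_priority' flags."""
            if len(applicable) < 6:
                # Fewer than six measures apply: report each applicable measure.
                return len(submitted) == len(applicable)
            if len(submitted) < 6:
                return False
            if any(m["outcome"] for m in submitted):
                return True
            # No outcome measure reported: acceptable only if none was applicable
            # and at least one other high priority measure is reported instead.
            return (not any(m["outcome"] for m in applicable)
                    and any(m["high_priority"] for m in submitted))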

    § 414.1340 Data completeness criteria for the quality performance category.

    (a) MIPS eligible clinicians and groups submitting quality measures data using the QCDR, qualified registry, or EHR submission mechanism must submit data on:

    (1) At least 50 percent of the MIPS eligible clinician or group's patients that meet the measure's denominator criteria, regardless of payer for MIPS payment year 2019.

    (2) At least 60 percent of the MIPS eligible clinician or group's patients that meet the measure's denominator criteria, regardless of payer for MIPS payment year 2020.

    (b) MIPS eligible clinicians submitting quality measures data using Medicare Part B claims, must submit data on:

    (1) At least 50 percent of the applicable Medicare Part B patients seen during the performance period to which the measure applies for MIPS payment year 2019.

    (2) At least 60 percent of the applicable Medicare Part B patients seen during the performance period to which the measure applies for MIPS payment year 2020.

    (c) Groups submitting quality measures data using the CMS Web Interface or a CMS-approved survey vendor to submit the CAHPS for MIPS survey must meet the data submission requirement on the sample of the Medicare Part B patients CMS provides.
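
    A minimal Python sketch of the data completeness test in paragraphs (a) and (b) of this section (50 percent for the 2019 MIPS payment year and 60 percent for the 2020 MIPS payment year); the names and example counts are hypothetical.

        # The fraction of denominator-eligible patients (or applicable Medicare Part B
        # patients, for claims) on which data are submitted must meet the threshold.
        def meets_data_completeness(reported, denominator_eligible, payment_year):
            threshold = {2019: 0.50, 2020: 0.60}[payment_year]
            return reported / denominator_eligible >= threshold

        print(meets_data_completeness(55, 100, 2019))  # True  (55 percent >= 50 percent)
        print(meets_data_completeness(55, 100, 2020))  # False (55 percent < 60 percent)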

    § 414.1350 Cost performance category.

    (a) For purposes of assessing performance of MIPS eligible clinicians on the cost performance category, CMS specifies cost measures for a performance period.

    (b) Subject to CMS's authority to reweight performance category weights under section 1848(q)(5)(F) of the Act, performance in the cost performance category comprises:

    (1) 0 percent of a MIPS eligible clinician's final score for MIPS payment year 2019.

    (2) 10 percent of a MIPS eligible clinician's final score for MIPS payment year 2020.

    (3) 30 percent of a MIPS eligible clinician's final score for each MIPS payment year thereafter.

    § 414.1355 Improvement activities performance category.

    (a) For purposes of assessing performance of MIPS eligible clinicians on the improvement activities performance category, CMS specifies an inventory of measures and activities for a performance period.

    (b) Subject to CMS's authority to reweight performance category weights under section 1848(q)(5)(F) of the Act, performance in the improvement activities performance category comprises:

    (1) 15 percent of a MIPS eligible clinician's final score for MIPS payment year 2019 and for each MIPS payment year thereafter.

    (2) [Reserved].

    (c) For purposes of assessing performance of MIPS eligible clinicians on the improvement activities performance category, CMS uses activities included in the improvement activities inventory established by CMS through rulemaking.

    § 414.1360 Data submission criteria for the improvement activities performance category.

    (a) MIPS eligible clinicians must submit data on MIPS improvement activities in one of the following manners:

    (1) Via qualified registry, EHR submission mechanisms, QCDR, CMS Web Interface or Attestation. For activities that are performed for at least a continuous 90-day period during the performance period, MIPS eligible clinicians must—

    (i) Submit a yes response for activities within the improvement activities inventory.

    (ii) [Reserved]

    (2) [Reserved]

    (b) [Reserved]

    § 414.1365 Subcategories for the improvement activities performance category.

    (a) The following is the list of subcategories, each of which, with the exception of Participation in an APM, includes activities for selection by a MIPS eligible clinician or group:

    (1) Expanded practice access, such as same day appointments for urgent needs and after-hours access to clinician advice.

    (2) Population management, such as monitoring health conditions of individuals to provide timely health care interventions or participation in a QCDR.

    (3) Care coordination, such as timely communication of test results, timely exchange of clinical information to patients or other clinicians, and use of remote monitoring or telehealth.

    (4) Beneficiary engagement, such as the establishment of care plans for individuals with complex care needs, beneficiary self-management assessment and training, and using shared decision-making mechanisms.

    (5) Patient safety and practice assessment, such as through the use of clinical or surgical checklists and practice assessments related to maintaining certification.

    (6) Participation in an APM.

    (7) Achieving health equity, such as for MIPS eligible clinicians that achieve high quality for underserved populations, including persons with behavioral health conditions, racial and ethnic minorities, sexual and gender minorities, people with disabilities, people living in rural areas, and people in geographic HPSAs.

    (8) Emergency preparedness and response, such as measuring MIPS eligible clinician participation in the Medical Reserve Corps, measuring registration in the Emergency System for Advance Registration of Volunteer Health Professionals, measuring relevant reserve and active duty uniformed services MIPS eligible clinician activities, and measuring MIPS eligible clinician volunteer participation in domestic or international humanitarian medical relief work.

    (9) Integrated behavioral and mental health, such as measuring or evaluating such practices as: Co-location of behavioral health and primary care services; shared/integrated behavioral health and primary care records; cross-training of MIPS eligible clinicians, and integrating behavioral health with primary care to address substance use disorders or other behavioral health conditions, as well as integrating mental health with primary care.

    (b) [Reserved]

    § 414.1370 APM scoring standard under MIPS.

    (a) General. The APM scoring standard is the MIPS scoring methodology applicable for MIPS eligible clinicians identified on the Participation List for the performance period of an APM Entity participating in a MIPS APM.

    (b) Criteria for MIPS APMs. MIPS APMs are those in which:

    (1) APM Entities participate in the APM under an agreement with CMS or through a law or regulation;

    (2) The APM is designed such that APM Entities participating in the APM include at least one MIPS eligible clinician on a Participation List;

    (3) The APM bases payment on cost/utilization and quality measures; and

    (4) The APM is not either of the following:

    (i) New APMs. An APM for which the first performance year begins after the first day of the MIPS performance period for the year.

    (ii) APM in final year of operation for which the APM scoring standard is impracticable. An APM in the final year of operation for which CMS determines, within 60 days after the beginning of the MIPS performance period for the year, that it is impracticable for APM Entity groups to report to MIPS using the APM scoring standard.

    (c) APM scoring standard performance period. The MIPS performance period under § 414.1320 applies for the APM scoring standard.

    (d) APM participant identifier. The APM participant identifier for an eligible clinician is the combination of four identifiers:

    (1) APM identifier (established for the APM by CMS);

    (2) APM Entity identifier (established for the APM Entity by CMS);

    (3) Medicare-enrolled billing TIN; and

    (4) Eligible clinician NPI.

    (e) APM Entity group determination. The APM Entity group is determined in the manner prescribed in § 414.1425(b)(1).

    (f) APM Entity group scoring under the APM scoring standard. The MIPS final score calculated for the APM Entity group is applied to each MIPS eligible clinician in the APM Entity group. The MIPS payment adjustment is applied at the TIN/NPI level for each of the MIPS eligible clinicians in the APM Entity group. In the event that a Shared Savings Program ACO does not report quality measures as required by the Shared Savings Program, the ACO participant TINs will each be considered a unique APM Entity for purposes of the APM scoring standard.

    (g) MIPS performance category scoring under the APM scoring standard—(1) Quality—(i) MIPS APMs that require APM Entities to submit quality data using the CMS Web Interface. The MIPS performance category score for quality for a performance period will be calculated for the APM Entity group using the data submitted for the APM Entity through the CMS Web Interface according to the terms of the APM. In the event that a Shared Savings Program ACO does not report on quality measures as required by the Shared Savings Program, the ACO participant TINs must report data for the MIPS quality performance category according to the MIPS submission and reporting requirements.

    (ii) [Reserved]

    (2) Cost. The cost performance category weight is zero percent for APM Entity groups in MIPS APMs.

    (3) Improvement activities. (i) CMS assigns an improvement activities score for each MIPS APM for a performance period based on the requirements of the MIPS APM. The assigned improvement activities score applies to each APM Entity group in the MIPS APM for the performance year. In the event that the assigned score does not represent the maximum improvement activities score, APM Entities may report additional activities.

    (ii) [Reserved]

    (4) Advancing care information. (i) For APM Entity groups in the Shared Savings Program, each ACO participant TIN submits data on the advancing care information performance category as specified in § 414.1375(b) and performance on the advancing care information performance category is assessed for the APM Entity group by calculating the weighted mean of the TIN level scores, weighted based on the number of MIPS eligible clinicians in the TINs as compared to the total number of MIPS eligible clinicians in the APM Entity group.

    (ii) For APM Entity groups in MIPS APMs other than the Shared Savings Program, CMS uses one score for each MIPS eligible clinician in the APM Entity group to derive a single average APM Entity group score for advancing care information. The score for each MIPS eligible clinician is the higher of either:

    (A) A group score based on the measure data for the advancing care information performance category reported by a TIN for the MIPS eligible clinician according to the MIPS submission and reporting requirements for groups; or

    (B) An individual score based on the measure data for the advancing care information performance category reported by the MIPS eligible clinician according to the MIPS submission and reporting requirements for individuals.
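
    A minimal Python sketch of the advancing care information scoring described in paragraph (g)(4) of this section: a clinician-count-weighted mean of TIN-level scores for Shared Savings Program APM Entity groups, and an average of the higher of the group or individual score for each clinician in other MIPS APMs; the names and example scores are hypothetical.

        # Shared Savings Program: weighted mean of TIN scores, weighted by the number
        # of MIPS eligible clinicians in each TIN.
        def ssp_aci_score(tin_scores, tin_clinician_counts):
            total_clinicians = sum(tin_clinician_counts.values())
            return sum(tin_scores[t] * tin_clinician_counts[t] for t in tin_scores) / total_clinicians

        # Other MIPS APMs: each clinician contributes the higher of a group-reported
        # or individually reported score; the entity score is the average of those.
        def other_mips_apm_aci_score(group_scores, individual_scores):
            per_clinician = [max(g, i) for g, i in zip(group_scores, individual_scores)]
            return sum(per_clinician) / len(per_clinician)

        print(ssp_aci_score({"tin_a": 80, "tin_b": 60}, {"tin_a": 30, "tin_b": 10}))  # 75.0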

    (h) APM scoring standard performance category weights. The performance category weights used to calculate the final score for an APM Entity group are:

    (1) Quality. (i) For the Shared Savings Program and other MIPS APMs that require APM Entities to submit quality data through the CMS Web Interface: 50 percent.

    (ii) For 2017, for MIPS APMs that do not require APM Entities to submit quality data through the CMS Web Interface: 0 percent.

    (2) Cost. 0 percent.

    (3) Improvement activities. (i) For the Shared Savings Program and other MIPS APMs that require APM Entities to submit quality data through the CMS Web Interface: 20 percent.

    (ii) For 2017, for MIPS APMs that do not require APM Entities to submit quality data through the CMS Web Interface: 25 percent.

    (4) Advancing care information. (i) For the Shared Savings Program and other MIPS APMs that require APM Entities to submit quality data through the CMS Web Interface: 30 percent.

    (ii) For 2017, for MIPS APMs that do not require APM Entities to submit quality data through the CMS Web Interface: 75 percent.

    § 414.1375 Advancing care information performance category.

    (a) Final score. Subject to CMS's authority to reweight performance category weights under section 1848(q)(5)(E)(ii) and (q)(5)(F) of the Act, performance in the advancing care information performance category will comprise 25 percent of a MIPS eligible clinician's final score for MIPS payment year 2019 and each MIPS payment year thereafter.

    (b) Reporting for the advancing care information performance category: To earn a performance category score for the advancing care information performance category for inclusion in the final score, a MIPS eligible clinician must:

    (1) CEHRT. Use CEHRT as defined at § 414.1305 for the performance period;

    (2) Report MIPS—advancing care information objectives and measures. Report on the objectives and associated measures as specified by CMS for the advancing care information performance category for the performance period as follows:

    (i) Report the numerator (of at least one) and denominator, or yes/no statement as applicable, for each required measure; or

    (ii) Report a null value for each required measure that includes a null value as an acceptable result in the measure specification.

    (3) Support information exchange and the prevention of health information blocking, and engage in activities related to supporting providers with the performance of CEHRT. (i) Supporting providers with the performance of CEHRT (SPPC). To engage in activities related to supporting providers with the performance of CEHRT, the MIPS eligible clinician—

    (A) Must attest that he or she:

    (1) Acknowledges the requirement to cooperate in good faith with ONC direct review of his or her health information technology certified under the ONC Health IT Certification Program if a request to assist in ONC direct review is received; and

    (2) If requested, cooperated in good faith with ONC direct review of his or her health information technology certified under the ONC Health IT Certification Program as authorized by 45 CFR part 170, subpart E, to the extent that such technology meets (or can be used to meet) the definition of CEHRT, including by permitting timely access to such technology and demonstrating its capabilities as implemented and used by the MIPS eligible clinician in the field.

    (B) Optionally, may also attest that he or she:

    (1) Acknowledges the option to cooperate in good faith with ONC-ACB surveillance of his or her health information technology certified under the ONC Health IT Certification Program if a request to assist in ONC-ACB surveillance is received; and

    (2) If requested, cooperated in good faith with ONC-ACB surveillance of his or her health information technology certified under the ONC Health IT Certification Program as authorized by 45 CFR part 170, subpart E, to the extent that such technology meets (or can be used to meet) the definition of CEHRT, including by permitting timely access to such technology and demonstrating its capabilities as implemented and used by the MIPS eligible clinician in the field.

    (ii) Support for health information exchange and the prevention of information blocking. The MIPS eligible clinician must attest to CMS that he or she—

    (A) Did not knowingly and willfully take action (such as to disable functionality) to limit or restrict the compatibility or interoperability of certified EHR technology.

    (B) Implemented technologies, standards, policies, practices, and agreements reasonably calculated to ensure, to the greatest extent practicable and permitted by law, that the certified EHR technology was, at all relevant times—

    (1) Connected in accordance with applicable law;

    (2) Compliant with all standards applicable to the exchange of information, including the standards, implementation specifications, and certification criteria adopted at 45 CFR part 170;

    (3) Implemented in a manner that allowed for timely access by patients to their electronic health information; and

    (4) Implemented in a manner that allowed for the timely, secure, and trusted bi-directional exchange of structured electronic health information with other health care providers (as defined by 42 U.S.C. 300jj(3)), including unaffiliated providers, and with disparate certified EHR technology and health IT vendors.

    (C) Responded in good faith and in a timely manner to requests to retrieve or exchange electronic health information, including from patients, health care providers (as defined by 42 U.S.C. 300jj(3)), and other persons, regardless of the requestor's affiliation or technology vendor.

    § 414.1380 Scoring.

    (a) General. MIPS eligible clinicians are scored under MIPS based on their performance on measures and activities in four performance categories. MIPS eligible clinicians are scored against performance standards for each performance category and receive a final score, composed of their scores on individual measures and activities, and calculated according to the final score methodology.

    (1) Measures and activities in the four performance categories are scored against performance standards.

    (i) For the quality performance category, measures are scored between zero and 10 points. Performance is measured against benchmarks. Bonus points are available for both submitting specific types of measures and submitting measures using end-to-end electronic reporting.

    (ii) For the cost performance category, measures are scored between one and 10 points. Performance is measured against a benchmark.

    (iii) For the improvement activities performance category, each improvement activity is worth a certain number of points. The points for each reported activity are summed and scored against a total potential performance category score of 40 points.

    (iv) For the advancing care information performance category, the performance category score is the sum of a base score, performance score, and bonus score.

    (2) [Reserved]

    (b) Performance categories. MIPS eligible clinicians are scored under MIPS in four performance categories.

    (1) Quality performance category. For the 2017 performance period, MIPS eligible clinicians receive three to ten achievement points for each scored quality measure in the quality performance category based on the MIPS eligible clinician's performance compared to measure benchmarks. A MIPS quality measure must have a measure benchmark to be scored based on performance. MIPS quality measures that do not have a benchmark will not be scored based on performance. Instead, these measures will receive 3 points for the 2017 performance period.

    (i) Measure benchmarks are based on historical performance for the measure based on a baseline period. Each benchmark must have a minimum of 20 individual clinicians or groups who reported the measure meeting the data completeness requirement and minimum case size criteria and performance greater than zero. We will restrict the benchmarks to data from MIPS eligible clinicians and comparable APM data, including data from QPs and Partial QPs.

    (ii) As an exception, if there is no comparable data from the baseline period, CMS would use information from the performance period to create measure benchmarks, which would not be published until after the performance period. For the 2017 performance period, CMS would use information from CY 2017 during which MIPS eligible clinicians may report for a minimum of any continuous 90-day period.

    (A) CMS Web Interface submission uses benchmarks from the corresponding reporting year of the Shared Savings Program.

    (B) [Reserved]

    (iii) Separate benchmarks are used for the following submission mechanisms:

    (A) EHR submission options;

    (B) QCDR and qualified registry submission options;

    (C) Claims submission options;

    (D) CMS Web Interface submission options;

    (E) CMS-approved survey vendor for CAHPS for MIPS submission options; and

    (F) Administrative claims submission options.

    (iv) Minimum case requirements for quality measures are 20 cases, unless a measure is subject to an exception.

    (v) As an exception, the minimum case requirement for the all-cause hospital readmission measure is 200 cases.

    (vi) MIPS eligible clinicians failing to report a measure required under this category receive zero points for that measure.

    (vii) MIPS eligible clinicians do not receive zero points if the expected measure is submitted but is unable to be scored because it does not meet the required case minimum or if the measure does not have a measure benchmark for MIPS payment year 2019. Instead, these measures as well as measures that are below the data completeness requirement receive a score of 3 points in MIPS payment year 2019.

    (viii) As an exception, administrative claims-based measures and CMS Web Interface measures will not be scored if they do not meet the required case minimum; for CMS Web Interface measures, we will recognize that the measure was submitted but exclude it from scoring. CMS Web Interface measures that do not have a measure benchmark will also not be scored, although we will recognize that the measure was submitted, and CMS Web Interface measures that are below the data completeness requirement receive 0 points.

    (ix) Measures submitted by MIPS eligible clinicians are scored using a percentile distribution, separated by decile categories.

    (x) For each set of benchmarks, CMS calculates the decile breaks for measure performance and assigns points based on which benchmark decile range the MIPS eligible clinician's measure rate is between.

    (xi) CMS assigns partial points based on the percentile distribution.

    (xii) MIPS eligible clinicians are required to submit measures consistent with § 414.1335.

    (xiii) Bonus points are available for measures determined to be high priority measures when two or more high priority measures are reported.

    (A) Bonus points are not available for the first reported high priority measure, which is required to be reported. To qualify for bonus points, each measure must be reported with sufficient case volume to meet the required case minimum and the required data completeness criteria, and must not have a zero percent performance rate, regardless of whether it is included in the calculation of the quality performance category score.

    (B) Outcome and patient experience measures receive two bonus points.

    (C) Other high priority measures receive one bonus point.

    (D) Bonus points for high priority measures cannot exceed 10 percent of the total possible points for MIPS payment year 2019 and 2020.

    (xiv) One bonus point is also available for each measure submitted with end-to-end electronic reporting for a quality measure under certain criteria determined by the Secretary. Bonus points cannot exceed 10 percent of the total possible points for MIPS payment year 2019 and 2020.

    (xv) A MIPS eligible clinician's quality performance category score is the sum of all the points assigned for the measures required for the quality performance category criteria plus the bonus points in paragraph (b)(1)(xiii) of this section and bonus points in paragraph (b)(1)(xiv) of this section. The sum is divided by the sum of total possible points. The quality performance category score cannot exceed the total possible points for the quality performance category.
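
    A minimal Python sketch of the quality performance category score described in paragraph (b)(1)(xv) of this section, with the high priority and end-to-end electronic reporting bonus pools each capped at 10 percent of total possible points per paragraphs (b)(1)(xiii)(D) and (xiv); the names and example point values are hypothetical.

        def quality_category_score(achievement_points, high_priority_bonus, e2e_bonus, total_possible):
            # Each bonus pool is capped at 10 percent of the total possible points
            # for the 2019 and 2020 MIPS payment years.
            bonus = (min(high_priority_bonus, 0.10 * total_possible)
                     + min(e2e_bonus, 0.10 * total_possible))
            score = (sum(achievement_points) + bonus) / total_possible
            return min(score, 1.0)  # cannot exceed the total possible points

        # Hypothetical: six measures worth up to 10 points each (60 total possible),
        # 2 high priority bonus points and 3 end-to-end reporting bonus points.
        print(quality_category_score([10, 7.5, 8.2, 3, 9.1, 6], 2, 3, 60))  # (43.8 + 5) / 60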

    (2) Cost performance category. A MIPS eligible clinician receives one to ten achievement points for each cost measure attributed to the MIPS eligible clinician based on the MIPS eligible clinician's performance compared to the measure benchmark.

    (i) Cost measure benchmarks are based on the performance period. Cost measures must have a benchmark to be scored.

    (ii) A MIPS eligible clinician must meet the minimum case volume specified by CMS to be scored on a cost measure.

    (iii) A MIPS eligible clinician's cost performance category score is the equally-weighted average of all scored cost measures.
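
    A minimal Python sketch of the cost performance category score described in paragraph (b)(2)(iii) of this section, assuming each scored cost measure is worth up to 10 achievement points; the names and example values are hypothetical.

        # Equally weighted average of all scored cost measures, expressed on a 0-1 scale.
        def cost_category_score(measure_points, max_points_per_measure=10):
            if not measure_points:  # no cost measure could be scored
                return None
            return sum(measure_points) / (len(measure_points) * max_points_per_measure)

        print(cost_category_score([8.4, 6.0]))  # (8.4 + 6.0) / 20 = 0.72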

    (3) Improvement activities performance category. MIPS eligible clinicians and groups receive points for improvement activities based on patient-centered medical home or comparable specialty practice participation, APM participation, and improvement activities reported by the MIPS eligible clinician in comparison to the highest potential score (40 points) for a given MIPS year.

    (i) CMS assigns credit for the total possible category score for each reported improvement activity based on two weights: Medium-weighted; and high-weighted activities.

    (ii) Improvement activities with a high weighting receive credit for 20 points, toward the total possible category score.

    (iii) Improvement activities with a medium weighting receive credit for 10 points toward the total possible category score.

    (iv) A MIPS eligible clinician or group in a practice that is certified as a patient-centered medical home or comparable specialty practice, as determined by the Secretary, receives full credit for performance on the improvement activities performance category. For purposes of this paragraph (b)(3)(iv), “full credit” means that the MIPS eligible clinician or group has met the highest potential score of 40 points. A practice is certified as a patient-centered medical home if it meets any of the following criteria:

    (A) The practice has received accreditation from one of four accreditation organizations that are nationally recognized;

    (1) The Accreditation Association for Ambulatory Health Care;

    (2) The National Committee for Quality Assurance (NCQA);

    (3) The Joint Commission; or

    (4) The Utilization Review Accreditation Commission (URAC).

    (B) The practice is participating in a Medicaid Medical Home Model or Medical Home Model.

    (C) The practice is a comparable specialty practice that has received the NCQA Patient-Centered Specialty Recognition.

    (D) The practice has received accreditation from other certifying bodies that have certified a large number of medical organizations and meet national guidelines, as determined by the Secretary. The Secretary must determine that these certifying bodies have 500 or more certified member practices and require practices to do the following:

    (1) Have a personal physician/clinician in a team-based practice.

    (2) Have a whole-person orientation.

    (3) Provide coordinated or integrated care.

    (4) Focus on quality and safety.

    (5) Provide enhanced access.

    (v) CMS compares the points associated with the reported activities against the highest potential category score of 40 points.

    (vi) A MIPS eligible clinician or group's improvement activities category score is the sum of points for all reported activities, capped at 40 points, divided by the highest potential category score of 40 points (see the sketch following paragraph (b)(3)(ix) of this section).

    (vii) Non-patient facing MIPS eligible clinicians and groups, small practices, and practices located in rural areas and geographic HPSAs receive full credit for improvement activities by selecting one high-weighted improvement activity or two medium-weighted improvement activities. Non-patient facing MIPS eligible clinicians and groups, small practices, and practices located in rural areas and geographic HPSAs receive half credit for improvement activities by selecting one medium-weighted improvement activity.

    (viii) To receive full credit as a certified patient-centered medical home or comparable specialty practice, the reporting TIN must include at least one practice that is a certified patient-centered medical home or comparable specialty practice.

    (ix) MIPS eligible clinicians participating in APMs that are not patient-centered medical homes for a performance period shall earn a minimum score of one-half of the highest potential score for the improvement activities performance category.
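    A minimal sketch of the improvement activities scoring in paragraphs (b)(3)(i) through (vi), covering only the activity weightings and the patient-centered medical home full-credit rule; the special scoring for non-patient facing clinicians, small practices, rural and HPSA practices, and APM participants in paragraphs (b)(3)(vii) and (ix) is omitted, and the function name is hypothetical.

```python
HIGH, MEDIUM, CAP = 20, 10, 40   # activity weights and the 40-point highest potential category score

def improvement_activities_score(n_high, n_medium, certified_pcmh=False):
    # Illustrative only: a certified patient-centered medical home or comparable
    # specialty practice receives full credit per paragraph (b)(3)(iv); otherwise
    # weighted activity points are capped at 40 and divided by 40 per (b)(3)(vi).
    if certified_pcmh:
        return 1.0
    return min(n_high * HIGH + n_medium * MEDIUM, CAP) / CAP

print(improvement_activities_score(1, 2))   # 20 + 20 = 40 -> 1.0
print(improvement_activities_score(0, 1))   # 10 / 40 -> 0.25
```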

    (4) Advancing care information performance category. (i) A MIPS eligible clinician's advancing care information performance category score equals the sum of the base score, performance score, Public Health and Clinical Data Registry bonus score and completing improvement activities using CEHRT bonus score. The advancing care information performance category score will not exceed 100 percentage points.

    (A) A MIPS eligible clinician earns a base score by reporting the numerator (of at least one) and denominator, or a yes/no statement or null value, as applicable, for each required measure.

    (B) A MIPS eligible clinician earns a performance score by reporting on certain measures specified by CMS. MIPS eligible clinicians may earn up to 10 or 20 percentage points as specified by CMS for each measure reported for the performance score.

    (C) A MIPS eligible clinician earns a bonus of five percentage points for reporting any measures beyond the Immunization Registry Reporting measure for the Public Health and Clinical Data Registry objective.

    (D) A MIPS eligible clinician earns a bonus of 10 percentage points for attesting to completing one or more improvement activities specified by CMS using CEHRT.

    (ii) [Reserved]
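    A minimal sketch of the advancing care information arithmetic in paragraph (b)(4)(i), assuming the base and performance scores have already been computed as percentage points; the function name and the example split of points are hypothetical.

```python
def aci_category_score(base, performance, registry_bonus=False, cehrt_ia_bonus=False):
    # Illustrative only: base score + performance score + the 5 and 10 percentage
    # point bonuses in paragraphs (b)(4)(i)(C) and (D), capped at 100 percentage points.
    total = base + performance
    if registry_bonus:
        total += 5      # additional Public Health and Clinical Data Registry reporting
    if cehrt_ia_bonus:
        total += 10     # improvement activity completed using CEHRT
    return min(total, 100)

print(aci_category_score(base=50, performance=47, registry_bonus=True, cehrt_ia_bonus=True))  # -> 100
```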

    (c) Final score calculation. Each MIPS eligible clinician receives a final score of 0 to 100 points equal to the sum of the products of each performance category score and that category's assigned weight, multiplied by 100 (see the sketch following paragraph (c)(2) of this section).

    (1) Performance category weights. Subject to CMS's authority to reweight performance category weights under section 1848(q)(5)(F) of the Act:

    (i) Quality performance category weight is defined under § 414.1330(b).

    (ii) Cost performance category weight is defined under § 414.1350(b).

    (iii) Improvement activities performance category weight is defined under § 414.1355(b).

    (iv) Advancing care information performance category weight is defined under § 414.1375(a).

    (2) Reweighting the performance categories. If CMS determines there are not sufficient measures and activities applicable and available to MIPS eligible clinicians, CMS will assign weights to the performance categories that are different from the weights specified in § 414.1380(c)(1).
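    A minimal sketch of the paragraph (c) final score computation, assuming category scores expressed as fractions of their total possible points and weights that sum to 1; the example weights are assumptions for illustration, not the weights defined in the sections referenced in paragraph (c)(1).

```python
def final_score(category_scores, category_weights):
    # Illustrative only: weighted sum per paragraph (c); category scores are
    # fractions of total possible points and the weights sum to 1.
    return 100 * sum(category_scores[c] * category_weights[c] for c in category_scores)

scores  = {"quality": 0.95, "cost": 0.0, "improvement_activities": 1.0, "aci": 1.0}
weights = {"quality": 0.60, "cost": 0.0, "improvement_activities": 0.15, "aci": 0.25}  # example weights only
print(final_score(scores, weights))  # -> 97.0 (up to floating point rounding)
```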

    (d) Scoring for APM entities. MIPS eligible clinicians in APM Entities that are subject to the APM scoring standard are scored using the methodology under § 414.1370.

    § 414.1385 Targeted review and review limitations.

    (a) Targeted review. MIPS eligible clinicians or groups may request a targeted review of the calculation of the MIPS payment adjustment factor under section 1848(q)(6)(A) of the Act and, as applicable, the calculation of the additional MIPS payment adjustment factor under section 1848(q)(6)(C) of the Act applicable to such MIPS eligible clinician or group for a year. The process for targeted reviews is:

    (1) MIPS eligible clinicians and groups have a 60-day period to submit a request for targeted review, which begins on the day CMS makes available the MIPS payment adjustment factor, and if applicable the additional MIPS payment adjustment factor, for the MIPS payment year and ends on September 30 of the year prior to the MIPS payment year or a later date specified by CMS.

    (2) CMS will respond to each request for targeted review timely submitted and determine whether a targeted review is warranted.

    (3) The MIPS eligible clinician or group may include additional information in support of their request for targeted review at the time the request is submitted. If CMS requests additional information from the MIPS eligible clinician or group, it must be provided and received by CMS within 30 days of the request. Non-responsiveness to the request for additional information may result in the closure of the targeted review request, although the MIPS eligible clinician or group may submit another request for targeted review before the deadline.

    (4) Decisions based on the targeted review are final, and there is no further review or appeal.

    (b) Limitations on review. Except as specified in paragraph (a)(4) of this section, there is no administrative or judicial review under section 1869 or 1879 of the Act, or otherwise of—

    (1) The methodology used to determine the amount of the MIPS payment adjustment factor and the amount of the additional MIPS payment adjustment factor and the determination of such amounts;

    (2) The establishment of the performance standards and the performance period;

    (3) The identification of measures and activities specified for a MIPS performance category and information made public or posted on the Physician Compare Internet Web site of the CMS; and

    (4) The methodology developed that is used to calculate performance scores and the calculation of such scores, including the weighting of measures and activities under such methodology.

    § 414.1390 Data validation and auditing.

    (a) General. CMS will selectively audit MIPS eligible clinicians and groups on a yearly basis. If a MIPS eligible clinician or group is selected for audit, the MIPS eligible clinician or group will be required to do the following in accordance with applicable law and timelines CMS establishes:

    (1) Comply with data sharing requests, providing all data as requested by CMS or our designated entity. All data must be shared with CMS or our designated entity within 45 days of the data sharing request, or an alternate timeframe that is agreed to by CMS and the MIPS eligible clinician or group. Data will be submitted via email, facsimile, or an electronic method via a secure Web site maintained by CMS.

    (2) Provide substantive, primary source documents as requested. These documents may include: Copies of claims, medical records for applicable patients, or other resources used in the data calculations for MIPS measures, objectives, and activities. Primary source documentation also may include verification of records for Medicare and non-Medicare beneficiaries where applicable.

    (b) [Reserved]

    § 414.1395 Public reporting.

    (a) Public reporting of a MIPS eligible clinician's MIPS data. For each program year, CMS will post on a public Web site, in an easily understandable format, information regarding the performance of MIPS eligible clinicians or groups under the MIPS.

    (b) [Reserved]

    § 414.1400 Third party data submission.

    (a) General. (1) MIPS data may be submitted by third party intermediaries on behalf of a MIPS eligible clinician or group by:

    (i) A qualified registry;

    (ii) A QCDR;

    (iii) A health IT vendor or other authorized third party that obtains data from a MIPS eligible clinician's CEHRT; or

    (iv) A CMS-approved survey vendor.

    (2) Qualified registries, QCDRs, and health IT vendors or other authorized third parties may submit data on measures, activities, or objectives for any of the following MIPS performance categories:

    (i) Quality;

    (ii) Improvement activities; or

    (iii) Advancing care information, if the MIPS eligible clinician or group is using CEHRT.

    (3) CMS-approved survey vendors may submit data for the CAHPS for MIPS survey under the MIPS quality performance category.

    (4) Third party intermediaries must meet all the criteria specified by CMS to qualify and be approved as a third party intermediary for purposes of MIPS, including, but not limited to, the following criteria:

    (i) For measures, activities, and objectives under the quality, advancing care information, and improvement activities performance categories, if the data is derived from CEHRT, the QCDR, qualified registry, or health IT vendor must be able to indicate its data source.

    (ii) All submitted data must be submitted in the form and manner specified by CMS.

    (b) QCDR self-nomination criteria. QCDRs must self-nominate, for the 2017 performance period, from November 15, 2016 until January 15, 2017. For future years of the program, starting with the 2018 performance period, QCDRs must self-nominate from September 1 of the prior year until November 1 of the prior year. Entities that desire to qualify as a QCDR for the purposes of MIPS for a given performance period will need to self-nominate for that performance period and provide all information requested by CMS at the time of self-nomination. Having qualified as a QCDR does not automatically qualify the entity to participate in subsequent MIPS performance periods.

    (c) Establishment of a QCDR entity. For an entity to become qualified for a given performance period as a QCDR, the entity must:

    (1) Be in existence as of January 1 of the performance period for which the entity seeks to become a QCDR.

    (2) Have at least 25 participants by January 1 of the performance period.

    (d) Collaboration of entities to become a QCDR. In situations where an entity may not meet the criteria of a QCDR solely on its own but can do so in conjunction with another entity, the entity must also comply with the following:

    (1) An entity that uses an external organization for purposes of data collection, calculation, or transmission may meet the definition of a QCDR as long as the entity has a signed, written agreement that specifically details the relationship and responsibilities of the entity with the external organization, effective as of September 1 of the year prior to the year for which the entity seeks to become a QCDR.

    (2) [Reserved]

    (e) Identifying non-MIPS quality measures. For purposes of QCDRs submitting data for the MIPS quality performance category, CMS considers the following types of quality measures to be non-MIPS quality measures:

    (1) A measure that is not contained in the annual list of MIPS quality measures for the applicable performance period.

    (2) A measure that may be in the annual list of MIPS quality measures but has substantive differences, as determined by the Secretary, in the manner it is reported by the QCDR.

    (3) CAHPS for MIPS survey. Although the CAHPS for MIPS survey is included in the MIPS measure set, we consider the changes that need to be made for reporting by individual MIPS eligible clinicians (and not as part of a group) significant enough to treat the CAHPS for MIPS survey as a non-MIPS quality measure for purposes of individual MIPS eligible clinicians reporting the CAHPS for MIPS survey via a QCDR.

    (f) QCDR measure specifications criteria. A QCDR must provide specifications for each measure, activity, or objective the QCDR intends to submit to CMS. The QCDR must provide CMS descriptions and narrative specifications for each measure, activity, or objective no later than January 15 of the applicable performance period for which the QCDR wishes to submit quality measures or other performance category (improvement activities and advancing care information) data. In future years, starting with the 2018 performance period, those specifications must be provided to CMS by no later than November 1 prior to the applicable performance period for which the QCDR wishes to submit quality measures or other performance category (improvement activities and advancing care information) data.

    (1) For non-MIPS quality measures, the quality measure specifications must include the following for each measure: name/title of the measure, NQF number (if NQF-endorsed), and descriptions of the denominator, numerator, and, when applicable, denominator exceptions, denominator exclusions, risk adjustment variables, and risk adjustment algorithms. The narrative specifications provided must be similar to the narrative specifications we provide in our measures list. CMS will consider all non-MIPS quality measures submitted by the QCDR, but the measures must address a gap in care, and outcome or other high priority measures are preferred. Documentation or “check box” measures are discouraged. Measures that already have very high performance rates or that address extremely rare gaps in care (thereby allowing for little or no quality distinction between eligible clinicians) are also unlikely to be approved for inclusion.

    (2) For MIPS quality measures, the QCDR only needs to submit the MIPS measure numbers or specialty-specific measure sets (if applicable).

    (3) The QCDR must publicly post the measure specifications (no later than 15 days following CMS approval of the measure specifications) for each non-MIPS quality measure it intends to submit for MIPS. The QCDR may use any public format it prefers. Immediately following posting of the measures specification, the QCDR must provide CMS with the link to where this information is posted.

    (g) Qualified registry self-nomination criteria. Qualified registries must self-nominate, for the 2017 performance period, from November 15, 2016 until January 15, 2017. For future years of the program, starting with the 2018 performance period, the qualified registry must self-nominate from September 1 of the prior year until November 1 of the prior year. Entities that desire to qualify as a qualified registry for a given performance period must self-nominate and provide all information requested by CMS at the time of self-nomination. Having qualified as a qualified registry does not automatically qualify the entity to participate in subsequent MIPS performance periods.

    (h) Establishment of a qualified registry entity. For an entity to become qualified for a given performance period as a qualified registry, the entity must:

    (1) Be in existence as of January 1 of the performance period for which the entity seeks to become a qualified registry.

    (2) Have at least 25 participants by January 1 of the performance period.

    (i) CMS-approved survey vendor application criteria. Vendors are required to undergo the CMS approval process for each year in which the survey vendor seeks to transmit survey measures data to CMS. All CMS-approved survey vendor applications and materials will be due by April 30 of the performance period.

    (j) Auditing of entities submitting MIPS data. Any third party intermediary (that is, a QCDR, health IT vendor, qualified registry, or CMS-approved survey vendor) must comply with the following procedures as a condition of their qualification and approval to participate in MIPS as a third party intermediary.

    (1) The entity must make available to CMS the contact information of each MIPS eligible clinician or group on behalf of whom it submits data. The contact information will include, at a minimum, the MIPS eligible clinician or group's practice phone number, address, and, if available, email.

    (2) The entity must retain all data submitted to CMS for MIPS for a minimum of 10 years.

    (3) For the purposes of auditing, CMS may request any records or data retained for the purposes of MIPS for up to 6 years and 3 months.

    (k) Probation and disqualification of a third party intermediary. (1) If at any time we determine that a third party intermediary (that is, a QCDR, health IT vendor, qualified registry, or CMS-approved survey vendor) has not met all of the applicable criteria for qualification and approval, CMS may place the third party intermediary on probation for the current performance period or the following performance period, as applicable.

    (2) For purposes of this section, probation means that, for the applicable performance period, the third party intermediary must meet all applicable criteria for qualification and approval and must submit a corrective action plan for remediation or correction of any deficiencies identified by CMS that resulted in the probation.

    (3) CMS requires a corrective action plan from the third party intermediary to address any deficiencies or issues and prevent them from recurring. The corrective action plan must be received and accepted by CMS within 14 days of the CMS notification to the third party intermediary of the deficiency or probation. If the corrective action plan is not received and accepted by CMS within the specified time, CMS may disqualify the third party intermediary from the MIPS program for the subsequent performance period.

    (4) If the third party intermediary has data inaccuracies, including (but not limited to) TIN/NPI mismatches, formatting issues, calculation errors, or data audit discrepancies, affecting in excess of 3 percent (but less than 5 percent) of the total number of MIPS eligible clinicians or groups submitted by the third party intermediary, such inaccuracies will trigger paragraph (k)(3) of this section and may result in this information being posted on the CMS Web site.

    (5) If the third party intermediary does not reduce its data error rate below 3 percent for the subsequent performance period, the third party intermediary will remain on probation, and its listing on the CMS Web site will continue to note the poor quality of the data it is submitting for MIPS for one additional year. After 2 years on probation, the third party intermediary will be disqualified for the subsequent performance period.

    (6) Before placing the third party intermediary on probation, CMS would notify the third party intermediary of the identified issues at the time of discovery of such issues.

    (7) If the third party intermediary does not submit an acceptable corrective action plan within 14 days of notification of deficiencies, and correct the deficiencies within 30 days or before the submission deadline—whichever is sooner, CMS may disqualify the third party intermediary from participating in MIPS for the current performance period or the following performance period, as applicable.

    § 414.1405 Payment.

    (a) General. Each MIPS eligible clinician receives a MIPS payment adjustment factor, and if applicable an additional MIPS payment adjustment factor for exceptional performance, for a MIPS payment year determined by comparing their final score to the performance threshold and additional performance threshold for the year.

    (b) Performance threshold. A performance threshold will be specified for each MIPS payment year.

    (1) MIPS eligible clinicians with a final score at or above the performance threshold receive a zero or positive MIPS payment adjustment factor on a linear sliding scale such that an adjustment factor of 0 percent is assigned for a final score at the performance threshold and an adjustment factor of the applicable percent is assigned for a final score of 100.

    (2) MIPS eligible clinicians with a final score below the performance threshold receive a negative MIPS payment adjustment factor on a linear sliding scale such that an adjustment factor of 0 percent is assigned for a final score at the performance threshold and an adjustment factor of the negative of the applicable percent is assigned for a final score of 0; further, MIPS eligible clinicians with final scores that are equal to or greater than zero, but not greater than one-fourth of the performance threshold, receive a negative MIPS payment adjustment factor that is equal to the negative of the applicable percent.

    (3) A scaling factor not to exceed 3.0 may be applied to positive MIPS payment adjustment factors to ensure budget neutrality such that the estimated increase in aggregate allowed charges resulting from the application of the positive MIPS payment adjustment factors for the MIPS payment year equals the estimated decrease in aggregate allowed charges resulting from the application of negative MIPS payment adjustment factors for the MIPS payment year.

    (c) Applicable percent. For MIPS payment year 2019, 4 percent. For MIPS payment year 2020, 5 percent. For MIPS payment year 2021, 7 percent. For MIPS payment year 2022 and each subsequent MIPS payment year, 9 percent.

    (d) Additional performance threshold. An additional performance threshold will be specified for each of the MIPS payment years 2019 through 2024.

    (1) In addition to the MIPS payment adjustment factor, MIPS eligible clinicians with a final score at or above the additional performance threshold receive an additional MIPS payment adjustment factor for exceptional performance on a linear sliding scale such that an additional adjustment factor of 0.5 percent is assigned for a final score at the additional performance threshold and an additional adjustment factor of 10 percent is assigned for a final score of 100, subject to the application of a scaling factor as determined by CMS, such that the estimated aggregate increase in payments resulting from the application of the additional MIPS payment adjustment factors for the MIPS payment year shall not exceed $500,000,000 for each of the MIPS payment years 2019 through 2024.

    (2) [Reserved]
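    A minimal sketch of the sliding scales described in paragraphs (b) through (d) of this section, assuming a given performance threshold, additional performance threshold, and applicable percent; the scaling factors are passed in rather than derived from the budget neutrality or $500,000,000 limits, and all names and example values are hypothetical.

```python
def mips_adjustment_factor(final_score, threshold, applicable_percent, scaling_factor=1.0):
    # Illustrative only: the linear sliding scales in paragraphs (b)(1) and (b)(2).
    # applicable_percent is the paragraph (c) value for the payment year (e.g., 4 for 2019);
    # the budget-neutrality scaling factor of paragraph (b)(3) applies only to positive factors.
    if final_score >= threshold:
        slope = applicable_percent / (100 - threshold)       # 0% at the threshold, applicable % at 100
        return scaling_factor * slope * (final_score - threshold)
    if final_score <= threshold / 4:
        return -applicable_percent                           # floor for the lowest final scores
    return -applicable_percent * (threshold - final_score) / threshold

def additional_adjustment_factor(final_score, additional_threshold, scaling_factor=1.0):
    # Illustrative only: the exceptional-performance scale in paragraph (d)(1),
    # 0.5% at the additional threshold rising linearly to 10% at a final score of 100.
    if final_score < additional_threshold:
        return 0.0
    slope = (10 - 0.5) / (100 - additional_threshold)
    return scaling_factor * (0.5 + slope * (final_score - additional_threshold))

print(mips_adjustment_factor(final_score=85, threshold=3, applicable_percent=4))    # ~3.38 percent increase
print(mips_adjustment_factor(final_score=0.5, threshold=3, applicable_percent=4))   # -4.0 percent decrease
print(additional_adjustment_factor(final_score=85, additional_threshold=70))        # ~5.25 percent additional
```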

    (e) Application of adjustments to payments. For each MIPS payment year, the MIPS payment adjustment factor, and if applicable the additional MIPS payment adjustment factor, are applied to Medicare Part B payments for items and services furnished by the MIPS eligible clinician during the year.

    § 414.1410 Advanced APM determination.

    (a) General. An APM is an Advanced APM for a payment year if CMS determines that it meets the criteria in § 414.1415 during the QP Performance Period.

    (b) Advanced APM and Other Payer Advanced APM determination process. CMS identifies Advanced APMs and Other Payer Advanced APMs in the following manner:

    (1) Advanced APM determination. (i) No later than January 1, 2017, CMS will post on its Web site a list of all Advanced APMs for the first QP Performance Period.

    (ii) CMS updates the Advanced APM list on its Web site at intervals no less than annually.

    (iii) CMS will include notice of whether a new APM is an Advanced APM in the first public notice of the new APM.

    (2) Other Payer Advanced APM determination. (i) CMS identifies Other Payer Advanced APMs following conclusion of the QP Performance Period using information submitted to CMS according to § 414.1445. CMS will not make determinations for other payer arrangements for which insufficient information is submitted.

    (ii) CMS makes Other Payer Advanced APM determinations prior to QP determinations under § 414.1440.

    (iii) CMS makes final Other Payer Advanced APM determinations and notifies Advanced APM Entities and eligible clinicians of such determinations as soon as practicable.

    § 414.1415 Advanced APM criteria.

    (a) Use of certified electronic health record technology (CEHRT)—(1) Required use of CEHRT. To be an Advanced APM, an APM must:

    (i) Require at least 50 percent of eligible clinicians in each participating APM Entity group, or, for APMs in which hospitals are the APM Entities, each hospital, to use CEHRT to document and communicate clinical care to their patients or other health care providers; or

    (ii) For the Shared Savings Program, apply a penalty or reward to an APM Entity based on the degree of the use of CEHRT of the eligible clinicians in the APM Entity.

    (b) Payment based on quality measures. (1) To be an Advanced APM, an APM must include quality measure results as a factor when determining payment to participants under the terms of the APM.

    (2) At least one of the quality measures upon which an Advanced APM bases the payment in paragraph (b)(1) of this section must have an evidence-based focus, be reliable and valid, and meet at least one of the following criteria:

    (i) Used in the MIPS quality performance category as described in § 414.1330;

    (ii) Endorsed by a consensus-based entity;

    (iii) Developed under section 1848(s) of the Act;

    (iv) Submitted in response to the MIPS Call for Quality Measures under section 1848(q)(2)(D)(ii) of the Act; or

    (v) Any other quality measures that CMS determines to have an evidence-based focus and to be reliable and valid.

    (3) In addition to the quality measure requirements under paragraph (b)(2) of this section, the quality measures upon which an Advanced APM bases the payment in paragraph (b)(1) of this section must include at least one outcome measure. This requirement does not apply if CMS determines that there are no available or applicable outcome measures included in the MIPS quality measures list for the Advanced APM's first QP Performance Period.

    (c) Financial risk. To be an Advanced APM, an APM must either meet the financial risk standard under paragraph (c)(1) or (2) of this section and the nominal amount standard under paragraph (c)(3) or (4) of this section, or be an expanded Medical Home Model under section 1115A(c) of the Act.

    (1) Generally applicable financial risk standard. Except as provided in paragraph (c)(2) of this section, to be an Advanced APM, an APM must, based on whether an APM Entity's actual expenditures for which the APM Entity is responsible under the APM exceed expected expenditures during a specified QP Performance Period, do one or more of the following:

    (i) Withhold payment for services to the APM Entity or the APM Entity's eligible clinicians;

    (ii) Reduce payment rates to the APM Entity or the APM Entity's eligible clinicians; or

    (iii) Require the APM Entity to owe payment(s) to CMS.

    (2) Medical Home Model financial risk standard. The following standard applies only for APM Entities that are participating in Medical Home Models, and, starting in the 2018 QP Performance Period, such APM Entities must be owned and operated by an organization with fewer than 50 eligible clinicians whose Medicare billing rights have been reassigned to the TIN(s) of the organization(s) or any of the organization's subsidiary entities. The APM Entity participates in a Medical Home Model that, based on the APM Entity's failure to meet or exceed one or more specified performance standards, which may include expected expenditures, does one or more of the following:

    (i) Withholds payment for services to the APM Entity or the APM Entity's eligible clinicians;

    (ii) Reduces payment rates to the APM Entity or the APM Entity's eligible clinicians;

    (iii) Requires the APM Entity to owe payment(s) to CMS; or

    (iv) Causes the APM Entity to lose the right to all or part of an otherwise guaranteed payment or payments.

    (3) Generally applicable nominal amount standard. (i) Except as provided in paragraph (c)(4) of this section, the total amount an APM Entity potentially owes CMS or foregoes under an APM must be at least equal to either:

    (A) For QP Performance Periods 2017 and 2018, 8 percent of the estimated average total Medicare Parts A and B revenues of participating APM Entities; or

    (B) 3 percent of the expected expenditures for which an APM Entity is responsible under the APM.

    (ii) [Reserved]
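    A minimal sketch of the generally applicable nominal amount standard in paragraph (c)(3)(i), assuming the total amount potentially owed or forgone has already been determined from the APM's terms; variable names and example figures are hypothetical.

```python
def meets_general_nominal_amount_standard(total_at_risk, avg_part_a_b_revenue,
                                          expected_expenditures, qp_year=2017):
    # Illustrative only: the total amount potentially owed or forgone must be at least
    # 8 percent of estimated average Parts A and B revenues (for the 2017 and 2018
    # QP Performance Periods) or 3 percent of expected expenditures.
    revenue_test = qp_year in (2017, 2018) and total_at_risk >= 0.08 * avg_part_a_b_revenue
    benchmark_test = total_at_risk >= 0.03 * expected_expenditures
    return revenue_test or benchmark_test

print(meets_general_nominal_amount_standard(total_at_risk=900_000,
                                            avg_part_a_b_revenue=10_000_000,
                                            expected_expenditures=40_000_000))  # -> True (meets the 8% revenue test)
```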

    (4) Medical Home Model nominal amount standard. (i) For a Medical Home Model to be an Advanced APM, the total annual amount that an Advanced APM Entity potentially owes CMS or foregoes must be at least the following amounts:

    (A) For QP Performance Period 2017, 2.5 percent of the estimated average total Medicare Parts A and B revenues of participating APM Entities.

    (B) For QP Performance Period 2018, 3 percent of the estimated average total Medicare Parts A and B revenues of participating APM Entities.

    (C) For QP Performance Period 2019, 4 percent of the estimated average total Medicare Parts A and B revenues of participating APM Entities.

    (D) For QP Performance Period 2020 and later, 5 percent of the estimated average total Medicare Parts A and B revenues of participating APM Entities.

    (5) Expected expenditures. For the purposes of this section, expected expenditures is defined as the beneficiary expenditures for which an APM Entity is responsible under an APM. For episode payment models, expected expenditures mean the episode target price.

    (6) Capitation. A full capitation arrangement meets this Advanced APM criterion. For purposes of this part, a capitation arrangement means a payment arrangement in which a per capita or otherwise predetermined payment is made under the APM for all items and services for which payment is made through the APM furnished to a population of beneficiaries, and no settlement is performed to reconcile or share losses incurred or savings earned by the APM Entity. Arrangements between CMS and Medicare Advantage Organizations under the Medicare Advantage program (42 U.S.C. 422) are not considered capitation arrangements for purposes of this paragraph.

    § 414.1420 Other payer advanced APMs.

    (a) Other Payer Advanced APM criteria. A payment arrangement with a payer other than Medicare is an Other Payer Advanced APM for a QP Performance Period if CMS determines that the arrangement meets the following criteria during the QP Performance Period:

    (1) Use of CEHRT, as described in paragraph (b) of this section;

    (2) Quality measures comparable to measures under the MIPS quality performance category apply, as described in paragraph (c) of this section; and

    (3) Either:

    (i) Requires APM Entities to bear more than nominal financial risk if actual aggregate expenditures exceed expected aggregate expenditures, as described in paragraph (d) of this section; or

    (ii) Is a Medicaid Medical Home Model that meets criteria comparable to Medical Home Models expanded under section 1115A(c) of the Act, as described in paragraph (d)(3) of this section.

    (b) Use of CEHRT. To be an Other Payer Advanced APM, an other payer arrangement must require participants to use CEHRT as defined in § 414.1305. The other payer arrangement must require at least 50 percent of eligible clinicians in each participating APM Entity group, or each hospital if hospitals are the APM Entities, to use CEHRT to document and communicate clinical care.

    (c) Quality measure use. (1) To be an Other Payer Advanced APM, a payment arrangement must apply quality measures comparable to measures under the MIPS quality performance category, as described in paragraph (c)(2) of this section.

    (2) At least one of the quality measures used in the payment arrangement with an APM Entity must have an evidence-based focus, be reliable and valid, and meet at least one of the following criteria:

    (i) Used in the MIPS quality performance category, as described in § 414.1330;

    (ii) Endorsed by a consensus-based entity;

    (iii) Developed under section 1848(s) of the Act;

    (iv) Submitted in response to the MIPS Call for Quality Measures under section 1848(q)(2)(D)(ii) of the Act; or

    (v) Any other quality measures that CMS determines to have an evidence-based focus and to be reliable and valid.

    (3) To meet the quality measure use criterion, an other payment arrangement must use an outcome measure if there is an applicable outcome measure on the MIPS quality measure list. If an Other Payer Advanced APM has no outcome measure, the Advanced APM Entity must attest that there is no applicable outcome measure on the MIPS list.

    (d) Other Payer Advanced APM financial risk. To be an Other Payer Advanced APM, an other payer arrangement must meet either the financial risk standard under paragraph (d)(1) or (2) of this section and the nominal risk standard under paragraph (d)(3) or (4) of this section, make payment using a full capitation arrangement under paragraph (d)(6) of this section, or be a Medicaid Medical Home Model that meets criteria comparable to an expanded Medical Home Model under section 1115A(c) of the Act.

    (1) Other Payer Advanced APM financial risk standard. Except for APM Entities to which paragraph (d)(2) of this section applies, to be an Other Payer Advanced APM, an APM Entity must, based on whether an APM Entity's actual expenditures for which the APM Entity is responsible under the APM exceed expected expenditures during a specified performance period, do one or more of the following:

    (i) Withhold payment for services to the APM Entity or the APM Entity's eligible clinicians;

    (ii) Reduce payment rates to the APM Entity or the APM Entity's eligible clinicians; or

    (iii) Require direct payment by the APM Entity to the payer.

    (2) Medicaid Medical Home Model financial risk standard. For an APM Entity owned and operated by an organization with fewer than 50 eligible clinicians whose Medicare billing rights have been reassigned to the TIN(s) of the organization(s) or any of the organization's subsidiary entities, the following standard applies. The APM Entity participates in a Medicaid Medical Home Model that, based on the APM Entity's failure to meet or exceed one or more specified performance standards, does one or more of the following:

    (i) Withholds payment for services to the APM Entity or the APM Entity's eligible clinicians;

    (ii) Requires direct payment by the APM Entity to the Medicaid program;

    (iii) Reduces payment rates to the APM Entity or the APM Entity's eligible clinicians; or

    (iv) Requires the APM Entity to lose the right to all or part of an otherwise guaranteed payment or payments.

    (3) Other Payer Advanced APM nominal amount standard. (i) Except for risk arrangements described under paragraph (d)(2) of this section, the total amount an APM Entity potentially owes us or foregoes under an Other Payer Advanced APM must be at least equal to 3 percent of the expected expenditures for which an APM Entity is responsible under the payment arrangement.

    (ii) Except for risk arrangements described under paragraph (d)(2) of this section, the risk arrangement must have:

    (A) A marginal risk rate of at least 30 percent; and

    (B) Total potential risk of at least 4 percent of expected expenditures.

    (4) Medicaid Medical Home Model nominal amount standard. For an APM Entity owned and operated by an organization with fewer than 50 eligible clinicians whose Medicare billing rights have been reassigned to the TIN(s) of the organization(s) or any of the organization's subsidiary entities, the following standard applies. For a Medicaid Medical Home Model to be an Other Payer Advanced APM, the total annual amount that an Advanced APM Entity potentially owes CMS or foregoes must be at least the following amounts:

    (i) For QP Performance Period 2019, 4 percent of the estimated average total revenue of participating APM Entities from the payer.

    (ii) For QP Performance Period 2020 and later, 5 percent of the estimated average total revenue of participating APM Entities from the payer.

    (5) Marginal risk rate. For purposes of this section, the marginal risk rate is defined as the percentage of actual expenditures that exceed expected expenditures for which an APM Entity is responsible under an APM.

    (i) In the event that the marginal risk rate varies depending on the amount by which actual expenditures exceed expected expenditures, the lowest marginal risk rate across all possible levels of actual expenditures would be used for comparison to the marginal risk rate specified in paragraph (d)(3)(ii)(A) of this section, with exceptions for large losses as described in paragraph (d)(5)(ii) of this section and small losses as described in paragraph (d)(5)(iii) of this section.

    (ii) Allowance for large losses. The determination in paragraph (d)(3)(ii)(A) of this section may disregard the marginal risk rates that apply in cases when actual expenditures exceed expected expenditures by an amount sufficient to require the APM Entity to make financial risk payments under the Other Payer Advanced APM greater than or equal to the total risk requirement under paragraph (d)(3)(i) of this section.

    (iii) Allowance for minimum loss rate. The determination in paragraph (d)(3)(ii)(A) of this section may disregard the marginal risk rates that apply in cases when actual expenditures exceed expected expenditures by less than 4 percent of expected expenditures.
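    A minimal sketch of the paragraph (d)(3) tests, reading paragraphs (d)(3)(i) and (ii) as cumulative requirements and assuming the supplied marginal risk rate already reflects the lowest applicable rate after the paragraph (d)(5) allowances for large and small losses; names and example figures are hypothetical.

```python
def meets_other_payer_nominal_standard(marginal_risk_rate, total_potential_risk,
                                       expected_expenditures):
    # Illustrative only: total risk of at least 3% of expected expenditures ((d)(3)(i)),
    # a marginal risk rate of at least 30% ((d)(3)(ii)(A)), and total potential risk of
    # at least 4% of expected expenditures ((d)(3)(ii)(B)).
    return (total_potential_risk >= 0.03 * expected_expenditures
            and marginal_risk_rate >= 0.30
            and total_potential_risk >= 0.04 * expected_expenditures)

print(meets_other_payer_nominal_standard(0.50, 450_000, 10_000_000))  # -> True (4.5% total risk, 50% marginal rate)
```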

    (6) Expected expenditures. For the purposes of this section, expected expenditures is defined as the Other Payer Advanced APM benchmark, except for episode payment models, for which it is defined as the episode target price.

    (7) Capitation. A capitation arrangement meets this Other Payer Advanced APM criterion. For purposes of paragraph (d)(3) of this section, a capitation arrangement means a payment arrangement in which a per capita or otherwise predetermined payment is made under the APM for all items and services for which payment is made through the APM furnished to a population of beneficiaries, and no settlement is performed for the purpose of reconciling or sharing losses incurred or savings earned by the APM Entity. Arrangements made directly between CMS and Medicare Advantage Organizations under the Medicare Advantage program (42 U.S.C. 422) are not considered capitation arrangements for purposes of this paragraph.

    § 414.1425 Qualifying APM participant determination: In general.

    (a) List used for QP determination. (1) For Advanced APMs with Advanced APM Entities that include eligible clinicians on a Participation List, the Participation List defines the APM Entity group, regardless of whether the Advanced APM Entity also has eligible clinicians on an Affiliated Practitioner List.

    (2) For Advanced APMs with Advanced APM Entities that do not include eligible clinicians on a Participation List but do include eligible clinicians on an Affiliated Practitioner List, the Affiliated Practitioner List defines the eligible clinicians who will be assessed to become QPs.

    (3) For Advanced APMs with some Advanced APM Entities that include eligible clinicians on a Participation List and other Advanced APM Entities that only include eligible clinicians on an Affiliated Practitioner List, paragraph (a)(1) applies to APM Entities that include eligible clinicians on a Participation List, and paragraph (a)(2) applies to APM Entities that only include eligible clinicians on an Affiliated Practitioner List.

    (b) Group or individual determination—(1) APM Entity group determination. Except for § 414.1445 and paragraph (b)(2) of this section, for purposes of the QP determinations for a year, eligible clinicians are grouped and assessed through their collective participation in an APM Entity group that is in an Advanced APM. To be included in the APM Entity group for purposes of the QP determination, an eligible clinician's APM participant identifier must be present on a Participation List of an APM Entity group on one of the dates: March 31, June 30, or August 31 of the QP Performance Period. An eligible clinician included on a Participation List on any one of these dates is included in the APM Entity group even if that eligible clinician is not included on that Participation List at one of the prior or later listed dates. CMS performs QP determinations for the eligible clinicians in the APM Entity group three times during the QP Performance Period using claims data for services furnished from January 1 through each of the respective QP determination dates: March 31, June 30, and August 31. An eligible clinician can only be determined to be a QP if the eligible clinician appears on the Participation List on a date (March 31, June 30, or August 31) CMS uses to determine the APM Entity group and to make QP determinations collectively for the APM Entity group based on participation in the Advanced APM.

    (2) Affiliated practitioner individual determination. When the Affiliated Practitioner List defines the eligible clinicians to be assessed, for purposes of the QP determinations for a year, those eligible clinicians are assessed individually. To be assessed as an Affiliated Practitioner, an eligible clinician must be identified on an Affiliated Practitioner List on one of the dates: March 31, June 30, or August 31 of the QP Performance Period. An eligible clinician included on an Affiliated Practitioner List on any one of these dates is assessed as an Affiliated Practitioner even if that eligible clinician is not included on that Affiliated Practitioner List at one of the prior or later listed dates. For such eligible clinicians, CMS performs QP determinations during the QP Performance Period using claims data for services furnished from January 1 through each of the respective QP determination dates that the eligible clinician is on the Affiliated Practitioner List: March 31, June 30, and August 31.

    (c) QP determination. (1) CMS makes QP determinations as set forth in §§ 414.1435 and 414.1440.

    (2) An eligible clinician cannot be both a QP and a Partial QP for a year. A determination that an eligible clinician is a QP means that the eligible clinician is not a Partial QP.

    (3) An eligible clinician is a QP for a year if the eligible clinician is in an APM Entity group that achieves a Threshold Score that meets or exceeds the corresponding QP payment amount threshold or QP patient count threshold for that QP Performance Period, as described in § 414.1430(a)(1) and (3) and (b)(1) and (3).

    (4) Notwithstanding paragraph (c)(3) of this section, an eligible clinician is a QP for a year if:

    (i) The eligible clinician is included in more than one Advanced APM Entity group and none of the Advanced APM Entity groups in which the eligible clinician is included meets the QP payment amount threshold or the QP patient count threshold, or the eligible clinician is an Affiliated Practitioner; and

    (ii) CMS determines that the eligible clinician individually achieves a Threshold Score that meets or exceeds the QP payment amount threshold or the QP patient count threshold.

    (5) Notwithstanding paragraph (c)(3) of this section, an eligible clinician is not a QP for a year if the APM Entity group voluntarily or involuntarily terminates from an Advanced APM before the end of the QP Performance Period.

    (6) Notwithstanding paragraph (c)(4) of this section, an eligible clinician is not a QP for a year if any of the Advanced APM Entities in which the eligible clinician participates voluntarily or involuntarily terminates from the Advanced APM before the end of the QP Performance Period.

    (d) Partial QP determination. (1) An eligible clinician is a Partial QP for a year if the APM Entity group collectively achieves a Threshold Score that meets or exceeds the corresponding Partial QP threshold for that year, as described in § 414.1430(a)(2) and (4) and (b)(2) and (4).

    (2) Notwithstanding paragraph (d)(1) of this section, an eligible clinician is a Partial QP for a year if:

    (i) The eligible clinician is included in more than one APM Entity group and none of the APM Entity groups in which the eligible clinician is included meets the corresponding QP or Partial QP threshold, or the eligible clinician is an Affiliated Practitioner; and

    (ii) CMS determines that the eligible clinician individually achieves a Threshold Score that meets or exceeds the corresponding Partial QP Threshold.

    (3) Notwithstanding paragraph (d)(1) of this section, an eligible clinician is not a Partial QP for a year if the APM Entity group voluntarily or involuntarily terminates from an Advanced APM before the end of the QP Performance Period.

    (4) Notwithstanding paragraph (d)(2) of this section, an eligible clinician is not a Partial QP for a year if any of the Advanced APM Entities in which the eligible clinician participates voluntarily or involuntarily terminates from the Advanced APM before the end of the QP Performance Period.

    (e) Notification of QP determination. CMS notifies eligible clinicians determined to be QPs or Partial QPs for a year as soon as practicable following each QP determination date in the QP Performance Period.

    (f) Order of threshold options. (1) For payment years 2019 and 2020, CMS performs QP determinations for eligible clinicians only under the Medicare Option described in § 414.1435.

    (2) For payment years 2021 and later, CMS performs QP determinations for eligible clinicians under the Medicare Option, as described in § 414.1435, and, except as specified in paragraphs (f)(2)(i) and (ii) of this section, the All-Payer Combination Option, described in § 414.1440.

    (i) If CMS determines the eligible clinician to be a QP under the Medicare Option, then CMS does not calculate a Threshold Score for such eligible clinician under the All-Payer Combination Option.

    (ii) If the Threshold Score for an eligible clinician under the Medicare Option is less than the amount specified in § 414.1430(b)(2)(ii) and (b)(3)(iii), then CMS does not perform a QP determination for such eligible clinician(s) under the All-Payer Combination Option.

    § 414.1430 Qualifying APM participant determination: QP and partial QP thresholds.

    (a) Medicare Option—(1) QP payment amount threshold. The QP payment amount thresholds are the following values for the indicated payment years:

    (i) 2019 and 2020: 25 percent.

    (ii) 2021 and 2022: 50 percent.

    (iii) 2023 and later: 75 percent.

    (2) Partial QP payment amount threshold. The Partial QP payment amount thresholds are the following values for the indicated payment years:

    (i) 2019 and 2020: 20 percent.

    (ii) 2021 and 2022: 40 percent.

    (iii) 2023 and later: 50 percent.

    (3) QP patient count threshold. The QP patient count thresholds are the following values for the indicated payment years:

    (i) 2019 and 2020: 20 percent.

    (ii) 2021 and 2022: 35 percent.

    (iii) 2023 and later: 50 percent.

    (4) Partial QP patient count threshold. The Partial QP patient count thresholds are the following values for the indicated payment years:

    (i) 2019 and 2020: 10 percent.

    (ii) 2021 and 2022: 25 percent.

    (iii) 2023 and later: 35 percent.

    (b) All-Payer Combination Option—(1) QP payment amount threshold.

    (i) The QP payment amount thresholds are the following values for the indicated payment years:

    (A) 2021 and 2022: 50 percent.

    (B) 2023 and later: 75 percent.

    (ii) To meet the QP payment amount threshold under this option, the eligible clinician must also meet a 25 percent QP payment amount threshold under the Medicare Option.

    (2) Partial QP payment amount threshold. (i) The Partial QP payment amount thresholds are the following values for the indicated payment years:

    (A) 2021 and 2022: 40 percent.

    (B) 2023 and later: 50 percent.

    (ii) To meet the Partial QP payment amount threshold under this option, the eligible clinician must also meet a 20 percent Partial QP payment amount threshold under the Medicare Option.

    (3) QP patient count threshold. (i) The QP patient count thresholds are the following values for the indicated payment years:

    (A) 2021 and 2022: 35 percent.

    (B) 2023 and later: 50 percent.

    (ii) To meet the QP patient count threshold under this option, the eligible clinician must also meet a 20 percent QP patient count threshold under the Medicare Option.

    (4) Partial QP patient count threshold. (i) The Partial QP patient count thresholds are the following values for the indicated payment years:

    (A) 2021 and 2022: 25 percent.

    (B) 2023 and later: 35 percent.

    (ii) To meet the Partial QP patient count threshold under this option, the eligible clinician group or eligible clinician must also meet a 10 percent QP patient count threshold under the Medicare Option.
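    A minimal lookup of the Medicare Option thresholds in paragraph (a) of this section, expressed as fractions by payment year; the All-Payer Combination Option thresholds in paragraph (b) follow the same pattern but add the Medicare Option minimums noted in paragraphs (b)(1)(ii) through (b)(4)(ii). The dictionary layout and function name are assumptions.

```python
# Illustrative only: the § 414.1430(a) Medicare Option thresholds, expressed as
# (QP, Partial QP) fractions for each method and payment-year range.
MEDICARE_OPTION_THRESHOLDS = {
    (2019, 2020): {"payment_amount": (0.25, 0.20), "patient_count": (0.20, 0.10)},
    (2021, 2022): {"payment_amount": (0.50, 0.40), "patient_count": (0.35, 0.25)},
    (2023, 9999): {"payment_amount": (0.75, 0.50), "patient_count": (0.50, 0.35)},
}

def medicare_option_thresholds(payment_year):
    # Return the (QP, Partial QP) thresholds for each method for a payment year.
    for (start, end), thresholds in MEDICARE_OPTION_THRESHOLDS.items():
        if start <= payment_year <= end:
            return thresholds
    raise ValueError("thresholds are specified beginning with the 2019 payment year")

print(medicare_option_thresholds(2022)["payment_amount"])  # -> (0.5, 0.4)
```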

    § 414.1435 Qualifying APM participant determination: Medicare option.

    (a) Payment amount method. The Threshold Score for an Advanced APM Entity group or eligible clinician is calculated as a percent by dividing the value described under paragraph (a)(1) of this section by the value described under paragraph (a)(2) of this section.

    (1) Numerator. The aggregate of payments for Medicare Part B covered professional services furnished by the Advanced APM Entity group to attributed beneficiaries during the QP Performance Period.

    (2) Denominator. The aggregate of payments for Medicare Part B covered professional services furnished by the APM Entity group to all attribution-eligible beneficiaries during the QP Performance Period.

    (3) Claims and adjustments. In the calculations under paragraphs (a)(1) and (2) of this section, CMS compiles claims and treats claims adjustments, supplemental service payments, and alternative payment methods in the same manner as described in § 414.1450.

    (b) Patient count method. The Threshold Score for each eligible clinician in an APM Entity group is calculated as a percent under the patient count method by dividing the value described under paragraph (b)(1) of this section by the value described under paragraph (b)(2) of this section.

    (1) Numerator. The number of attributed beneficiaries to whom the Advanced APM Entity group furnishes Medicare Part B covered professional services or services by a Rural Health Clinic (RHC) or Federally-Qualified Health Center (FQHC) during the QP Performance Period.

    (2) Denominator. The number of attribution-eligible beneficiaries to whom the APM Entity group or eligible clinician furnishes Medicare Part B covered professional services or services by a Rural Health Clinic (RHC) or Federally-Qualified Health Center (FQHC) during the QP Performance Period.

    (3) Unique beneficiaries. For each Advanced APM Entity group, a unique Medicare beneficiary is counted no more than one time for the numerator and no more than one time for the denominator.

    (4) Beneficiaries count multiple times. Based on attribution under the terms of an Advanced APM, a single Medicare beneficiary may be counted in the numerator or denominator for multiple different Advanced APM Entity groups.

    (c) Attribution. (1) Attributed beneficiaries are determined from Advanced APM attributed beneficiary lists generated by each Advanced APM's specific attribution methodology.

    (2) When operationally feasible, this attributed beneficiary list will be the final beneficiary list used for reconciliation purposes in the Advanced APM.

    (3) When it is not operationally feasible to use the final attributed beneficiary list, the attributed beneficiary list will be taken from the Advanced APM's most recently available attributed beneficiary list at the end of the QP Performance Period.

    (d) Use of methods. CMS calculates Threshold Scores for an Advanced APM Entity under both the payment amount and patient count methods for each QP Performance Period. CMS then assigns the score to the eligible clinicians included in the Advanced APM Entity that results in the greater QP status. QP status is greater than a Partial QP status, which is greater than no QP status.
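    A minimal sketch of the Medicare Option assessment in this section, assuming the attributed and attribution-eligible aggregates have already been compiled from claims; the default thresholds are the § 414.1430(a) values for the 2019 and 2020 payment years, and all names and example figures are hypothetical.

```python
def medicare_option_qp_status(payments_to_attributed, payments_to_attribution_eligible,
                              attributed_patients, attribution_eligible_patients,
                              payment_thresholds=(0.25, 0.20),    # (QP, Partial QP), 2019-2020 payment years
                              patient_thresholds=(0.20, 0.10)):
    # Illustrative only: the payment amount method (paragraph (a)) and the patient count
    # method (paragraph (b)); paragraph (d) assigns whichever status is more advantageous.
    payment_score = payments_to_attributed / payments_to_attribution_eligible
    patient_score = attributed_patients / attribution_eligible_patients

    def status(score, thresholds):
        qp, partial = thresholds
        return "QP" if score >= qp else "Partial QP" if score >= partial else "None"

    rank = {"None": 0, "Partial QP": 1, "QP": 2}   # QP outranks Partial QP, which outranks no status
    return max(status(payment_score, payment_thresholds),
               status(patient_score, patient_thresholds), key=rank.get)

# Payment amount score 22% (Partial QP); patient count score 8% (no status) -> "Partial QP"
print(medicare_option_qp_status(220_000, 1_000_000, 80, 1_000))
```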

    § 414.1440 Qualifying APM participant determination: All-payer combination option.

    (a) Payments excluded from calculations. (1) These calculations include a combination of both Medicare payments for Part B covered professional services and all other payments from all other payers, except for payments made by:

    (i) The Secretary of Defense for the costs of Department of Defense health care programs;

    (ii) The Secretary of Veterans Affairs for the cost of Department of Veterans Affairs health care programs; and

    (iii) Under Title XIX in a State in which no Medicaid Medical Home Model or APM is available.

    (2) Title XIX payments will only be included in the numerator and denominator as specified in paragraphs (b)(2) and (3) of this section for an Advanced APM Entity if:

    (i) A State has at least one Medicaid Medical Home Model or Medicaid APM in operation that is determined to be an Other Payer Advanced APM; and

    (ii) The Advanced APM Entity is eligible to participate in at least one of such Other Payer Advanced APMs during the QP Performance Period, regardless of whether the Advanced APM Entity actually participates in such Other Payer Advanced APMs. This will apply to both the payment amount and patient count methods.

    (b) Payment amount method—(1) In general. The Threshold Score for an Advanced APM Entity group or eligible clinician will be calculated by dividing the value described under the numerator by the value described under the denominator as specified in paragraphs (b)(2) and (3) of this section.

    (2) Numerator. The aggregate amount of all payments from all payers, except those excluded under paragraph (a) of this section, to the Advanced APM Entity group or eligible clinician under the terms of Other Payer Advanced APMs during the QP Performance Period. CMS calculates Medicare Part B covered professional services under the All-Payer Combination Option in the same manner as it is calculated under the Medicare Option.

    (3) Denominator. The aggregate amount of all payments from all payers, except those excluded under paragraph (a) of this section, to the Advanced APM Entity group during the QP Performance Period. The portion of this amount that relates to Medicare Part B covered professional services is calculated under the All-Payer Combination Option in the same manner as it is calculated under the Medicare Option.

    (c) Patient count method—(1) In general. The Threshold Score for an Advanced APM Entity group or eligible clinician is calculated by dividing the value described under the numerator by the value described under the denominator as specified in paragraphs (c)(2) and (3) of this section.

    (2) Numerator. The number of unique patients to whom the Advanced APM Entity group or eligible clinician furnishes services that are included in the measures of aggregate expenditures used under the terms of all of their Other Payer Advanced APMs during the QP Performance Period, plus the patient count numerator specified in paragraph (a)(1) of this section.

    (3) Denominator. The number of unique patients to whom eligible clinicians in the Advanced APM Entity group furnish services under all non-excluded payers during the QP Performance Period.

    (d) Participation in multiple Other Payer Advanced APMs. (1) For each APM Entity group or eligible clinician, a unique patient is counted no more than one time for the numerator and no more than one time for the denominator for each payer.

    (2) CMS may count a single patient in the numerator and/or denominator for multiple different Advanced APM Entities or eligible clinicians.

    (3) For purposes of this section, Advanced APM Entities are considered the same entity across Other Payer Advanced APMs if CMS determines that the Participation Lists are substantially similar or if one entity is a subset of the other.
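    A minimal sketch of the paragraph (b) payment amount method, assuming the Other Payer Advanced APM and Medicare aggregates have already been compiled and the payments excluded under paragraph (a) removed; the variable names and the exact composition of the aggregates are assumptions.

```python
def all_payer_payment_threshold_score(other_payer_advanced_apm_payments,
                                      medicare_advanced_apm_payments,
                                      other_payer_total_payments,
                                      medicare_part_b_total_payments):
    # Illustrative only: paragraph (b) payment amount method. The Medicare pieces are
    # assumed to have been calculated in the same manner as under the Medicare Option,
    # and payments from excluded payers (paragraph (a)) are assumed to be removed already.
    numerator = other_payer_advanced_apm_payments + medicare_advanced_apm_payments
    denominator = other_payer_total_payments + medicare_part_b_total_payments
    return numerator / denominator

print(all_payer_payment_threshold_score(600_000, 300_000, 1_200_000, 600_000))  # -> 0.5
```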

    § 414.1445 Identification of other payer advanced APMs.

    (a) Identification of Medicaid APMs. CMS will make an annual determination prior to the QP Performance Period to identify Medicaid Medical Home Models and Medicaid APMs.

    (b) Data used to calculate the Threshold Score under the All-Payer Combination Option. To be assessed under the All-Payer Combination Option, APM Entities or eligible clinicians must submit the following information for each other payment arrangement in a manner and by a date specified by CMS:

    (1) Payment arrangement information necessary to assess the other payer arrangement on all Other Payer Advanced APM criteria under § 414.1420;

    (2) For each other payment arrangement, the amount of revenues for services furnished through the arrangement, the total revenues from the payer, the numbers of patients furnished any service through the arrangement, and the total numbers of patients furnished any service through the payer.

    (3) An attestation from the payer that the submitted information is accurate.

    (c) Requirement to submit adequate information. (1) CMS makes a QP determination with respect to the individual eligible clinician under the All-Payer Combination Option if:

    (i) The eligible clinician's Advanced APM Entity submits the information required under this section for CMS to assess the APM Entity group under the All-Payer Combination Option; or

    (ii) The eligible clinician submits adequate information under this section.

    (2) If neither the Advanced APM Entity nor the eligible clinician submits all of the information required under this section, then CMS does not make a QP assessment for such eligible clinician under the All-Payer Combination Option.

    (d) Outcome measure. An Other Payer Advanced APM must base payment on at least one outcome measure.

    (1) Exception. If an Other Payer Advanced APM has no outcome measure, the Advanced APM Entity must submit an attestation in a manner and by a date determined by CMS that there is no available or applicable outcome measure on the MIPS list of quality measures.

    (2) [Reserved]

    § 414.1450 APM incentive payment.

    (a) In general. (1) CMS makes a lump sum payment to QPs in the amount described in paragraph (b) of this section in the manner described in paragraphs (d) and (e) of this section.

    (2) CMS provides notice of the amount of the APM Incentive Payment to QPs as soon as practicable following the calculation and validation of the APM Incentive Payment amount, but in any event no later than 1 year after the incentive payment base period.

    (b) APM Incentive Payment amount. (1) The amount of the APM Incentive Payment is equal to 5 percent of the estimated aggregate payments for covered professional services as defined in section 1848(k)(3)(A) of the Act furnished during the calendar year immediately preceding the payment year.

    (2) The estimated aggregate payment amount for covered professional services includes all such payments to any and all of the TIN/NPI combinations associated with the NPI of the QP.

    (3) In calculating the estimated aggregate payment amount for a QP, CMS uses claims submitted with dates of service from January 1 through December 31 of the incentive payment base period, and processing dates of January 1 of the base period through March 31 of the subsequent payment year.

    (4) The payment adjustment amounts, negative or positive, as described in sections 1848(m), (o), (p), and (q) of the Act are not included in calculating the APM Incentive Payment amount.

    (5) Incentive payments made to eligible clinicians under sections 1833(m), (x), and (y) of the Act are not included in calculating the APM Incentive Payment amount.

    (6) Financial risk payments such as shared savings payments or net reconciliation payments are excluded from the amount of covered professional services in calculating the APM Incentive Payment amount.

    (7) Supplemental service payments in the amount of covered professional services are included in calculating the APM Incentive Payment amount according to this paragraph (b). Supplemental service payments are included in the amount of covered professional services when calculating the APM Incentive Payment amount when the supplemental service payment meets the following four criteria:

    (i) Is payment for services that constitute physicians' services authorized under section 1832(a) and defined under section 1861(s) of the Act.

    (ii) Is made for only Part B services under the criterion in paragraph (b)(7)(i) of this section.

    (iii) Is directly attributable to services furnished to an individual beneficiary.

    (iv) Is directly attributable to an eligible clinician, including an eligible clinician that is a group of individual eligible clinicians.

    (8) For payment amounts that are affected by a cash flow mechanism, the payment amounts that would have occurred if the cash flow mechanism were not in place are used in calculating the APM Incentive Payment amount.
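    Read together, paragraphs (b)(1) through (b)(8) of this section describe a single calculation: 5 percent of the estimated aggregate payments for covered professional services during the incentive payment base period, with the amounts described in paragraphs (b)(4) through (b)(6) excluded and qualifying supplemental service payments under paragraph (b)(7) included. The sketch below is illustrative only; the variable names and dollar figures are hypothetical, and it ignores the claims-window and cash-flow details in paragraphs (b)(3) and (b)(8).

```python
# Illustrative only: the APM Incentive Payment amount described in paragraph (b).
# Names and figures are hypothetical; this is not CMS's actual computation code.

APM_INCENTIVE_RATE = 0.05  # 5 percent, per paragraph (b)(1)

def apm_incentive_payment(covered_professional_services_payments,
                          qualifying_supplemental_service_payments=0.0,
                          financial_risk_payments=0.0):
    # Input is assumed to reflect unadjusted fee schedule amounts, since paragraphs
    # (b)(4) and (b)(5) exclude payment adjustments and other incentive payments.
    # Paragraph (b)(6): shared savings / net reconciliation payments are excluded.
    # Paragraph (b)(7): qualifying supplemental service payments are included.
    aggregate = (covered_professional_services_payments
                 - financial_risk_payments
                 + qualifying_supplemental_service_payments)
    return APM_INCENTIVE_RATE * aggregate

# Hypothetical QP: $200,000 in covered professional services, $10,000 in qualifying
# supplemental service payments, and $15,000 in shared savings (excluded).
print(apm_incentive_payment(200_000, 10_000, 15_000))  # 9750.0
```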

    (c) APM Incentive Payment recipient. (1) CMS pays the entire APM Incentive Payment amount to the TIN associated with the QP's participation in the Advanced APM Entity that met the applicable QP threshold during the QP Performance Period.

    (2) In the event that an eligible clinician is no longer affiliated with the TIN associated with the QP's participation in the Advanced APM Entity that met the applicable QP threshold during the QP Performance Period at the time of the APM Incentive Payment distribution, CMS makes the APM Incentive Payment to the TIN listed on the eligible clinician's CMS-588 EFT Application form on the date that the APM Incentive Payment is distributed.

    (3) In the event that an eligible clinician becomes a QP through participation in multiple Advanced APMs, CMS divides the APM Incentive Payment amount between the TINs associated with the QP's participation in each Advanced APM during the QP Performance Period. Such payments will be divided in proportion to the amount of payments associated with each TIN that the eligible clinician received for covered professional services during the QP Performance Period.
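    Paragraph (c)(3) of this section splits the incentive amount in proportion to the covered professional services payments attributed to each TIN during the QP Performance Period. A brief hypothetical illustration (the TIN labels and dollar amounts are invented for this example):

```python
# Hypothetical illustration of the proportional split in paragraph (c)(3).
incentive = 9_000.0
payments_by_tin = {"TIN A": 60_000.0, "TIN B": 40_000.0}  # covered professional services
total = sum(payments_by_tin.values())
split = {tin: incentive * amount / total for tin, amount in payments_by_tin.items()}
print(split)  # {'TIN A': 5400.0, 'TIN B': 3600.0}
```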

    (d) Timing of the APM Incentive Payment. APM Incentive Payments made under this section are made as soon as practicable following the calculation and validation of the APM Incentive Payment amount, but in any event no later than 1 year after the incentive payment base period.

    (e) Treatment of APM Incentive Payment amount in APMs. (1) APM Incentive Payments made under this section are not included in determining actual expenditures under an APM.

    (2) APM Incentive Payments made under this section are not included in calculations for the purposes of rebasing benchmarks in an APM.

    (f) Treatment of APM Incentive Payment for other Medicare incentive payments and payment adjustments. APM Incentive Payments made under this section will not be included in determining the amount of incentive payment made to eligible clinicians under sections 1833(m), (x), and (y) of the Act.

    § 414.1455 Limitation on review.

    There is no administrative or judicial review under sections 1869 or 1878 of the Act, or otherwise, of the following:

    (a) The determination that an eligible clinician is a QP or Partial QP under § 414.1425 and the determination that an APM Entity is an Advanced APM Entity under § 414.1410.

    (b) The determination of the amount of the APM Incentive Payment under § 414.1450, including any estimation as part of such determination.

    § 414.1460 Monitoring and program integrity.

    (a) Vetting eligible clinicians prior to payment of the APM Incentive Payment. Prior to payment of the APM Incentive Payment, CMS determines whether eligible clinicians were in compliance with all Medicare conditions of participation and the terms of the relevant Advanced APMs in which they participated during the QP Performance Period. For QPs that did not meet these standards, CMS may reduce or deny the APM Incentive Payment. A determination under this provision is not binding for other purposes.

    (b) Termination by Advanced APMs. CMS may reduce or deny an APM Incentive Payment to eligible clinicians who are terminated by APMs, or whose Advanced APM Entities are terminated by APMs, for non-compliance with all Medicare conditions of participation or the terms of the relevant Advanced APMs in which they participate during the QP Performance Periods.

    (c) Information submitted for All-Payer Combination Option. Information submitted by eligible clinicians or Advanced APM Entities to meet the requirements of the All-Payer Combination Option may be subject to audit by CMS. Eligible clinicians and Advanced APM Entities must maintain copies of any supporting documentation related to the All-Payer Combination Option for at least 10 years and must attest to the accuracy and completeness of the data submitted.

    (d) Recoupment of APM Incentive Payment. For any QPs who are terminated from an Advanced APM or found to be in violation of any Federal, State, or tribal statute, regulation, or other binding guidance during the QP Performance Period or Incentive Payment Base Period or terminated after these periods as a result of a violation occurring during either period, CMS may rescind such eligible clinicians' QP determinations and, if necessary, recoup part or all of any such eligible clinicians' APM Incentive Payment or deduct such amount from future payments to such individuals. CMS may reopen and recoup any payments that were made in error in accordance with procedures similar to those set forth at 42 CFR 405.980 and 42 CFR 405.370 through 405.379 or established under the relevant APM. The APM Incentive Payment will be recouped if an audit reveals a lack of support for attested statements provided by eligible clinicians and Advanced APM Entities.

    (e) Maintenance of records. An Advanced APM Entity or eligible clinician that submits information to CMS under § 414.1445 for assessment under the All-Payer Combination Option must maintain such books, contracts, records, documents, and other evidence for a period of 10 years from the final date of the QP Performance Period or from the date of completion of any audit, evaluation, or inspection, whichever is later, unless:

    (1) CMS determines there is a special need to retain a particular record or group of records for a longer period and notifies the Advanced APM Entity or eligible clinician at least 30 days before the formal disposition date; or

    (2) There has been a termination, dispute, or allegation of fraud or similar fault against the Advanced APM Entity or eligible clinician, in which case the Advanced APM Entity or eligible clinician must retain records for an additional 6 years from the date of any resulting final resolution of the termination, dispute, or allegation of fraud or similar fault.

    (f) OIG authority. None of the provisions of this part limit or restrict OIG's authority to audit, evaluate, investigate, or inspect the Advanced APM Entity, its eligible clinicians, and other individuals or entities performing functions or services related to its APM activities.

    § 414.1465 Physician-focused payment models.

    (a) Definition. A physician-focused payment model (PFPM) is an Alternative Payment Model:

    (1) In which Medicare is a payer;

    (2) In which eligible clinicians that are eligible professionals as defined in section 1848(k)(3)(B) of the Act are participants and play a core role in implementing the APM's payment methodology; and

    (3) Which targets the quality and costs of services that eligible professionals participating in the Alternative Payment Model provide, order, or can significantly influence.

    (b) Criteria. In carrying out its review of physician-focused payment model proposals, the PTAC must assess whether the physician-focused payment model meets the following criteria for PFPMs sought by the Secretary. The Secretary seeks PFPMs that:

    (1) Incentives: Pay for higher-value care. (i) Value over volume: provide incentives to practitioners to deliver high-quality health care.

    (ii) Flexibility: provide the flexibility needed for practitioners to deliver high-quality health care.

    (iii) Quality and Cost: are anticipated to improve health care quality at no additional cost, maintain health care quality while decreasing cost, or both improve health care quality and decrease cost.

    (iv) Payment methodology: pay APM Entities with a payment methodology designed to achieve the goals of the PFPM Criteria. The proposal addresses in detail through this methodology how Medicare, and other payers if applicable, pay APM Entities, how the payment methodology differs from current payment methodologies, and why the PFPM cannot be tested under current payment methodologies.

    (v) Scope: aim to broaden or expand the CMS APM portfolio by addressing an issue in payment policy in a new way or including APM Entities whose opportunities to participate in APMs have been limited.

    (vi) Ability to be evaluated: have evaluable goals for quality of care, cost, and any other goals of the PFPM.

    (2) Care delivery improvements: Promote better care coordination, protect patient safety, and encourage patient engagement. (i) Integration and Care Coordination: encourage greater integration and care coordination among practitioners and across settings where multiple practitioners or settings are relevant to delivering care to the population treated under the PFPM.

    (ii) Patient Choice: encourage greater attention to the health of the population served while also supporting the unique needs and preferences of individual patients.

    (iii) Patient Safety: aim to maintain or improve standards of patient safety.

    (3) Information Enhancements: Improve the availability of information to guide decision-making. (i) Health Information Technology: encourage use of health information technology to inform care.

    (ii) [Reserved]

    PART 495—STANDARDS FOR THE ELECTRONIC HEALTH RECORD TECHNOLOGY INCENTIVE PROGRAM

    4. The authority citation for part 495 continues to read as follows:

    Authority: Secs. 1102 and 1871 of the Social Security Act (42 U.S.C. 1302 and 1395hh).

    5. Section 495.4 is amended by revising the definition of “Meaningful EHR user” to read as follows:
    § 495.4 Definitions.

    Meaningful EHR user means—

    (1) Subject to paragraph (3) of this definition, an EP, eligible hospital or CAH that, for an EHR reporting period for a payment year or payment adjustment year, demonstrates in accordance with § 495.40 meaningful use of certified EHR technology by meeting the applicable objectives and associated measures under §§ 495.20, 495.22, and 495.24, supporting information exchange and the prevention of health information blocking and engaging in activities related to supporting providers with the performance of CEHRT, and successfully reporting the clinical quality measures selected by CMS to CMS or the States, as applicable, in the form and manner specified by CMS or the States, as applicable; and

    (2)(i) Except as specified in paragraph (2)(ii) of this definition, a Medicaid EP or Medicaid eligible hospital, that meets the requirements of paragraph (1) of this definition and any additional criteria for meaningful use imposed by the State and approved by CMS under §§ 495.316 and 495.332.

    (ii) An eligible hospital or CAH is deemed to be a meaningful EHR user for purposes of receiving an incentive payment under subpart D of this part, if the hospital participates in both the Medicare and Medicaid EHR incentive programs, and the hospital meets the requirements of paragraph (1) of this definition.

    (3) To be considered a meaningful EHR user, at least 50 percent of an EP's patient encounters during an EHR reporting period for a payment year (or, in the case of a payment adjustment year, during an applicable EHR reporting period for such payment adjustment year) must occur at a practice/location or practices/locations equipped with certified EHR technology.
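    Paragraph (3) of this definition is a simple proportion test against the 50 percent threshold. The following sketch is illustrative only; the encounter counts are hypothetical.

```python
# Illustrative check of the 50 percent threshold in paragraph (3) of the
# "Meaningful EHR user" definition; the encounter counts are hypothetical.
encounters_at_cehrt_equipped_locations = 1_300
total_encounters = 2_400

share = encounters_at_cehrt_equipped_locations / total_encounters
meets_location_threshold = share >= 0.50
print(f"{share:.0%} of encounters at CEHRT-equipped locations -> {meets_location_threshold}")
# 54% of encounters at CEHRT-equipped locations -> True
```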

    6. Section 495.40 is amended by—
    a. Revising paragraph (a) introductory text;
    b. Revising paragraphs (a)(2)(i)(E) and (F);
    c. Adding paragraphs (a)(2)(i)(G), (H), and (I);
    d. Revising paragraph (b) introductory text; and
    e. Adding paragraphs (b)(2)(i)(H) and (I).

    The revisions and additions read as follows:

    § 495.40 Demonstration of meaningful use criteria.

    (a) Demonstration by EPs. An EP must demonstrate that he or she satisfies each of the applicable objectives and associated measures under § 495.20 or § 495.24, supports information exchange and the prevention of health information blocking, and engages in activities related to supporting providers with the performance of CEHRT:

    (2) * * *

    (i) * * *

    (E) For CY 2015 and 2016, satisfied the required objectives and associated measures under § 495.22(e) for meaningful use.

    (F) For CY 2017, the EP may satisfy either the objectives and measures specified in § 495.22(e), or the objectives and measures specified in § 495.24(d).

    (G) For CY 2018 and subsequent years, satisfied the required objectives and associated measures under § 495.24(d) for meaningful use.

    (H) Supporting providers with the performance of CEHRT (SPPC). To engage in activities related to supporting providers with the performance of CEHRT, the EP—

    (1) Must attest that he or she:

    (i) Acknowledges the requirement to cooperate in good faith with ONC direct review of his or her health information technology certified under the ONC Health IT Certification Program if a request to assist in ONC direct review is received; and

    (ii) If requested, cooperated in good faith with ONC direct review of his or her health information technology certified under the ONC Health IT Certification Program as authorized by 45 CFR part 170, subpart E, to the extent that such technology meets (or can be used to meet) the definition of CEHRT, including by permitting timely access to such technology and demonstrating its capabilities as implemented and used by the EP in the field.

    (2) Optionally, may also attest that he or she:

    (i) Acknowledges the option to cooperate in good faith with ONC-ACB surveillance of his or her health information technology certified under the ONC Health IT Certification Program if a request to assist in ONC-ACB surveillance is received; and

    (ii) If requested, cooperated in good faith with ONC-ACB surveillance of his or her health information technology certified under the ONC Health IT Certification Program as authorized by 45 CFR part 170, subpart E, to the extent that such technology meets (or can be used to meet) the definition of CEHRT, including by permitting timely access to such technology and demonstrating capabilities as implemented and used by the EP in the field.

    (I) Support for health information exchange and the prevention of information blocking. For an EHR reporting period in CY 2017 and subsequent years, the EP must attest that he or she—

    (1) Did not knowingly and willfully take action (such as to disable functionality) to limit or restrict the compatibility or interoperability of certified EHR technology.

    (2) Implemented technologies, standards, policies, practices, and agreements reasonably calculated to ensure, to the greatest extent practicable and permitted by law, that the certified EHR technology was, at all relevant times—

    (i) Connected in accordance with applicable law;

    (ii) Compliant with all standards applicable to the exchange of information, including the standards, implementation specifications, and certification criteria adopted at 45 CFR part 170;

    (iii) Implemented in a manner that allowed for timely access by patients to their electronic health information; and

    (iv) Implemented in a manner that allowed for the timely, secure, and trusted bi-directional exchange of structured electronic health information with other health care providers (as defined by 42 U.S.C. 300jj(3)), including unaffiliated providers, and with disparate certified EHR technology and vendors.

    (3) Responded in good faith and in a timely manner to requests to retrieve or exchange electronic health information, including from patients, health care providers (as defined by 42 U.S.C. 300jj(3)), and other persons, regardless of the requestor's affiliation or technology vendor.

    (b) Demonstration by Eligible Hospitals and CAHs. An eligible hospital or CAH must demonstrate that it satisfies each of the applicable objectives and associated measures under § 495.20 or § 495.24, supports information exchange and the prevention of health information blocking, and engages in activities related to supporting providers with the performance of CEHRT, as follows:

    (2) * * *

    (i) * * *

    (H) Supporting providers with the performance of CEHRT (SPPC). To engage in activities related to supporting providers with the performance of CEHRT, the eligible hospital or CAH—

    (1) Must attest that it:

    (i) Acknowledges the requirement to cooperate in good faith with ONC direct review of its health information technology certified under the ONC Health IT Certification Program if a request to assist in ONC direct review is received; and

    (ii) If requested, cooperated in good faith with ONC direct review of its health information technology certified under the ONC Health IT Certification Program as authorized by 45 CFR part 170, subpart E, to the extent that such technology meets (or can be used to meet) the definition of CEHRT, including by permitting timely access to such technology and demonstrating its capabilities as implemented and used by the eligible hospital or CAH in the field.

    (2) Optionally, may attest that it:

    (i) Acknowledges the option to cooperate in good faith with ONC-ACB surveillance of its health information technology certified under the ONC Health IT Certification Program if a request to assist in ONC-ACB surveillance is received; and

    (ii) If requested, cooperated in good faith with ONC-ACB surveillance of its health information technology certified under the ONC Health IT Certification Program as authorized by 45 CFR part 170, subpart E, to the extent that such technology meets (or can be used to meet) the definition of CEHRT, including by permitting timely access to such technology and demonstrating its capabilities as implemented and used by the eligible hospital or CAH in the field.

    (I) Support for health information exchange and the prevention of information blocking. For an EHR reporting period in CY 2017 and subsequent years, the eligible hospital or CAH must attest that it—

    (1) Did not knowingly and willfully take action (such as to disable functionality) to limit or restrict the compatibility or interoperability of certified EHR technology.

    (2) Implemented technologies, standards, policies, practices, and agreements reasonably calculated to ensure, to the greatest extent practicable and permitted by law, that the certified EHR technology was, at all relevant times—

    (i) Connected in accordance with applicable law;

    (ii) Compliant with all standards applicable to the exchange of information, including the standards, implementation specifications, and certification criteria adopted at 45 CFR part 170;

    (iii) Implemented in a manner that allowed for timely access by patients to their electronic health information; and

    (iv) Implemented in a manner that allowed for the timely, secure, and trusted bi-directional exchange of structured electronic health information with other health care providers (as defined by 42 U.S.C. 300jj(3)), including unaffiliated providers, and with disparate certified EHR technology and vendors.

    (3) Responded in good faith and in a timely manner to requests to retrieve or exchange electronic health information, including from patients, health care providers (as defined by 42 U.S.C. 300jj(3)), and other persons, regardless of the requestor's affiliation or technology vendor.

    7. Section 495.102 is amended by revising paragraphs (d)(1), (d)(2)(iv), and (d)(3) to read as follows:
    § 495.102 Incentive payments to EPs.

    (d) * * *

    (1) Subject to paragraphs (d)(3) and (4) of this section, for CY 2015 through the end of CY 2018, for covered professional services furnished by an EP who is not hospital-based, and who is not a qualifying EP by virtue of not being a meaningful EHR user (for the EHR reporting period applicable to the payment adjustment year), the payment amount for such services is equal to the product of the applicable percent specified in paragraph (d)(2) of this section and the Medicare physician fee schedule amount for such services.

    (2) * * *

    (iv) For 2018, 97 percent, except as provided in paragraph (d)(3) of this section.

    (3) Decrease in applicable percent in certain circumstances. In CY 2018, if the Secretary finds that the proportion of EPs who are meaningful EHR users is less than 75 percent, the applicable percent must be decreased by 1 percentage point for EPs from the applicable percent in the preceding year.
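    To make the mechanics of paragraph (d)(3) concrete, the sketch below assumes, for illustration only, a preceding-year applicable percent of 97 percent; the proportion of meaningful EHR users shown is also a hypothetical value, not a figure stated in this rule.

```python
# Illustrative only: the paragraph (d)(3) decrease to the CY 2018 applicable percent.
# Both inputs below are assumptions for this example.
preceding_year_applicable_percent = 97   # assumed for illustration
proportion_meaningful_ehr_users = 0.72   # hypothetical finding by the Secretary

if proportion_meaningful_ehr_users < 0.75:
    cy2018_applicable_percent = preceding_year_applicable_percent - 1  # per (d)(3)
else:
    cy2018_applicable_percent = 97  # per paragraph (d)(2)(iv)

print(cy2018_applicable_percent)  # 96
```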

    8. Section 495.316 is amended by revising paragraph (g)(2) and adding paragraph (g)(3) to read as follows:
    § 495.316 State monitoring and reporting regarding activities required to receive an incentive payment.

    (g) * * *

    (2) Subject to paragraph (h)(2) of this section, provider-level attestation data for each eligible hospital that attests to demonstrating meaningful use for each payment year beginning with 2013.

    (3) Subject to paragraph (h)(2) of this section, provider-level attestation data for each eligible EP that attests to demonstrating meaningful use for each payment year beginning with 2013 and ending after 2016.

    Dated: October 5, 2016.
Andrew M. Slavitt,
Acting Administrator, Centers for Medicare & Medicaid Services.
    Dated: October 13, 2016.
Sylvia M. Burwell,
Secretary, Department of Health and Human Services.
    Note:

    The following Appendix will not appear in the Code of Federal Regulations.

    Appendix

    [The Appendix tables were published as graphic images ER04NO16.029 through ER04NO16.148 and are not reproduced in this text-only version.]

    NOTE: “TABLE C: Individual Quality Cross-Cutting Measures for the MIPS to Be Available to Meet the Reporting Criteria Via Claims, Registry, and EHR Beginning in 2017” has been removed per policy change. See (add reference) for the rationale.

    [The remaining Appendix tables were published as graphic images ER04NO16.150 through ER04NO16.303 and are not reproduced in this text-only version.]
    [FR Doc. 2016-25240 Filed 10-19-16; 4:15 pm] BILLING CODE 4120-01-P
    Federal Register / Vol. 81, No. 214 / Friday, November 4, 2016 / Rules and Regulations

    Part III

    Department of Health and Human Services
    Centers for Medicare & Medicaid Services
    42 CFR Parts 413, 414, and 494

    Medicare Program; End-Stage Renal Disease Prospective Payment System, Coverage and Payment for Renal Dialysis Services Furnished to Individuals With Acute Kidney Injury, End-Stage Renal Disease Quality Incentive Program, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program Bid Surety Bonds, State Licensure and Appeals Process for Breach of Contract Actions, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program and Fee Schedule Adjustments, Access to Care Issues for Durable Medical Equipment; and the Comprehensive End-Stage Renal Disease Care Model; Final Rule

    DEPARTMENT OF HEALTH AND HUMAN SERVICES
    Centers for Medicare & Medicaid Services
    42 CFR Parts 413, 414, and 494
    [CMS-1651-F]
    RIN 0938-AS83

    Medicare Program; End-Stage Renal Disease Prospective Payment System, Coverage and Payment for Renal Dialysis Services Furnished to Individuals With Acute Kidney Injury, End-Stage Renal Disease Quality Incentive Program, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program Bid Surety Bonds, State Licensure and Appeals Process for Breach of Contract Actions, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program and Fee Schedule Adjustments, Access to Care Issues for Durable Medical Equipment; and the Comprehensive End-Stage Renal Disease Care Model

    AGENCY:

    Centers for Medicare & Medicaid Services (CMS), HHS.

    ACTION:

    Final rule.

    SUMMARY:

    This rule updates and makes revisions to the End-Stage Renal Disease (ESRD) Prospective Payment System (PPS) for calendar year 2017. It also finalizes policies for coverage and payment for renal dialysis services furnished by an ESRD facility to individuals with acute kidney injury. This rule also sets forth requirements for the ESRD Quality Incentive Program, including the inclusion of new quality measures beginning with payment year (PY) 2020 and provides updates to programmatic policies for the PY 2018 and PY 2019 ESRD QIP.

    This rule also implements statutory requirements for bid surety bonds and state licensure for the Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS) Competitive Bidding Program (CBP). This rule also expands suppliers' appeal rights in the event of a breach of contract action taken by CMS, by revising the appeals regulation to extend the appeals process to all types of actions taken by CMS for a supplier's breach of contract, rather than limiting appeals to terminations of competitive bidding contracts. The rule also finalizes changes to the methodologies for adjusting fee schedule amounts for DMEPOS using information from CBPs and for submitting bids and establishing single payment amounts under the CBPs for certain groupings of similar items with different features to address price inversions. Final changes also are made to the method for establishing bid limits for items under the DMEPOS CBPs. In addition, this rule summarizes comments on the impacts of coordinating Medicare and Medicaid durable medical equipment for dually eligible beneficiaries. Finally, this rule summarizes comments received in response to a request for information related to the Comprehensive ESRD Care Model and future payment models affecting renal care.

    DATES:

    These regulations are effective January 1, 2017.

    FOR FURTHER INFORMATION CONTACT:

    [email protected], for issues related to the ESRD PPS and coverage and payment for renal dialysis services furnished to individuals with AKI.

    Stephanie Frilling, (410) 786-4597, for issues related to the ESRD QIP.

    Julia Howard, (410) 786-8645, for issues related to DMEPOS CBP and bid surety bonds, state licensure, and the appeals process for breach of DMEPOS CBP contract actions.

    Anita Greenberg, (410) 786-4601, or Hafsa Vahora, (410) 786-7899, for issues related to competitive bidding and payment for similar DMEPOS items with different features and bid limits.

    Kristen Zycherman, for issues related to DME access.

    Tom Duvall, (410) 786-8887 or email [email protected], for issues related to the Comprehensive ESRD Care Model.

    SUPPLEMENTARY INFORMATION: Addenda Are Only Available Through the Internet on the CMS Web Site

    In the past, a majority of the Addenda referred to throughout the preamble of our proposed and final rules were available in the Federal Register. However, the Addenda of the annual proposed and final rules will no longer be available in the Federal Register. Instead, these Addenda to the annual proposed and final rules will be available only through the Internet on the CMS Web site. The Addenda to the End-Stage Renal Disease (ESRD) Prospective Payment System (PPS) rules are available at: http://www.cms.gov/ESRDPayment/PAY/list.asp. Readers who experience any problems accessing any of the Addenda to the proposed and final rules of the ESRD PPS that are posted on the CMS Web site identified above should contact [email protected].

    Table of Contents

    To assist readers in referencing sections contained in this preamble, we are providing a Table of Contents. Some of the issues discussed in this preamble affect the payment policies, but do not require changes to the regulations in the Code of Federal Regulations (CFR).

    I. Executive Summary A. Purpose 1. End-Stage Renal Disease (ESRD) Prospective Payment System (PPS) 2. Coverage and Payment for Renal Dialysis Services Furnished to Individuals With Acute Kidney Injury (AKI) 3. End-Stage Renal Disease (ESRD) Quality Incentive Program (QIP) 4. Durable Medical Equipment, Prosthetics, Orthotics Supplies (DMEPOS) Competitive Bidding Bid Surety Bonds, State Licensure and Appeals Process for a Breach of DMEPOS Competitive Bidding Program Contract Action 5. Durable Medical Equipment, Prosthetics, Orthotics Supplies (DMEPOS) Competitive Bidding Program and Fee Schedule Adjustments B. Summary of the Major Provisions 1. ESRD PPS 2. Coverage and Payment for Renal Dialysis Services Furnished to Individuals With AKI 3. ESRD QIP 4. DMEPOS Competitive Bidding Bid Surety Bonds, State Licensure and Appeals Process for a Breach of DMEPOS Competitive Bidding Program Contract Actions 5. DMEPOS Competitive Bidding Program and Fee Schedule Adjustments C. Summary of Cost and Benefits 1. Impacts of the Final ESRD PPS 2. Impact of the Final Coverage and Payment for Renal Dialysis Services Furnished to Individuals With AKI 3. Impacts of the Final ESRD QIP 4. Impacts of the Final DMEPOS Competitive Bidding Bid Surety Bonds, State Licensure and Appeals Process for a Breach of DMEPOS Competitive Bidding Program Contract Action 5. Impacts of the Final DMEPOS Competitive Bidding Program and Fee Schedule Adjustments
    II. Calendar Year (CY) 2017 End-Stage Renal Disease (ESRD) Prospective Payment System (PPS) A. Background 1. Statutory Background 2. System for Payment of Renal Dialysis Services 3. Updates to the ESRD PPS B. Summary of the Proposed Provisions, Public Comments, and Responses to Comments on the Calendar Year (CY) 2017 ESRD PPS 1. Payment for Hemodialysis When More Than 3 Treatments Are Furnished per Week a. Background b. Payment Methodology for HD When More Than 3 Treatments Are Furnished per Week c. Applicability to Medically Justified Treatments d. Applicability to Home and Self-Dialysis Training Treatments 2. Home and Self-Dialysis Training Add-On Payment Adjustment a. Background b. Analysis of ESRD Facility Claims Data c. Technical Correction of the Total Training Payment in the CY 2016 ESRD PPS Final Rule d. Analysis of ESRD Cost Report Data e. Final Increase to the Home and Self-Dialysis Training Add-On Payment Adjustment 3. Final CY 2017 ESRD PPS Update a. Final CY 2017 ESRD Market Basket Update, Productivity Adjustment, and Labor-Related Share for the ESRD PPS b. The Final CY 2017 ESRD PPS Wage Indices i. Annual Update of the Wage Index ii. Application of the Wage Index Under the ESRD PPS c. CY 2017 Update to the Outlier Policy i. CY 2017 Update to the Outlier Services MAP Amounts and Fixed-Dollar Loss Amounts ii. Outlier Percentage d. Update of the ESRD PPS Base Rate for CY 2017 i. Background ii. Payment Rate Update for CY 2017 4. Miscellaneous Comments
    III. Final Coverage and Payment for Renal Dialysis Services Furnished to Individuals With Acute Kidney Injury (AKI) A. Background B. Summary of the Proposed Provisions, Public Comments, and Responses to Comments on the Coverage and Payment for Renal Dialysis Services Furnished to Individuals With Acute Kidney Injury (AKI) C. Final Payment Policy for Renal Dialysis Services Furnished to Individuals With AKI 1. Definition of “Individual With Acute Kidney Injury” 2. The Payment Rate for AKI Dialysis 3. Geographic Adjustment Factor 4. Other Adjustments to the AKI Payment Rate 5. Renal Dialysis Services Included in the AKI Payment Rate D. Applicability of ESRD PPS Policies to AKI Dialysis 1. Uncompleted Dialysis Treatment 2. Home and Self-Dialysis 3. Vaccines and Their Administration E. Monitoring of Beneficiaries With AKI Receiving Dialysis in ESRD Facilities F. AKI and the ESRD Conditions for Coverage G. ESRD Facility Billing for AKI Dialysis H. Announcement of AKI Payment Rate in Future Years
    IV. End-Stage Renal Disease (ESRD) Quality Incentive Program (QIP) A. Background B. Summary of the Proposed Provisions, Public Comments, and Responses to Comments on the End-Stage Renal Disease (ESRD) Quality Incentive Program (QIP) C. Requirements for the Payment Year (PY) 2018 ESRD QIP 1. Small Facility Adjuster (SFA) Policy for PY 2018 2. Changes to the Hypercalcemia Clinical Measure D. Requirements for the PY 2019 ESRD QIP 1. New Measures for the PY 2019 ESRD QIP a. Reintroduction of the Expanded NHSN Dialysis Event Reporting Measure b. Scoring the NHSN Dialysis Event Reporting Measure 2. New Measure Topic Beginning With the PY 2019 ESRD QIP—NHSN BSI Measure Topic 3. New Safety Measure Domain 4. Scoring for the NHSN BSI Measure Topic 5. Performance Standards, Achievement Thresholds, and Benchmarks for the Clinical Measures Finalized for the PY 2019 ESRD QIP 6. Weighting for the Safety Measure Domain and Clinical Measure Domain for PY 2019 7. Example of the Final PY 2019 ESRD QIP Scoring Methodology 8. Payment Reductions for the PY 2019 ESRD QIP 9. Data Validation E. Requirements for the PY 2020 ESRD QIP 1. Replacement of the Mineral Metabolism Reporting Measure Beginning With the PY 2020 Program Year 2. Measures for the PY 2020 ESRD QIP a. PY 2019 Measures Continuing for PY 2020 and Future Payment Years b. New Clinical Measures Beginning With the PY 2020 ESRD QIP i. Standardized Hospitalization Ratio (SHR) Clinical Measure c. New Reporting Measures Beginning With the PY 2020 ESRD QIP i. Serum Phosphorus Reporting Measure ii. Ultrafiltration Rate Reporting Measure 3. Performance Period for the PY 2020 ESRD QIP 4. Performance Standards, Achievement Thresholds, and Benchmarks for the PY 2020 ESRD QIP a. Performance Standards, Achievement Thresholds, and Benchmarks for the Clinical Measures in the PY 2020 ESRD QIP b. Estimated Performance Standards, Achievement Thresholds, and Benchmarks for the Clinical Measures Proposed for the PY 2020 ESRD QIP c. Performance Standards for the PY 2020 Reporting Measures 5. Scoring the PY 2020 ESRD QIP a. Scoring Facility Performance on Clinical Measures Based on Achievement b. Scoring Facility Performance on Clinical Measures Based on Improvement c. Scoring the ICH CAHPS Clinical Measure d. Calculating Facility Performance on Reporting Measures 6. Weighting the Clinical Measure Domain, and Weighting the Total Performance Score a. Weighting of the Clinical Measure Domain for PY 2020 b. Weighting the Total Performance Score 7. Example of the PY 2020 ESRD QIP Scoring Methodology 8. Minimum Data for Scoring Measures for the PY 2020 ESRD QIP 9. Payment Reductions for the PY 2020 ESRD QIP F. Future Policies and Measures Under Consideration
    V. Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS) Competitive Bidding Program (CBP) A. Background B. Summary of the Proposed Provisions, Public Comments, and Responses to Comments on the DMEPOS CBP 1. Bid Surety Bond Requirement 2. State Licensure Requirement 3. Appeals Process for a DMEPOS Competitive Bidding Breach of Contract Action
    VI. Method for Adjusting DMEPOS Fee Schedule Amounts for Similar Items With Different Features Using Information From Competitive Bidding Programs (CBPs) A. Background 1. Fee Schedule Payment Basis for Certain DMEPOS 2. DMEPOS Competitive Bidding Programs Payment Rules 3. Methodologies for Adjusting Payment Amounts Using Information From the DMEPOS Competitive Bidding Program a. Adjusted Fee Schedule Amounts for Areas Within the Contiguous United States b. Adjusted Fee Schedule Amounts for Areas Outside the Contiguous United States c. Adjusted Fee Schedule Amounts for Items Included in 10 or Fewer CBAs d. Updating Adjusted Fee Schedule Amounts e. Method for Avoiding HCPCS Price Inversions When Adjusting Fee Schedule Amounts Using Information From the DMEPOS Competitive Bidding Program B. Summary of the Proposed Provisions on the Method for Adjusting DMEPOS Fee Schedule Amounts for Similar Items With Different Features Using Information From Competitive Bidding Programs C. Response to Comments on the Method for Adjusting DMEPOS Fee Schedule Amounts for Similar Items With Different Features Using Information From Competitive Bidding Programs
    VII. Submitting Bids and Determining Single Payment Amounts for Certain Groupings of Similar Items With Different Features Under the Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS) Competitive Bidding Program (CBP) A. Background on the DMEPOS CBPs B. Summary of the Proposed Provisions on Submitting Bids and Determining Single Payment Amounts for Certain Groupings of Similar Items With Different Features Under the DMEPOS Competitive Bidding Program C. Response to Comments on Submitting Bids and Determining Single Payment Amounts for Certain Groupings of Similar Items With Different Features Under the DMEPOS Competitive Bidding Program
    VIII. Bid Limits for Individual Items Under the Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS) Competitive Bidding Program (CBP) A. Background B. Summary of the Proposed Provisions, Public Comments, and Responses to Comments on the Bid Limits for Individual Items Under the Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS) Competitive Bidding Program (CBP) C. Response to Comments on Bid Limits for Individual Items Under the Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS) Competitive Bidding Program (CBP)
    IX. Access to Care Issues for DME A. Background B. Summary of Public Comments, and Responses to Comments on Access to Care Issues for DME C. Provisions of Request for Information
    X. Comprehensive End-Stage Renal Disease Care Model and Future Payment Models A. Background B. Summary of the Proposed Provisions, Public Comments, and Responses to Comments on the Comprehensive End-Stage Renal Disease Care Model and Future Payment Models C. Provisions of the Notice
    XI. Technical Correction for 42 CFR 413.194 and 413.215
    XII. Waiver of Proposed Rulemaking
    XIII. Advancing Health Information Exchange
    XV. Collection of Information Requirements A. Legislative Requirement for the Solicitation of Comments B. Requirement in Regulation Text C. Additional Information Collection Requirements 1. ESRD QIP a. Wage Estimates b. Time Required To Submit Data Based on Reporting Requirements c. Data Validation Requirements for the PY 2019 ESRD QIP d. Ultrafiltration Rate Reporting Measure
    XVI. Economic Analyses A. Regulatory Impact Analysis 1. Introduction 2. Statement of Need 3. Overall Impact B. Detailed Economic Analysis 1. CY 2017 End-Stage Renal Disease Prospective Payment System a. Effects on ESRD Facilities b. Effects on Other Providers c. Effects on the Medicare Program d. Effects on Medicare Beneficiaries e. Alternatives Considered 2. Coverage and Payment for Renal Dialysis Services Furnished to Individuals With AKI a. Effects on ESRD Facilities b. Effects on Other Providers c. Effects on the Medicare Program d. Effects on Medicare Beneficiaries e. Alternatives Considered 3. End-Stage Renal Disease Quality Incentive Program a. Effects of the PY 2020 QIP 4. DMEPOS Competitive Bidding Bid Surety Bonds, State Licensure and Appeals Process for a Breach of DMEPOS Competitive Bidding Program Contract Action a. Effects on Competitive Bidding Program Suppliers b. Effects on the Medicare Program c. Effects on Medicare Beneficiaries d. Alternatives Considered 5. Other DMEPOS Provisions a. Effects of the Method for Adjusting DMEPOS Fee Schedule Amounts for Similar Items With Different Features Using Information From the DMEPOS Competitive Bidding Programs b. Effects of the Final Rules Determining Single Payment Amounts for Similar Items With Different Features Under the DMEPOS Competitive Bidding Program c. Effects of the Revisions to the Bid Limits Under the DMEPOS Competitive Bidding Program C. Accounting Statement
    XVII. Regulatory Flexibility Act Analysis
    XVIII. Unfunded Mandates Reform Act Analysis
    XIX. Federalism Analysis
    XX. Congressional Review Act
    Regulations Text

    Acronyms

    Because of the many terms to which we refer by acronym in this final rule, we are listing the acronyms used and their corresponding meanings in alphabetical order below:

    AAPM Advanced Alternative Payment Model ABLE The Achieving a Better Life Experience Act of 2014 AHRQ Agency for Healthcare Research and Quality AKI Acute Kidney Injury ANOVA Analysis of Variance APM Alternative Payment Model ARM Adjusted Ranking Metric ASP Average Sales Price ATRA The American Taxpayer Relief Act of 2012 BEA Bureau of Economic Analysis BLS Bureau of Labor Statistics BMI Body Mass Index BSA Body Surface Area BSI Bloodstream Infection CB Consolidated Billing CBA Competitive Bidding Area CBP Competitive Bidding Program CBSA Core Based Statistical Area CCN CMS Certification Number CDC Centers for Disease Control and Prevention CEC Comprehensive ESRD Care CFR Code of Federal Regulations CHIP The Children's Health Insurance Program CIP Core Indicators Project CKD Chronic Kidney Disease CLABSI Central Line Access Bloodstream Infections CMS Centers for Medicare & Medicaid Services CPM Clinical Performance Measure CPT Current Procedural Terminology CROWNWeb Consolidated Renal Operations in a Web-Enabled Network CY Calendar Year DMEPOS Durable Medical Equipment, Prosthetics, Orthotics Supplies DFR Dialysis Facility Report EOD Every Other Day ESA Erythropoiesis stimulating agent ESCO End-Stage Renal Disease Seamless Care Organization ESRD End-Stage Renal Disease ESRDB End-Stage Renal Disease Bundled ESRD PPS End-Stage Renal Disease Prospective Payment System ESRD QIP End-Stage Renal Disease Quality Incentive Program FDA Food and Drug Administration HAIs Healthcare-Acquired Infections HCFA Health Care Financing Administration HCPCS Healthcare Common Procedure Coding System HD Hemodialysis HHD Home Hemodialysis HHS Department of Health and Human Services HCC Hierarchical Comorbidity Conditions HRQOL Health-Related Quality of Life ICD International Classification of Diseases ICD-9-CM International Classification of Disease, 9th Revision, Clinical Modification ICD-10-CM International Classification of Disease, 10th Revision, Clinical Modification ICH CAHPS In-Center Hemodialysis Consumer Assessment of Healthcare Providers and Systems IGI IHS Global Insight IIC Inflation-Indexed Charge IPPS Inpatient Prospective Payment System IUR Inter-Unit Reliability KDIGO Kidney Disease: Improving Global Outcomes KDOQI Kidney Disease Outcome Quality Initiative KDQOL Kidney Disease Quality of Life Kt/V A measure of dialysis adequacy where K is dialyzer clearance, t is dialysis time, and V is total body water volume LCD Local Coverage Determination LDO Large Dialysis Organization MAC Medicare Administrative Contractor MAP Medicare Allowable Payment MCP Monthly Capitation Payment MDO Medium Dialysis Organization MFP Multifactor Productivity MIPPA Medicare Improvements for Patients and Providers Act of 2008 (Pub. L. 
    110-275) MLR Minimum Lifetime Requirement MMA Medicare Prescription Drug, Improvement and Modernization Act of 2003 MMEA Medicare and Medicaid Extenders Act of 2010 Public Law 111-309 MSA Metropolitan Statistical Areas NHSN National Healthcare Safety Network NQF National Quality Forum NQS National Quality Strategy NAMES National Association of Medical Equipment Suppliers OBRA Omnibus Budget Reconciliation Act OMB Office of Management and Budget PAMA Protecting Access to Medicare Act of 2014 PC Product Category PD Peritoneal Dialysis PEN Parenteral and Enteral Nutrition PFS Physician Fee Schedule PPI Producer Price Index PPS Prospective Payment System PSR Performance Score Report PY Payment Year QIP Quality Incentive Program REMIS Renal Management Information System RFA Regulatory Flexibility Act RN Registered Nurse SBA Small Business Administration SFA Small Facility Adjuster SPA Single Payment Amount SRR Standardized Readmission Ratio SSA Social Security Administration STrR Standardized Transfusion Ratio The Act Social Security Act The Affordable Care Act The Patient Protection and Affordable Care Act The Secretary Secretary of the Department of Health and Human Services TPEA Trade Preferences Extension Act of 2015 TPS Total Performance Score URR Urea Reduction Ratio VAT Vascular Access Type VBP Value Based Purchasing

    I. Executive Summary

    A. Purpose

    1. End-Stage Renal Disease (ESRD) Prospective Payment System (PPS)

    On January 1, 2011, we implemented the ESRD PPS, a case-mix adjusted, bundled prospective payment system for renal dialysis services furnished by ESRD facilities. This rule updates and makes revisions to the ESRD PPS for calendar year (CY) 2017. Section 1881(b)(14) of the Social Security Act (the Act), as added by section 153(b) of the Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) (Pub. L. 110-275), and section 1881(b)(14)(F) of the Act, as added by section 153(b) of MIPPA and amended by section 3401(h) of the Affordable Care Act (Pub. L. 111-148), established that beginning in CY 2012, and each subsequent year, the Secretary shall annually increase payment amounts by an ESRD market basket increase factor, reduced by the productivity adjustment described in section 1886(b)(3)(B)(xi)(II) of the Act.

    2. Coverage and Payment for Renal Dialysis Services Furnished to Individuals With Acute Kidney Injury (AKI)

    On June 29, 2015, the President signed the Trade Preferences Extension Act of 2015 (TPEA) (Pub. L. 114-27). Section 808(a) of TPEA amended section 1861(s)(2)(F) of the Act to provide coverage for renal dialysis services furnished on or after January 1, 2017, by a renal dialysis facility or a provider of services paid under section 1881(b)(14) to an individual with AKI. Section 808(b) of TPEA amended section 1834 of the Act by adding a new subsection (r), which provides for payment for renal dialysis services furnished by renal dialysis facilities or providers of services paid under section 1881(b)(14) to individuals with AKI at the ESRD PPS base rate beginning January 1, 2017.

    3. End-Stage Renal Disease (ESRD) Quality Incentive Program (QIP)

    This rule also sets forth requirements for the ESRD QIP, including for payment years (PYs) 2018, 2019, and 2020. The program is authorized under section 1881(h) of the Social Security Act (the Act). The ESRD QIP is the most recent step in fostering improved patient outcomes by establishing incentives for dialysis facilities to meet or exceed performance standards established by CMS.

    4. Durable Medical Equipment, Prosthetics, Orthotics Supplies (DMEPOS) Competitive Bidding Bid Surety Bonds, State Licensure and Appeals Process for Breach of DMEPOS Competitive Bidding Program Contract Action

    This rule implements statutory requirements for Bid Surety Bonds and State Licensure. We are revising the appeals regulation to expand suppliers' appeal rights in the event of a breach of contract determination to allow suppliers to appeal any breach of contract action CMS takes, rather than just a termination action.

    5. Durable Medical Equipment, Prosthetics, Orthotics and Supplies (DMEPOS) Competitive Bidding Program and Fee Schedule Adjustments

    This rule revises the methodologies for adjusting DMEPOS fee schedule amounts for certain groupings of similar items with different features using information from DMEPOS competitive bidding programs (CBPs), for submitting bids and determining single payment amounts for certain groupings of similar items with different features under the DMEPOS CBPs, and for establishing bid limits for individual items under the DMEPOS CBP.

    B. Summary of the Major Provisions 1. ESRD PPS

    Update to the ESRD PPS base rate for CY 2017: For CY 2017, the ESRD PPS base rate is $231.55. This amount reflects the final market basket increase (0.55 percent) and the application of both the wage index budget-neutrality adjustment factor (0.999781) and the training budget-neutrality adjustment factor (0.999737).
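    As an arithmetic check of how these factors combine, the sketch below assumes the CY 2016 base rate of $230.39, which is not restated in this excerpt; CMS's actual computation may apply intermediate rounding, but the unrounded product lands on the same figure.

```python
# Arithmetic check of the CY 2017 ESRD PPS base rate update.
# The CY 2016 base rate of $230.39 is an assumption here (not restated in this excerpt).
cy2016_base_rate = 230.39
market_basket_update = 1.0055   # 0.55 percent increase
wage_index_bn_factor = 0.999781
training_bn_factor = 0.999737

cy2017_base_rate = (cy2016_base_rate * market_basket_update
                    * wage_index_bn_factor * training_bn_factor)
print(f"${cy2017_base_rate:.2f}")  # $231.55
```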

    Annual update to the wage index and wage index floor: We adjust wage indices on an annual basis using the most current hospital wage data and the latest core-based statistical area (CBSA) delineations to account for differing wage levels in areas in which ESRD facilities are located. For CY 2017, we did not propose any changes to the application of the wage index floor and we will continue to apply the current wage index floor (0.400) to areas with wage index values below the floor.

    Update to the outlier policy: Consistent with our policy to annually update the outlier policy using the most current data, we are updating the outlier services fixed-dollar loss amounts for adult and pediatric patients and Medicare Allowable Payments (MAPs) for adult and pediatric patients for CY 2017 using 2015 claims data. Based on the use of more current data, the fixed-dollar loss amount for pediatric beneficiaries will increase from $62.19 to $68.49 and the MAP amount will decrease from $39.20 to $38.29, as compared to CY 2016 values. For adult beneficiaries, the fixed-dollar loss amount will decrease from $86.97 to $82.92 and the MAP amount will decrease from $50.81 to $45.00. The 1 percent target for outlier payments was not achieved in CY 2015. We believe using CY 2015 claims data to update the outlier MAP and fixed-dollar loss amounts for CY 2017 will increase payments for ESRD beneficiaries requiring higher resource utilization in accordance with a 1 percent outlier percentage.

    Payment for hemodialysis when more than 3 treatments are furnished per week: We are not finalizing an equivalency payment for hemodialysis (HD) when more than 3 treatments are furnished in a week, similar to what is applied to peritoneal dialysis (PD). In response to comments received from stakeholders, we have determined that the burden placed on providers would be substantial and we are exploring alternate avenues for collecting these data.

    The home and self-dialysis training add-on payment adjustment: We are finalizing an increase in the total number of hours of training by an RN (registered nurse) for PD and HD that is accounted for by the home and self-dialysis training add-on payment adjustment (hereinafter referred to as the home dialysis training add-on). The current amount of the home dialysis training add-on is $50.16, which reflects 1.5 hours of training by a nurse per treatment. We calculated the increase based on the average treatment times and weights based on utilization for each modality. We used treatment times as proxies for the total time spent by nurses training beneficiaries for home or self-dialysis in calculating the increase to the home dialysis training add-on. Based on these proxies, for CY 2017, we have increased the hours of per-treatment training time provided by a nurse that is accounted for by the home dialysis training add-on to 2.66 hours. We also updated the national hourly wage for a nurse providing dialysis training for 2017 to $35.94, resulting in a home and self-dialysis training add-on payment adjustment amount of $95.60.
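
    The add-on amount follows directly from the hours and hourly wage above; the short check below reproduces it. Any wage-index adjustment that applies when the add-on is paid is not shown here.

        # Arithmetic check of the CY 2017 home dialysis training add-on amount.
        training_hours = 2.66       # per-treatment nurse training time
        nurse_hourly_wage = 35.94   # updated national hourly wage for a nurse
        print(f"{training_hours * nurse_hourly_wage:.2f}")  # 95.60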

    2. Coverage and Payment for Renal Dialysis Services Furnished to Individuals With AKI

    We are implementing the TPEA amendments to sections 1834(r) and 1861(s)(2)(F) of the Act by finalizing coverage of renal dialysis services furnished by renal dialysis facilities paid under section 1881(b)(14) of the Act to individuals with AKI. We will pay ESRD facilities for renal dialysis services furnished to individuals with AKI at the amount of the ESRD PPS base rate, as adjusted by the ESRD PPS wage index. In addition, drugs, biologicals, and laboratory services that ESRD facilities are certified to furnish, but that are not renal dialysis services, may be paid for separately when furnished by ESRD facilities to individuals with AKI. Furthermore, because AKI patients are often under the care of a hospital, physician, or other practitioner, these providers and practitioners may continue to bill Medicare for services in the same manner as they did before the payment rate for renal dialysis services furnished by dialysis facilities to individuals with AKI was adopted.

    3. ESRD QIP

    This rule sets forth requirements for the ESRD QIP for payment years (PYs) 2018, 2019, and 2020.

    Hypercalcemia Clinical Measure: We proposed to make two substantive updates to the technical specifications for the Hypercalcemia clinical measure beginning with PY 2018, as recommended during the measure maintenance process at the National Quality Forum (NQF). In response to comments received, we are finalizing these changes but are delaying their implementation until PY 2019. First, we are adding plasma as an acceptable substrate in addition to serum calcium. Second, we are amending the denominator definition to include patients regardless of whether any serum calcium values were reported at the facility during the 3-month study period. These changes will ensure that, beginning in PY 2019, the measure aligns with the NQF-endorsed measure and continues to satisfy the requirements of the Protecting Access to Medicare Act of 2014 (PAMA), which requires that the ESRD QIP measure set include measures (outcomes-based, to the extent feasible) that are specific to the conditions treated with oral-only drugs.

    New Requirements for PY 2019: Beginning with PY 2019, we are reintroducing the National Healthcare Safety Network (NHSN) Dialysis Event Reporting Measure into the ESRD QIP measure set. Additionally, beginning with PY 2019, we are creating a new NHSN Bloodstream Infection (BSI) Measure Topic, which will consist of the NHSN Dialysis Event Reporting Measure and the existing NHSN BSI Clinical Measure. We are also establishing a new Safety Measure Domain, which will be separate from, and in addition to, the existing Clinical Measure and Reporting Measure Domains for the purposes of scoring in the ESRD QIP. The Safety Measure Domain will initially consist of the NHSN BSI Measure Topic.

    PY 2020 Measure Set: Beginning with PY 2020, we are replacing the Mineral Metabolism Reporting Measure with the newly finalized Serum Phosphorus Reporting Measure because replacing this measure is consistent with our intention to increasingly rely on CROWNWeb as the data source used to calculate measures in the ESRD QIP. Additionally, we are adopting two new measures: (1) The Standardized Hospitalization Ratio (SHR) Clinical Measure and (2) the Ultrafiltration Rate Reporting Measure.

    Weighting for the Clinical Measure Domain, the Reporting Measure Domain and the Safety Measure Domain: With the addition of the Safety Measure Domain into the ESRD QIP, we are making changes to the weighting of the Clinical Measure Domain and the Reporting Measure Domain, and we are establishing weights for the Safety Measure Domain for PY 2019 and for PY 2020.

    Specifically, for PY 2019, we are assigning 15 percent of a facility's total performance score (TPS) to the Safety Measure Domain, 75 percent of the TPS to the Clinical Measure Domain and 10 percent to the Reporting Measure Domain. To accommodate the removal of the Safety Subdomain from the Clinical Measure Domain, we are adjusting individual measure weights for the measures that remain in the Clinical Measure Domain. In response to comments received, for PY 2020, we are maintaining the weight of the Safety Measure Domain at 15 percent of a facility's TPS rather than at 10 percent as proposed.
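
    A simplified sketch of the PY 2019 weighting is shown below. Only the domain weights come from this rule; treating each domain score as a single number on a common scale is an assumption made for illustration.

        # Sketch of the PY 2019 total performance score (TPS) weighting.
        PY2019_WEIGHTS = {"clinical": 0.75, "safety": 0.15, "reporting": 0.10}

        def total_performance_score(domain_scores, weights=PY2019_WEIGHTS):
            """Weighted sum of domain scores; assumes scores share a common scale."""
            return sum(weights[d] * score for d, score in domain_scores.items())

        # Hypothetical domain scores:
        print(total_performance_score({"clinical": 80, "safety": 90, "reporting": 70}))  # 80.5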

    Data Validation: In section IV.C.9 of this final rule, we set forth the updates to the data validation program in the ESRD QIP. For PY 2019, we are continuing the pilot validation study for validation of CROWNWeb data. Under this continued validation study, we are continuing to use the same methodology used for the PY 2017 and PY 2018 ESRD QIP. We will sample the same number of records (approximately 10 per facility) from the same number of facilities (that is, 300) during CY 2017. Once we have developed and adopted a methodology for validating the CROWNWeb data, we intend to consider whether payment reductions under the ESRD QIP should be based, in part, on whether a facility has met our standards for data validation.

    For PY 2019, we are increasing the size of the NHSN BSI Data Validation study. Specifically, we will randomly select 35 facilities to participate in an NHSN dialysis event validation study for two quarters of data reported in CY 2017. A CMS contractor will send these facilities requests for medical records for all patients with “candidate events” during the evaluation period, as well as randomly selected patient records. Each facility selected will be required to submit 10 records total to the validation contractor. The CMS contractor will utilize a methodology for reviewing and validating the candidate events and will analyze those records to determine whether the facility reported dialysis events for those patients in accordance with the NHSN Dialysis Event Protocol. Information from the validation study may be used to develop a methodology to score facilities based on the accuracy of their reporting of the NHSN BSI measure.

    4. DMEPOS Competitive Bidding Bid Surety Bonds, State Licensure and Appeals Process for a Breach of DMEPOS Competitive Bidding Program Contract Action

    This final rule implements statutory requirements for the DMEPOS CBP for bid surety bonds and state licensure. In addition, we are finalizing a definition for the term “bidding entity” for purposes of the DMEPOS CBP. We also are finalizing revisions to the appeals regulations to expand suppliers' appeal rights in the event of a breach of contract determination to allow suppliers to appeal any breach of contract action CMS takes, rather than just a termination action. The final rule establishes the following:

    • A bidding entity must obtain a bid surety bond meeting certain specifications from an authorized surety on the Department of the Treasury's Listing of Certified Companies and must submit proof of the bond by the deadline for bid submission. We define the term “bidding entity” to mean the entity whose legal business name is identified in the “Form A: Business Organization Information” section of the bid.

    • If the bidding entity is offered a contract for any product category for a competitive acquisition area (herein referred to as a “Competitive Bidding Area” or “CBA”), its composite bid for that product category and area is at or below the median composite bid rate for all bidding entities included in the calculation of the single payment amounts for the product category/CBA combination (herein also referred to as the “competition”), and the entity does not accept the contract offered, the entity's bid surety bond for the applicable CBA will be forfeited and CMS will collect on the bid surety bond via Electronic Funds Transfer from the respective authorized surety (the forfeiture conditions are illustrated in the sketch following this list). If the forfeiture conditions are not met, the bond liability will be returned to the bidding entity. Bidding entities that provide a falsified bid surety bond will be prohibited from participating in the current round of the DMEPOS CBP in which they submitted a bid and from bidding in the next round of the CBP, and will also be referred to the Office of Inspector General and the Department of Justice for further investigation.

    • We are conforming the language of our regulation at 42 CFR 414.414(b)(3) to the language of section 1847(b)(2)(A)(v) of the Act, as added by section 522 of MACRA, which requires bidding entities to meet applicable State licensure requirements in order to be eligible for a DMEPOS CBP contract. We note, however, that this does not reflect a change in policy as CMS already has a regulation in place that requires suppliers to meet applicable State licensure requirements.

    • We are finalizing changes to § 414.423 to extend the appeals process to all breach of contract actions taken by CMS specified in § 414.422(g)(2). We are finalizing revisions to § 414.422(g)(2) to eliminate certain breach of contract actions. We also are finalizing revisions to § 414.423(l) to describe the effects of certain breach of contract actions that CMS takes.
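
    The forfeiture conditions described in the second bullet above reduce to a simple test, sketched below. The parameter names are hypothetical; only the decision logic (a contract offered, a composite bid at or below the median used to set the single payment amounts, and the offer not accepted) comes from this rule.

        # Minimal sketch of the bid surety bond forfeiture test for one CBA.
        def bond_is_forfeited(offered_contract, composite_bid, median_composite_bid, accepted_offer):
            """Return True when the bid surety bond for the CBA is forfeited."""
            return (offered_contract
                    and composite_bid <= median_composite_bid
                    and not accepted_offer)

        # Example: contract offered, bid below the median, offer declined -> bond forfeited.
        print(bond_is_forfeited(True, 95.0, 100.0, False))  # True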

    5. DMEPOS Competitive Bidding Program and Fee Schedule Adjustments

    This final rule sets forth requirements for the CBP and Fee Schedule Adjustments.

    • Methodologies for Adjusting DMEPOS Fee Schedule Amounts for Certain Groupings of Similar Items with Different Features using Information from Competitive Bidding Programs: Within the Healthcare Common Procedure Coding System (HCPCS), there are many instances where there are multiple codes for an item that are distinguished by the addition of a feature (for example, non-powered versus powered mattress, Group 1 versus Group 2 power wheelchair, pump without alarm versus pump with alarm, or walker without wheels versus walker with wheels). Under CBPs, the code with the higher utilization (typically the item with additional features and higher fee schedule amounts) receives a higher weight, and the bid for this item has a greater impact on the supplier's composite bid than the bids for the less frequently used codes. This has resulted in price inversions, where the single payment amounts (SPAs) for the item without the feature are higher than the SPAs for the item with the feature. This could lead to program vulnerability by shifting beneficiaries from products with features to less appropriate products without the features, because the product without the features receives higher payment under competitive bidding. We are finalizing provisions of § 414.210 to limit SPAs for certain items without a feature to the weighted average of the SPAs for the items both with and without the feature, prior to using the SPAs to adjust the fee schedule amounts for certain groupings of similar items specified below (see the sketch following this list). The item weights will be the same weights used in calculating the composite bids under the CBP.

    • Submitting Bids and Determining Single Payment Amounts for Certain Groupings of Similar Items with Different Features under the DMEPOS CBP: This rule addresses the price inversions under competitive bidding to prevent situations where beneficiaries receive items with fewer features at a higher price than items with more features. In addition to affecting the appropriateness of items supplied to beneficiaries, these price inversions also undermine the CBP and diminish the savings intended from implementation of the program. We are finalizing provisions of § 414.412 to add a lead item bidding method, under which all of the HCPCS codes for similar items with different features will be grouped together and priced relative to the bid for the lead item, in order to prevent price inversions under the DMEPOS CBPs (see the sketch following this list). We are adopting this as an alternative to the current bidding method so that CMS can apply it in situations where groupings of similar items have resulted in price inversions based on past experience. This alternative method will only replace the current method of bidding for select groupings of similar items within product categories.

    • Bid Limits for Individual Items under the DMEPOS CBP: Current regulations require that bids submitted by suppliers under the CBP be lower than the amount that would otherwise apply (that is, the fee schedule amount). This ensures that total payments expected to be made to contract suppliers in a CBA are less than the total amounts that would otherwise be paid, as required by section 1847(b)(2)(A)(iii) of the Act for awarding contracts under the program in an area. Beginning in 2016, the fee schedule amounts for DMEPOS items and services are adjusted based on information from the CBPs. We indicated in the final rule published in the Federal Register on November 6, 2014 (79 FR 66232), that these adjusted fee schedule amounts become the bid limits for future competitions. We have heard concerns that as the amounts paid under CBPs decline, this may ultimately make it difficult for suppliers to bid below the adjusted fee schedule amounts and accept contract offers at the median bid level. To avoid this situation and enhance the long term viability of the CBPs, we are finalizing revisions to the regulations to limit bids for future competitions to the fee schedule amounts that would otherwise apply if CBPs had not been implemented, prior to making adjustments to the fee schedule amounts using information from CBPs. This will allow suppliers to take into account both decreases and increases in costs in determining their bids, while ensuring that payments under the CBPs do not exceed the amounts that would otherwise be paid had the DMEPOS CBP not been implemented.
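
    The sketch below illustrates the two calculations referenced in the first and second bullets above: capping the SPA for an item without a feature at the weighted average of the SPAs for the items with and without the feature, and pricing non-lead codes relative to the lead item bid. The weights, amounts, and codes are hypothetical, and pricing non-lead items using the ratio of their fee schedule amounts to the lead item's fee schedule amount is an illustrative assumption rather than a statement of the final methodology.

        # Sketch 1: limit the SPA for the item without the feature to the
        # weighted average of the SPAs with and without the feature.
        def cap_spa_without_feature(spa_without, spa_with, weight_without, weight_with):
            weighted_avg = ((spa_without * weight_without + spa_with * weight_with)
                            / (weight_without + weight_with))
            return min(spa_without, weighted_avg)

        # Hypothetical price inversion: $60 without the feature vs. $55 with it.
        print(round(cap_spa_without_feature(60.0, 55.0, 0.2, 0.8), 2))  # 56.0

        # Sketch 2: price each code in a grouping relative to the lead item bid,
        # here using the ratio of (hypothetical) fee schedule amounts.
        def lead_item_prices(lead_item_bid, fee_schedule_amounts, lead_code):
            lead_fee = fee_schedule_amounts[lead_code]
            return {code: round(lead_item_bid * fee / lead_fee, 2)
                    for code, fee in fee_schedule_amounts.items()}

        fees = {"lead-item": 100.0, "item-without-feature": 80.0}
        print(lead_item_prices(75.0, fees, "lead-item"))
        # {'lead-item': 75.0, 'item-without-feature': 60.0}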

    C. Summary of Costs and Benefits

    In section XV.A of this final rule, we set forth a detailed analysis of the impacts of the finalized changes for affected entities and beneficiaries. The impacts include the following:

    1. Impacts of the Final ESRD PPS

    The impact chart in section XV.B.1 of this final rule displays the estimated change in payments to ESRD facilities in CY 2017 compared to estimated payments in CY 2016. The overall impact of the CY 2017 changes is projected to be a 0.73 percent increase in payments. Hospital-based ESRD facilities have an estimated 0.9 percent increase in payments, compared with an estimated 0.7 percent increase for freestanding facilities.

    We estimate that the aggregate ESRD PPS expenditures will increase by approximately $80 million from CY 2016 to CY 2017. This reflects a $60 million increase from the payment rate update and a $20 million increase due to the updates to the outlier threshold amounts. As a result of the projected 0.73 percent overall payment increase, we estimate that there will be an increase in beneficiary co-insurance payments of 4.2 percent in CY 2017, which translates to approximately $10 million.

    2. Impact of the Final Coverage and Payment for Renal Dialysis Services Furnished to Individuals With AKI

    We anticipate an estimated $2 million being redirected from hospital outpatient departments to ESRD facilities in CY 2017 as a result of some AKI patients receiving renal dialysis services in the ESRD facility at the lower ESRD PPS base rate versus continuing to receive those services in the hospital outpatient setting.

    3. Impacts of the Final ESRD QIP

    The impact chart in section XVI.B.3.a of this final rule displays estimated QIP impacts for payment year (PY) 2020. The overall impact is an expected reduction in payment to all facilities of $31 million, with an estimated total facility burden for the collection of data of $91 million.

    4. Impacts of the Final DMEPOS Competitive Bidding Bid Surety Bonds, State Licensure and Appeals Process for a Breach of DMEPOS Competitive Bidding Program Contract Actions

    The DMEPOS CBP bidding entities will be impacted by the bid surety bond requirement, as they will be required to purchase a bid surety bond for each CBA in which they are submitting a bid. The state licensure requirement will have no new impact on the supplier community because it is already a Medicare DMEPOS supplier requirement, and the appeals process for a breach of DMEPOS CBP contract actions is expected to have a positive impact on suppliers.

    Overall, the bid surety bond requirement may have a positive financial impact on the program as CMS anticipates that the requirement will encourage all bidding entities to submit substantiated bids. However, there will be an administrative burden for implementation of the bid surety bond requirement for CMS. The final state licensure and appeals process for breach of DMEPOS CBP contract actions regulations will have minimal administrative costs.

    We do not anticipate that the final DMEPOS CBP regulations for bid surety bonds, state licensure, and the appeals process for breach of DMEPOS CBP contract actions will have an impact on Medicare beneficiaries.

    5. Impacts of the Final DMEPOS Competitive Bidding Program and Fee Schedule Adjustments

    The overall economic impact of the final changes to the DMEPOS CBPs and Fee Schedule Adjustments would be about $20 million in savings to the Part B Trust Fund over 5 years beginning January 1, 2017. The savings are a result of avoiding price inversions. This final rule should have a minor impact on suppliers in CBAs and in non-competitive bidding areas (non-CBAs). Beneficiaries would have lower coinsurance payments and receive the most appropriate items as a result of this final rule.

    II. Calendar Year (CY) 2017 End-Stage Renal Disease (ESRD) Prospective Payment System (PPS)

    A. Background

    1. Statutory Background

    On January 1, 2011, we implemented the End-Stage Renal Disease (ESRD) Prospective Payment System (PPS), a case-mix adjusted bundled PPS for renal dialysis services furnished by ESRD facilities as required by section 1881(b)(14) of the Social Security Act (the Act), as added by section 153(b) of the Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) (Pub. L. 110-275). Section 1881(b)(14)(F) of the Act, as added by section 153(b) of MIPPA and amended by section 3401(h) of the Patient Protection and Affordable Care Act (the Affordable Care Act) (Pub. L. 111-148), established that beginning with calendar year (CY) 2012, and each subsequent year, the Secretary of the Department of Health and Human Services (the Secretary) shall annually increase payment amounts by an ESRD market basket increase factor, reduced by the productivity adjustment described in section 1886(b)(3)(B)(xi)(II) of the Act.

    Section 632 of the American Taxpayer Relief Act of 2012 (ATRA) (Pub. L. 112-240) included several provisions that apply to the ESRD PPS. Section 632(a) of ATRA added section 1881(b)(14)(I) to the Act, which required the Secretary, by comparing per patient utilization data from 2007 with such data from 2012, to reduce the single payment for renal dialysis services furnished on or after January 1, 2014 to reflect the Secretary's estimate of the change in the utilization of ESRD-related drugs and biologicals (excluding oral-only ESRD-related drugs). Consistent with this requirement, in the CY 2014 ESRD PPS final rule we finalized $29.93 as the total drug utilization reduction and finalized a policy to implement the amount over a 3- to 4-year transition period (78 FR 72161 through 72170).

    Section 632(b) of ATRA prohibited the Secretary from paying for oral-only ESRD-related drugs and biologicals under the ESRD PPS prior to January 1, 2016. And section 632(c) of ATRA required the Secretary, by no later than January 1, 2016, to analyze the case-mix payment adjustments under section 1881(b)(14)(D)(i) of the Act and make appropriate revisions to those adjustments.

    On April 1, 2014, Congress enacted the Protecting Access to Medicare Act of 2014 (PAMA) (Pub. L. 113-93). Section 217 of PAMA included several provisions that apply to the ESRD PPS. Specifically, sections 217(b)(1) and (2) of PAMA amended sections 1881(b)(14)(F) and (I) of the Act and replaced the drug utilization adjustment that was finalized in the CY 2014 ESRD PPS final rule (78 FR 72161 through 72170) with specific provisions that dictated the market basket update for CY 2015 (0.0 percent) and how the market basket should be reduced in CYs 2016 through 2018.

    Section 217(a)(1) of PAMA amended section 632(b)(1) of ATRA to provide that the Secretary may not pay for oral-only ESRD-related drugs under the ESRD PPS prior to January 1, 2024. Section 217(a)(2) further amended section 632(b)(1) of ATRA by requiring that in establishing payment for oral-only drugs under the ESRD PPS, the Secretary must use data from the most recent year available. Section 217(c) of PAMA provided that as part of the CY 2016 ESRD PPS rulemaking, the Secretary shall establish a process for (1) determining when a product is no longer an oral-only drug; and (2) including new injectable and intravenous products into the ESRD PPS bundled payment.

    Finally, on December 19, 2014, the President signed the Stephen Beck, Jr., Achieving a Better Life Experience Act of 2014 (ABLE) (Pub. L. 113-295). Section 204 of ABLE amended section 632(b)(1) of ATRA, as amended by section 217(a)(1) of PAMA, to provide that payment for oral-only renal dialysis services cannot be made under the ESRD PPS bundled payment prior to January 1, 2025.

    2. System for Payment of Renal Dialysis Services

    Under the ESRD PPS, a single, per-treatment payment is made to an ESRD facility for all of the renal dialysis services defined in section 1881(b)(14)(B) of the Act and furnished to individuals for the treatment of ESRD in the ESRD facility or in a patient's home. We have codified our definitions of renal dialysis services at 42 CFR 413.171, and our other payment policies are included in the regulations in subpart H of 42 CFR part 413. The ESRD PPS base rate is adjusted for characteristics of both adult and pediatric patients and accounts for patient case-mix variability. The adult case-mix adjusters include five age categories, body surface area (BSA), low body mass index (BMI), onset of dialysis, and four comorbidity categories; the pediatric patient-level adjusters consist of two age categories and two dialysis modalities (42 CFR 413.235(a) and (b)).

    In addition, the ESRD PPS provides for three facility-level adjustments. The first payment adjustment accounts for ESRD facilities furnishing a low volume of dialysis treatments (42 CFR 413.232). The second adjustment reflects differences in area wage levels developed from Core Based Statistical Areas (CBSAs) (42 CFR 413.231). The third payment adjustment accounts for ESRD facilities furnishing renal dialysis services in a rural area (42 CFR 413.233).

    The ESRD PPS allows for a training add-on for home and self-dialysis modalities (42 CFR 413.235(c)). Lastly, the ESRD PPS provides additional payment for high cost outliers due to unusual variations in the type or amount of medically necessary care when applicable (42 CFR 413.237).
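
    As a simplified sketch of how these components fit together for a single treatment, the case-mix and facility-level adjusters are applied as multipliers to the base rate, with the training add-on and any outlier amount then added. The factor values below are hypothetical, and the wage index is shown as a single multiplier even though, in practice, it applies only to the labor-related share of the base rate.

        # Simplified sketch of a per-treatment ESRD PPS payment (hypothetical factors).
        from math import prod

        def per_treatment_payment(base_rate, patient_multipliers, facility_multipliers,
                                  training_add_on=0.0, outlier_amount=0.0):
            adjusted = base_rate * prod(patient_multipliers) * prod(facility_multipliers)
            return adjusted + training_add_on + outlier_amount

        # Hypothetical adult treatment: two case-mix multipliers and a wage index of 1.02.
        print(round(per_treatment_payment(231.55, [1.011, 1.114], [1.02]), 2))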

    3. Updates to the ESRD PPS

    Policy changes to the ESRD PPS are proposed and finalized annually in the Federal Register. The CY 2011 ESRD PPS final rule was published on August 12, 2010 in the Federal Register (75 FR 49030 through 49214). That rule implemented the ESRD PPS beginning on January 1, 2011 in accordance with section 1881(b)(14) of the Act, as added by section 153(b) of MIPPA, over a 4-year transition period. Since the implementation of the ESRD PPS, we have published annual rules to make routine updates, policy changes, and clarifications.

    On November 6, 2015, we published in the Federal Register a final rule (80 FR 68968 through 69077) titled, “Medicare Program; End-Stage Renal Disease Prospective Payment System, and Quality Incentive Program; Final Rule” (hereinafter referred to as the CY 2016 ESRD PPS final rule). In that final rule, we made a number of routine updates to the ESRD PPS for CY 2016, refined the ESRD PPS case-mix adjustments, implemented a drug designation process, updated the outlier policy, and made additional policy changes and clarifications. For a summary of the provisions in that final rule, we refer readers to the CY 2017 ESRD PPS proposed rule (81 FR 42809 through 42810).

    B. Summary of the Proposed Provisions, Public Comments, and Responses to Comments on the Calendar Year (CY) 2017 ESRD PPS

    The proposed rule, titled “End-Stage Renal Disease Prospective Payment System, Coverage and Payment for Renal Dialysis Services Furnished to Individuals with Acute Kidney Injury, End-Stage Renal Disease Quality Incentive Program, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program Bid Surety Bonds, State Licensure and Appeals Process for Breach of Contract Actions, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program and Fee Schedule Adjustments, Access to Care Issues for Durable Medical Equipment; and the Comprehensive End-Stage Renal Disease Care Model” (81 FR 42802 through 42880), hereinafter referred to as the CY 2017 ESRD PPS proposed rule, was published in the Federal Register on June 30, 2016, with a comment period that ended on August 23, 2016. In that proposed rule, for the ESRD PPS, we proposed to (1) make a number of annual updates for CY 2017, (2) increase the home and self-dialysis training add-on payment adjustment, (3) implement the statutory provisions set forth in the Trade Preferences Extension Act of 2015 (TPEA) amendments to the Act, and (4) utilize a payment equivalency for hemodialysis furnished more than 3 times per week. We received approximately 340 public comments on our proposals, including comments from ESRD facilities; national renal groups, nephrologists and patient organizations; patients and care partners; manufacturers; health care systems; and nurses.

    In this final rule, we provide a summary of each proposed provision, a summary of the public comments received and our responses to them, and the policies we are finalizing for the CY 2017 ESRD PPS. Comments related to the paperwork burden are addressed in the “Collection of Information Requirements” section in this final rule. Comments related to the impact analysis are addressed in the “Economic Analyses” section in this final rule.

    1. Payment for Hemodialysis When More Than 3 Treatments Are Furnished per Week

    a. Background

    Since the composite rate payment system was implemented in the 1980s, we have reimbursed ESRD facilities for up to three hemodialysis (HD) treatments per week and have only paid for weekly dialysis treatments beyond this limit when those treatments were medically justified due to the presence of specific comorbid diagnoses that necessitate additional dialysis treatments (see section II.B.1.c below). When we implemented the ESRD PPS in 2011, we adopted a per treatment unit of payment (75 FR 49064). This per treatment unit of payment is the same base rate that is paid for all dialysis treatment modalities furnished by an ESRD facility (HD and the various forms of peritoneal dialysis (PD)) (75 FR 49115). Consistent with our policy since the composite rate payment system was implemented in the 1980s, we also adopted the 3-times weekly payment limit for HD under the ESRD PPS (74 FR 49931). When a beneficiary's plan of care requires more than 3 weekly dialysis treatments, whether HD or daily PD, we apply payment edits to ensure that Medicare payment on the monthly claim is consistent with the 3-times weekly dialysis treatment payment limit. Thus, for a 30-day month, payment is limited to 13 treatments, and for a 31-day month, payment is limited to 14 treatments.

    Because PD is typically furnished more frequently than HD, we calculate HD-equivalent payment rates for PD that are based on the ESRD PPS base rate per treatment. To do this, we adjust the base rate by any applicable patient- or facility-level adjustments, multiply the adjusted base rate by 3 (the weekly treatment limit), and divide this number by 7. This approach creates a per treatment amount that is paid for each day of PD treatment and that complies with the monthly treatment payment limit. With regard to HD, because we do not have a payment mechanism for the ESRD facility to bill and be paid for every treatment furnished when more than 3 treatments are furnished per week (as it can, for example, when billing daily for PD), we apply edits to the monthly claim so that in total for the month (as described above) Medicare does not make payment for more than 3 weekly HD treatments. In the situation where an ESRD facility bills for more than 3 weekly HD treatments (or more than 13 or 14 for the month, depending on the days in the month) without medical justification, we deny payment for the additional HD treatments. We calculate HD-equivalent payments for PD so that the amount we pay for dialysis is modality-neutral. As we explained in the CY 2011 ESRD PPS final rule (75 FR 49115), we chose not to use dialysis modality as a payment variable when we developed the ESRD PPS because one modality-neutral payment resulted in a slightly higher payment for PD than a modality-specific payment would have, and we believed this would encourage home dialysis, which is typically PD.
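
    The HD-equivalent daily rate for PD described above reduces to a simple calculation, sketched below with a hypothetical adjusted base rate.

        # Sketch of the HD-equivalent daily payment rate for PD.
        def pd_daily_rate(adjusted_base_rate):
            """Spread 3 weekly HD-equivalent payments over 7 days of PD."""
            return adjusted_base_rate * 3 / 7

        print(round(pd_daily_rate(240.00), 2))  # 102.86 for a hypothetical $240.00 adjusted base rate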

    In recent years, ESRD facilities have increasingly begun to offer HD where the standard treatment regimen exceeds 3 treatments per week. At the same time, we observed variation in how Medicare Administrative Contractors (MACs) processed claims for HD treatments exceeding three treatments per week, resulting in payment of more than 13 or 14 treatments per month. As a result, in the CY 2015 ESRD PPS final rule (79 FR 66145 through 66147), we reminded ESRD facilities and MACs that the Medicare ESRD benefit allows for the payment of 3 weekly dialysis treatments, and that additional weekly dialysis treatments may be paid only if there is documented medical justification. Additional conventional HD treatments are reimbursed at the full ESRD PPS payment if the facility's MAC determines the treatments are medically justified based on a patient condition, such as congestive heart failure or pregnancy. MACs have developed Local Coverage Determinations (LCDs) and automated processes to pay for all the treatments reported on the claim if the ESRD facility reports diagnoses determined by the MAC to medically justify treatments beyond 3 times per week.

    The option to furnish more than 3 HD treatments per week is the result of evolving technology. We believe that, in some cases, use of this treatment option provides a level of toxin clearance on a weekly basis similar to that achieved through 3-times weekly conventional in-center HD. However, HD treatments exceeding 3 times per week are generally shorter and afford patients greater flexibility in managing their ESRD and other activities. As stated above, under the ESRD PPS, we currently do not have a payment mechanism that could apply a 3-treatments-per-week equivalency to claims for patients with prescriptions for more than 3 HD treatments per week that do not have medical justification (see section II.B.1.c below). As a result, the additional payments for treatments beyond 3 per week are denied, except where medically justified. Payment for HD treatments that exceed 3 treatments per week occurs when those treatments are medically justified, as indicated by diagnosis codes. Specific conditions require more medical attention and documentation in the medical record, and the results of the higher-frequency treatments can be objectively measured through the collection of testing data; the additional treatments are therefore justified as necessary. In cases where HD exceeds 3 treatments per week for reasons other than medical justification, there is a lack of objective data to justify additional payment for HD treatments beyond 3 treatments per week.

    ESRD facilities have expressed concern that due to the monthly payment limit of 13 or 14 treatments, they are unable to report all dialysis treatments on their monthly claim, and therefore, they are not appropriately paid for each treatment furnished. We understand ESRD facilities' concerns and also would like to ensure that facilities are able to accurately report all of the treatments they furnish. Therefore, we analyzed 2015 ESRD facility claims data and found that there is a discrepancy between treatments furnished and treatments billed and paid for HD patients. The data indicate that HD patients are receiving HD treatments in excess of 3 per week, but facilities are usually only being paid for 3 treatments per week. The creation of an equivalency payment mechanism serves multiple purposes. First, it allows payment when more than 3 HD treatments are furnished in a week in a manner that complies with the 3-treatments-per-week payment limit. Second, it encourages facilities to report all treatments furnished. This, in turn, would provide us with the information necessary to determine exactly how many treatments are being furnished. Finally, it would allocate the total amount of payment based on 3 HD sessions per week in accordance with the number of treatments actually furnished. For these reasons, we proposed a payment equivalency for HD treatment regimens when more than 3 treatments are furnished per week, similar to the HD-equivalency payment that has been used for PD since the composite rate payment system was implemented in 1983. While the policy would be effective January 1, 2017, we proposed not to implement the HD equivalency payments until July 1, 2017, to allow time to make operational changes to accommodate this new payment mechanism.

    b. Payment Methodology for HD When More Than 3 Treatments Are Furnished per Week

    For CY 2017, for adult patients, we proposed to calculate a per treatment payment amount that would be based upon the number of treatments prescribed by the physician and would be composed of the ESRD PPS base rate as adjusted by applicable patient and facility-level adjustments, the home dialysis training add-on (if applicable), and the outlier payment adjustment (if applicable). To calculate the equivalency payment where more than 3 HD treatments are furnished per week, we would first adjust the ESRD PPS base rate by the applicable patient-level adjustments (patient age, body surface area, low body mass index, comorbidities, and onset of dialysis) and facility-level adjustments (wage index, rural facility, and low-volume facility). Second, we would multiply the adjusted ESRD PPS base rate by 3 to develop the weekly treatment amount and then we would divide this number by the number of treatments prescribed to determine the per treatment amount. Third, we would multiply the calculated outlier payment amount by 3 and divide this number by the number of treatments prescribed to determine the per treatment outlier amount. Finally, we would add the per-treatment ESRD PPS base rate and the per treatment outlier amount together to determine the final per treatment payment amount. For example, a beneficiary whose prescription indicates 5 treatments per week would be paid as follows: (Adjusted Base Rate * 3/5) + (Outlier Payment * 3/5) = per treatment payment amount.
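
    The proposed calculation can be summarized with the short sketch below, which reproduces the 5-treatments-per-week example using hypothetical adjusted base rate and outlier amounts.

        # Sketch of the proposed (not finalized) HD equivalency per-treatment payment.
        def hd_equivalency_per_treatment(adjusted_base_rate, outlier_amount, prescribed_per_week):
            scale = 3 / prescribed_per_week
            return adjusted_base_rate * scale + outlier_amount * scale

        # Example from the text: 5 prescribed treatments per week (hypothetical amounts).
        print(round(hd_equivalency_per_treatment(240.00, 20.00, 5), 2))  # 156.0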

    While we proposed an equivalency payment based on 3 HD treatments per week, ESRD facilities submit bills monthly and, as a result, the monthly maximums presented below are the treatment limits that would be applied to 30-day and 31-day months:

    Prescribed weekly treatments    Maximum number of monthly          Maximum number of monthly
                                    treatments (30-day month)          treatments (31-day month)
    4                               18                                 19
    5                               23                                 24
    6                               26                                 27
    7                               30                                 31

    For pediatric patients, the calculation would be the same as that proposed for adult patients, except that the ESRD PPS payment amount for pediatric patients would be based on the pediatric case mix adjustments and would not include the rural or low-volume facility-level adjustments.

    In order to accommodate this policy change, we would establish new claim processing guidelines and edits that would allow facilities to report the prescribed number of HD treatments for each patient. There would be individual claims processing system identifiers established for treatments provided 4 times per week, 5 times per week, 6 times per week, and 7 times per week. These identifiers would allow the claims processing system to adjust the payment calculation and allow the appropriate payment for each treatment. The comments and our responses to the comments for these proposals are set forth in section II.B.1.d below.

    c. Applicability to Medically Justified Treatments

    While the majority of ESRD patients are prescribed conventional 3-times-per-week HD, we have always recognized that some patient conditions benefit from more than 3 HD sessions per week and, as such, we developed a policy for payment of medically necessary dialysis treatments beyond the 3-treatments-per-week payment limit. Under this policy, the MACs determine whether additional treatments furnished during a month are medically necessary, and when the MACs determine that the additional treatments are medically justified, we pay the full base rate for the additional treatments. While Medicare does not define specific patient conditions that meet the requirements of medical necessity, the MACs consider appropriate patient conditions that would result in a patient's medical need for additional dialysis treatments (for example, excess fluid). When such patient conditions are indicated on the claim, we instruct MACs to consider medical justification and the appropriateness of payment for the additional sessions.

    The medical necessity for additional dialysis sessions must be documented in the patient's medical record at the dialysis facility and available for review upon request. The documentation should include the physician's progress notes, the dialysis records and the results of pertinent laboratory tests. The submitted medical record must support the use of the diagnosis code(s) reported on the claim and the medical record documentation must support the medical necessity of the services. This documentation would need to be available to the contractor upon request.

    In section 50.A of the Medicare Benefit Policy Manual (Pub. 100-02), we explain our policy regarding payment for HD-equivalent PD and payment for more than 3 dialysis treatments per week under the ESRD PPS. This proposal does not affect our policy to pay the full ESRD PPS base rate for medically justified treatments beyond 3 treatments per week. Rather, the intent is to provide a payment mechanism for patients with more than 3 HD treatments per week that do not have medical justification. In the event that a beneficiary receives traditional HD treatments in excess of 3 per week without medical justification for the additional treatments, these additional treatments will not be paid. The comments and our responses to the comments for these proposals are set forth in section II.B.1.d below.

    d. Applicability to Home and Self-Dialysis Training Treatments

    Beneficiary training is crucial for the long-term efficacy of home dialysis. Under our current policy for PD training, we pay the full ESRD PPS base rate, not the daily HD-equivalent payment amount, for each PD training treatment a beneficiary receives up to the limit of 15 training treatments for PD. As we discussed in section II.B.2 of the proposed rule (81 FR 42812) and in section II.B.2 below, we are investigating payments and costs related to training and plan to refine training payments in the future. Until that time, we believe that paying the full base rate during training continues to support home dialysis modalities. When training accompanies HD treatments exceeding 3 per week, the training would continue to be limited to 25 total sessions, in accordance with our policy for training for conventional HD.

    Because the home dialysis training add-on under the ESRD PPS is applied to each treatment on training claims up to the applicable limits for HD or PD, we anticipate that ESRD facilities will appreciate the ability to receive payment for each training treatment when more than 3 HD treatments are furnished per week and training is furnished with each of those treatments. We believe this effect of our proposed policy would be beneficial to facilities and beneficiaries receiving HD treatment more than 3 times per week because, as mentioned above, under our current policy, our claim edits only allow payment for 13 or 14 HD treatments in a monthly billing cycle. This means that ESRD facilities can only bill for 13 or 14 treatments for the month and may not receive the full number of home dialysis training add-on payments for the treatments that would otherwise be billable because of these payment limits. We believe that permitting facilities to bill for training treatments that are furnished to beneficiaries receiving more than 3 HD treatments per week will allow these facilities to receive payment for training more consistently with how they are furnishing these treatments. We expect ESRD facilities to engage patients in the decision-making process for determining the best candidates for additional weekly hemodialysis beyond 3 treatments per week and to thoroughly discuss with the patient the potential benefits and adverse effects associated with more frequent dialysis. For example, while there could be potential quality of life and physiological benefits, there is also the risk of a possible increase in vascular access procedures and the potential for hypotension during dialysis.

    In the CY 2017 ESRD PPS proposed rule (81 FR 42812), we explained that we believe this payment mechanism would provide several benefits. Facilities would be able to bill for treatments accurately and be paid appropriately for the treatments they furnish. This policy would provide clarity for the MACs and providers on billing and payment for HD regimens that exceed 3 treatments per week and assist MACs in determining which HD treatments should be paid at the equivalency payment rate and which HD treatments should be paid at the full base rate because the facility has provided adequate evidence of medical justification. Beneficiaries and facilities would have more flexibility to request and furnish patient-centered treatment options. Finally, the proposal would increase the accuracy of payments and data and would provide CMS the ability to monitor outcomes for beneficiaries utilizing various treatment frequencies.

    The comments and our responses to the comments for the proposals related to payment for HD when more than 3 treatments are furnished per week are set forth below.

    Comment: The majority of comments were from individual patients and their care partners describing their dialysis experience from onset, through PD, transplant, return to in-center 3 times weekly and finally to more frequent home HD. The commenters describe significant improvement in their health status, including better blood pressure, cardiac status, and phosphorus levels, fewer dietary restrictions, less fatigue after dialysis, and the ability to schedule dialysis around work and family activities. Many commenters strongly encouraged CMS to review the clinical literature related to dialysis frequency because based on the literature and their own clinical experience, more frequent dialysis has many benefits. They believe CMS payment policy should be modified to more closely align with evidence-based research. They urged CMS to take steps to facilitate access to home HD, such as routinely paying for more than 3 treatments per week for any patient who agrees to have more, so that more patients can receive the same benefits.

    Other commenters indicated that their more frequent home dialysis resulted in more hours of dialysis treatment than is typically furnished in-center. One commenter pointed out that patients on more frequent dialysis generally treat 30 to 40 percent longer than patients receiving 3-times-per-week therapy in-center. Commenters also described the health advantages of nocturnal dialysis and other dialysis schedules that provide a similar level of toxin and fluid removal to in-center dialysis, but spread out the treatments over 4 or more days. Another commenter pointed out that, with the same weekly volume of fluid to be removed, it is clearly demonstrable that removal in five treatments is safer, protects vital organs, and is far more stable for patients. This does not mean that all patients must be treated 5 times per week or that all patients receiving that frequency are necessarily fully dialyzed; therefore, some flexibility in approach is necessary. The commenter concluded that dialysis patients are in general intolerant of fluid removal, and that elderly nursing home patients are at greater risk of problems that can be alleviated substantially by more frequent dialysis.

    Many other commenters urged CMS to provide payment for customizing the dialysis treatment to the patient. One commenter indicated that unlike in-center dialysis, which is one size fits all, they are able to tailor each treatment to their physical needs; for example, if the beneficiary has too much fluid after travelling, then a few extra, longer, slower treatments could be done to gently remove the fluid. The commenter stated that a diabetic controls their treatment by regulating their blood sugar, and a patient on dialysis should be allowed the same freedom to treat accordingly. More frequent treatments, as needed, are a must for maintaining maximum health. There must not be a one size fits all dialysis treatment mentality.

    Several commenters objected to the proposed update to home HD payment policies because they believed that it locks in the 3-times-per-week schedule. The comments indicated that there is no research that supports capping the dialysis dose in such an unsafe way. A 3-day a week schedule requires a nearly 3-day “dialysis weekend” every week, which is a risky choice. Another commenter stated that 3-times-per-week dialysis (Monday, Wednesday, Friday and Tuesday, Thursday, Saturday schedules) was not based on clinical research, but rather was a way to dialyze two groups of patients and allow the nurses to have Sunday off. Another commenter believes the 3-times-per-week scheduling reflects the shortage of dialysis machines and supplies in the 1960s when HD began. Other commenters pointed out that alternative schedules are unavailable in-center, other than in very narrow circumstances where there is medical justification, and thus are generally furnished at home.

    Response: We believe that the choice of modality and frequency of treatments for a patient are decisions that are made by the physician and the patient. We continue to believe that patients should have access to various treatment options and schedules and facilities should offer various treatment options to meet the needs of their patients. Comments recommending that we facilitate access to home HD by routinely paying for more than 3 treatments per week are beyond the scope of the proposed rule. However, we believe that routinely paying ESRD facilities the full ESRD PPS payment for up to 6 or even 7 treatments per week for home HD patients would overpay facilities relative to their resources and cost. Patients on more frequent schedules have indicated in public comments that they no longer need to take many of the medications routinely provided to in-center patients and have limited involvement with their ESRD facility, two significant components of the ESRD PPS base rate.

    We acknowledge that the proposed HD equivalency would have maintained the current policy, which limits monthly payment to 13 or 14 treatments and reflects the number of treatments received by the vast majority of ESRD patients; but our intention was to provide more flexibility for patients, not to increase the overall amount of payment. Patients with certain medical conditions reportedly benefit from shorter and/or longer and more frequent HD and, as a result, MACs can approve additional treatments. While we have reviewed the studies regarding more frequent HD that have been conducted, many of the studies are too small in scope and do not provide a sufficient basis for a national payment policy change of this magnitude. In particular, in a literature review reported in November 2015 in the American Journal of Kidney Diseases, titled “Timing of Dialysis Initiation, Duration and Frequency of Hemodialysis Sessions, and Membrane Flux: A Systematic Review for a KDOQI Clinical Practice Guideline,” Slinin et al. reported that more than thrice-weekly hemodialysis and extended-length hemodialysis did not improve clinical outcomes compared to conventional hemodialysis and resulted in a greater number of vascular access procedures. The authors concluded that the limited data available indicate that more frequent and longer hemodialysis did not improve clinical outcomes compared to conventional hemodialysis. As a result, we believe that payment for additional treatments should remain individualized to the patient as medically necessary and that the determination continue to be made on a case-by-case basis by the MACs.

    Comment: While many commenters expressed support for CMS' efforts to obtain a reliable source of data for the number of HD treatments patients receive each week, most of the comments from individual facilities and dialysis organizations of all sizes, physicians, and patient advocacy organizations strongly objected to the HD equivalency proposal because they believe it is unnecessary, would increase providers' burden, would be administratively complex, and would discourage growth of home HD. Although we developed the proposal based on provider feedback about their inability to report all dialysis treatments on a monthly claim, many commenters indicated that this concern is unfounded because current claims processes allow providers to report all dialysis treatments delivered either in-center or at home. They suggested that modifiers could be used to distinguish medically justified additional treatments from those that do not meet their MAC's LCD for medically justified treatments.

    Dialysis organizations pointed out that use of the prescribed number of treatments as the basis of payment increases the burden. A large dialysis organization (LDO) noted that the number of prescribed treatments can change weekly based on a patient's condition, and that for various other reasons (for example, hospitalization), a patient may not receive a prescribed treatment, making the proposal administratively challenging for facilities and providers. In addition, the HD equivalency proposal only achieves CMS' goal of allocating the total amount of payment based on three HD sessions per week in accordance with the number of treatments actually furnished when the actual and prescribed treatments are equal.

    A medium dialysis organization (MDO) agreed and expressed serious reservations about substituting prescribed treatments for delivered treatments in the calculation of payments, as the proposal contemplates. The commenter indicated the proposed HD equivalency policy would increase the reporting burden in order to correct claims for patients who do not attend the prescribed number of treatments. The line item billing requirements would impose further burden in billing for patients treated on schedules such as every-other-day treatments. Moreover, months ending in the middle of a week would pose additional complexity, since it would be necessary to use 2 monthly claims to determine whether there had been more than three treatments during the week.

    The commenters stated these additional burdens would represent additional administrative cost for every dialysis provider, for every vendor supplying dialysis billing software, for every MAC receiving these claims, and for CMS itself. They stated that this will be particularly burdensome for smaller organizations and independent providers which are not highly automated and tightly integrated with clinical systems. Another organization representing nonprofit facilities pointed out that with all the other requirements being placed on providers, particularly smaller providers, they do not see how CMS' need for better data outweighs the additional burden at this time and strongly opposed CMS finalizing the proposal.

    Many other commenters objected to the proposal to pay for shorter, more frequent HD in a similar manner as PD, pointing out that PD and home HD are vastly different therapies and should not be compared to one another clinically or paid as if they are equivalent therapies. The comments indicated that PD is currently paid as the equivalent to 3 treatments per week HD because it requires multiple exchanges per day to achieve the same basic outcomes for patients.

    Many commenters recommended that CMS issue simple billing clarifications to ESRD facilities to encourage reporting of all treatments and remind the MACs that their LCD or similar policies should include criteria for additional, medically justified dialysis treatments. Otherwise, the commenters indicated that CMS' current policies are sufficient to meet the needs of beneficiaries, providers, and Medicare, and the HD equivalency is not necessary.

    Response: After careful consideration of the public comments, we agree with commenters and believe that implementing HD-equivalent payment for shorter, more frequent HD could be burdensome. Following publication of the proposed rule, we learned that ESRD facilities in certain MAC areas have the ability to report all treatments furnished, whether paid or not. We are exploring claim reporting mechanisms, such as modifiers, to meet our data needs and reflect patient treatments provided while minimizing burden on facilities. Once we decide on the mechanism for reporting treatments that are medically justified and those that do not meet the MAC's LCD for medically justified additional treatments, we will issue billing clarifications to MACs and ESRD facilities.

    Comment: Although many commenters requested that CMS withdraw the equivalency proposal, a few commenters believe that the status quo should not remain in place and that CMS is on the right track with the HD equivalency proposal. One commenter expressed concern that the proposal could produce a perverse unintended consequence of rewarding facilities that provide more frequent dialysis but less in the aggregate than is necessary to give patients high-quality care. We are unsure exactly what the commenter meant by this comment and the commenter did not elaborate on this point.

    Another commenter pointed out that current reimbursement for more frequent home HD creates for this one particular therapy a reimbursement level that can be double that of conventional 3-times-per-week HD if all the HD treatments are paid as medically justified treatments. The commenter stated that the cost to the provider for additional treatments (beyond 3 per week) delivered at home with more frequent home HD should be a relatively small incremental cost as compared to the first 3 treatments per week. Within the reimbursement of the first 3 treatments (the conventional schedule) the cost of the machine, the patient training, the nursing support, etc., would already have been covered and the incremental cost for additional home HD treatments is strictly the treatment supplies.

    The commenter stated that reimbursing for the additional treatments beyond 3 treatments per week at the full bundled base rate does not seem appropriate and creates at least the appearance of a profit incentive for providers (and their physician partners) to utilize this therapy. The commenter stated that patients should have access to more frequent home HD as a therapy option, but that the reimbursement for this therapy should be more straightforward and transparent, and on a level playing field with other dialysis therapy options, such as conventional 3-times-weekly HD or PD. The commenter believes the CMS equivalency proposal would do that.

    The commenter suggested that CMS consider adding a new, lower incremental treatment rate for home HD treatments beyond 3 treatments per week to cover the additional incremental supply cost beyond the first 3 treatments per week, if CMS feels that is appropriate and is interested in promoting more frequent home HD therapy. However, another commenter stated that dialysis centers not only incur the cost of supplies for the additional treatments, but also incur the cost of staff to manage the treatments. This commenter stated that it makes sense for facilities to be paid accordingly, which would help patients avoid costly emergency room visits for episodes of fluid overload or hyperkalemia.

    Response: We agree with the commenter that paying the full base rate amount for treatments over 3 per week without documented medical justification would have created risks for patients, but we note that this is not the policy that we proposed. We also note that we aggressively monitor ESRD facility claims so that we are aware of changes in practice, which may prompt us to engage in future rulemaking in this area. As we explained previously, we are not finalizing the HD equivalency proposal. As an alternative, we will be making changes in the reporting of treatments that will allow us to monitor changes in treatment patterns more effectively.

    Comment: Several commenters, while disagreeing with the equivalency payment proposal as discussed above, supported CMS in paying the full ESRD PPS base rate for each home HD training treatment, even when those treatments are furnished more than 3 times per week. The commenters agreed that this frequency of payment would assist CMS in its investigation of payments and costs related to training for future refinement. The commenters indicated that the proposal is appropriate because training treatments are an essential part of transitioning patients home safely. In addition, they agreed it would permit facilities to bill for training treatments that are furnished to beneficiaries receiving more than 3 HD treatments per week and allow these facilities to receive payment for training in a manner more consistent with how they furnish these treatments.

    Response: We appreciate the commenters' support for the proposal regarding allowing the payment of the full base rate for all home dialysis training treatments, even when they are furnished more than 3 times per week, subject to our payment limit of 25 HD training sessions. While we are not finalizing the equivalency payment for maintenance HD (discussed above) when it is furnished more than 3 times per week, we continue to believe that it is important for our payment for home HD training to be consistent with how we pay for home PD training. In addition, we do not believe that this will change the amount of total dollars paid out for home HD training because facilities will receive the training add-on for only 25 treatments, which has been a longstanding policy. The difference is that facilities can receive the full base rate for more than 3 HD training treatments in a single week. Therefore, for this rule we are finalizing our proposal to pay the full ESRD PPS base rate for all training treatments even when they exceed 3 times per week with a limit of 25 sessions as proposed.

    Comment: A commenter suggested what they believe is a much simpler solution under which CMS would instruct the MACs to apply payment edits to ensure that Medicare payment on the monthly claim is consistent with the 3-times-weekly dialysis treatment payment limit. Thus, the commenter believes payment should be limited to 13 treatments for a 30-day month and 14 treatments for a 31-day month. The commenter indicates this approach enforces the 3-times-a-week rule effectively. In addition, it permits flexibility, allowing payment for a 4-treatment week followed by a 2-treatment week in the few cases having logistical but no medical justification, such as Christmas and New Year's, weather or water system failures causing unexpected facility closure, and major events in patients' lives such as out-of-town family weddings and funerals.

    Several commenters stated that Medicare reimbursement should signal its willingness to support safe schedules, especially every-other-day (EOD) HD schedules. These commenters recommended that the PPS base home HD reimbursement on 7 treatments every 2 weeks, that is, reimburse home HD fully for treatments equivalent to an EOD schedule and reimburse a partial bundled amount for treatments in excess of EOD.

    Other commenters implored CMS to explore paying for HD by the hour rather than by the treatment or, at a minimum, to pay for up to 15 standard in-center HD treatments per month without medical justification to allow dialysis every other day and eliminate the 3-day dialysis weekend.

    Response: Since ESRD facilities submit bills on a monthly basis, we currently enforce the 3-treatments-per-week payment policy through established monthly treatment limits, that is, 13 treatments for 30-day months and 14 treatments for 31-day months, and we will continue to do so. We appreciate the suggestions to increase the monthly limits; however, these suggestions are beyond the scope of the proposed rule. As we mentioned above, payment for additional treatments should remain individualized to the patient as medically necessary, and the determination will continue to be made on a case-by-case basis by the MACs.
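
    For illustration only, the monthly limits cited in this response can be reproduced by prorating 3 treatments per week across the days of the month and rounding up. The sketch below is an arithmetic check against the two figures cited, not a description of the MACs' actual claims processing edits.

```python
import math

def monthly_treatment_limit(days_in_month: int) -> int:
    """Prorate 3 HD treatments per week across the month and round up.

    Illustrative assumption only; the actual monthly limits are set by
    Medicare claims processing guidance, not by this formula.
    """
    return math.ceil(3 * days_in_month / 7)

# Reproduces the limits cited in the response above.
assert monthly_treatment_limit(30) == 13  # 30-day month
assert monthly_treatment_limit(31) == 14  # 31-day month
```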

    Comment: We received many comments objecting to the notion expressed in the proposed rule that extra sessions would be prescribed based on patient preference or convenience. One commenter stated that the idea that they took on the responsibility for their treatments, coordinating and storing medical supplies, cannulating themselves, drawing blood, completing and filing flow sheets, troubleshooting medical and mechanical emergencies, and then having to clean up and sanitize the equipment as a matter of convenience is ludicrous. Another commenter pointed out that patients receiving additional treatments only consent to them because they experience a real and sustained clinical benefit.

    Another commenter objected to statements in the proposed rule stating that more frequent HD is the result of evolving or new technology. The commenter believes it is more accurate to say that the option to furnish more than 3 HD treatments per week is an existing option that is increasingly utilized because of evolving technology that facilitates treatment in the home setting, where more frequent HD is more feasible, as well as increasing awareness of the unsolved clinical problems that more frequent HD can positively address. The commenter also pointed out evidence that more frequent HD is not new, referring to a systematic review of clinical outcomes among patients who initiated more frequent hemodialysis in Asia, Europe, North America, and South America as early as 1972. In other words, more frequent hemodialysis was an internationally recognized prescription long before the advent of the currently dominant home HD technology in the US.

    One commenter expressed concern about the implication that a significant number of prescribed extra HD sessions are not predicated upon medical necessity. The commenter pointed out that more frequent HD requires a greater investment of time on therapy than thrice-weekly therapy, no matter how it is prescribed. This therapy is not prescribed for convenience. The commenter pointed out that CMS has noted that no HD session is without risks, and more frequent therapy would not be prescribed unless it is clinically necessary to address a particular patient's needs. The commenter believed suggesting otherwise is inconsistent with the responsible practice of medicine. Another commenter explained that the hemodynamic benefits are a major reason why doctors prescribe, and patients embrace, this form of therapy. As such, the hemodynamic benefits are at the very core of the basis for the medical necessity for more frequent HD therapy.

    Response: We appreciate these clarifications. Our intent was merely to pay appropriately for shorter, more frequent dialysis prescriptions that are equivalent to in-center treatments. We did not mean to imply that physicians order treatments that are not medically necessary, or that patients receive shorter but more frequent dialysis solely for their convenience. However, when a home dialysis machine supplier met with us, we asked whether their machine could perform in the same way an in-center machine performs, that is, whether patients could dialyze 3 times per week. We were told that patients could do so, but that it would take longer. Consequently, the patients using this home modality choose shorter, more frequent dialysis treatments at home 5 times per week. We agree with the commenter that it is more accurate to say that the option to furnish more than 3 HD treatments per week has been increasingly chosen as a treatment option. This may be due to the evolving technology facilitating more frequent HD treatment in the home setting.

    Comment: An LDO, a national dialysis industry organization, a patient advocacy organization, and many patients, caregivers, physicians, and nurses supported the proposal to continue current payment policy for treatments determined medically justified based on MAC consideration of medical evidence as required under an LCD. The commenters stated this is an important existing policy that allows patients who have a medical need to obtain extra treatments and allows facilities to be reimbursed for them. They also noted that this policy preserves the physician's medical decision-making to meet the individual needs of patients.

    A dialysis nursing association expressed concern that despite the promulgation of LCDs for additional dialysis treatments, there are substantial differences in the MACs' assessments of medical justification for these treatments. They urged CMS to continue to educate the MACs on what constitutes medical justification and to ensure the MACs are thoroughly examining each medical record in its entirety when assessing whether there is medical justification for additional treatments. They pointed out that differences in documentation requirements necessitate additional work for their members, and it is imperative that the MACs exhibit greater consistency when determining the appropriateness of payment based upon the medical documentation.

    However, many other commenters, primarily physicians, implored Medicare not to interfere with the physician's clinical judgment in determining the best treatment regimen that meets the needs of their patients. Physicians indicated that all the treatments they prescribe are medically necessary. Several commenters expressed concern that the proposal may limit the physician's freedom to prescribe additional HD sessions for patients who could benefit. Commenters pointed out that currently there is no national policy that restricts a physician's ability to prescribe medically appropriate extra HD sessions for their patients and that the decision about whether the therapy prescribed is medically appropriate is made locally, between the physician and the local MAC. These commenters expressed concern that the HD equivalency proposal may take away some of that freedom if certain language in the rule is not changed. One commenter stated they are not asking CMS to specify what the MACs should or should not pay for, but rather that CMS should leave that decision to physicians.

    A clinical association stated that while they are generally supportive of the current medical justification approach, they noted that it can create administrative burdens and, in some cases, interfere with the patient-physician relationship. Due to the heterogeneity with which various MACs interpret what is medically justified, clinicians in some areas have less latitude to provide what they believe is medically justified care. For example, it may be appropriate for certain patients who have benefitted from a fourth dialysis session in 1 week to receive a fourth dialysis session in the following week as a prophylactic measure to prevent an adverse outcome from occurring again. The commenter believes CMS should urge all MACs to approach medical justification with a consistent, broad view and a respect for physicians' responsibility in determining, in consultation with their patients, what constitutes medically necessary additional dialysis sessions.

    Another commenter agreed, stating that the absence of documentation on some claims forms requesting payment for extra prescribed sessions does not indicate an absence of medical necessity. Instead, it may be due to variations in the documentation particular MACs are seeking, or a misunderstanding of how to properly submit a claim for a type of therapy that is rarely prescribed. In these instances, documentation of medical necessity is likely to be found in the prescribing physician's patient records. The commenter stated that it is rational to assume that a reiteration of clear instructions on this point, from CMS and the MACs, would address the discrepancies in claims submissions that CMS has noted.

    An advocacy organization asked that CMS reiterate in final rulemaking that there is no national coverage decision for additional hemodialysis sessions, that the determination of medical justification for both acute and chronic prescriptions involving more than three sessions per week is left entirely to the discretion of the MACs, and that if a MAC wishes to restrict coverage to certain conditions or require any unique documentation, it must execute a formal LCD process with public comment.

    Other commenters stated that the overwhelming clinical evidence shows that the closer HD treatment approximates the functioning of the healthy human kidney (24 hours per day, 7 days per week), the better the patient outcomes. Therefore, they believe Medicare should presume that longer, more frequent dialysis is medically justifiable in all cases, and that the actual treatment regimen should be determined by the patient, in consultation with their physician, taking into account both anticipated clinical outcomes and the patient's overall life goals.

    Another commenter suggested that a conversation should be opened with Medicare contractors to permit a full understanding of the reasons for more frequent HD therapy. Justifications for ongoing more frequent HD therapy are not necessarily the same as those for a one-time justification for an extra treatment for a conventionally treated patient, and the justifications for the two groups should be separated. The commenter stated that Medicare should unequivocally signal support for the concept of more frequent HD and should also clearly signal that more frequent HD treatments, when justified, will be funded. Lastly, the commenter stated that should more frequent HD be prescribed without justification, then treatments in excess of 3 per week should not be reimbursed. Another commenter agreed, stating that all home HD treatments provided should be reported and, through use of a modifier, be indicated as medically supported or not medically supported, with all supported treatments being paid at the designated HD facility rate.

    Response: We thank the commenters for their comments. However, we did not propose to change the process for MAC approval of additional dialysis treatments. We believe the current process has been effective in approving additional treatments based on the medical evidence for individual patients. We agree with the commenter who stated that there is no national coverage decision for additional HD sessions and that the determination of medical justification for prescriptions involving more than three sessions per week is left entirely to the discretion of the MACs and related administrative processes. We support more frequent HD for those patients who can benefit from it and agree that if more frequent HD is prescribed without medical justification, the treatments in excess of 3-per-week should not be paid. We thank the commenters for their suggestions and will consider them if we make changes to this policy.

    Comment: Several commenters stated they appreciate that CMS listed heart failure, a chronic disease, as a potential medical justification for the delivery of more than 3 HD treatments per week. They noted that the medical directors of at least one MAC have asserted that CMS has guided that only acute diseases can constitute medical justification for additional treatments. They encouraged CMS to reiterate in the final rule that both acute and chronic diseases can constitute medical justification. The commenters indicated that heart failure is a good example of a chronic disease that may constitute medical justification for more frequent HD because of its leading role in morbidity, mortality, and medical spending among dialysis patients, but it is certainly not the only example of a chronic disease. Persistent hypertension, persistent hyperphosphatemia, sleep disturbances, pain attributable to dialysis-related amyloidosis, and symptomatic intradialytic hypotension are all examples of chronic comorbid conditions that may be positively addressed by ongoing treatment with more frequent HD.

    However, another commenter pointed out that the need for more than 3 HD treatments per week occurs in less than 1 percent of the ESRD population and the need for additional treatments is very brief in duration. This commenter indicated that after receiving perhaps a few extra treatments, the patient should be able to be managed with 3 treatments a week. The commenter indicated that if facilities report a diagnostic code such as congestive heart failure (CHF), the extra treatments are automatically paid by the MAC without pre-payment review and, moreover, the MAC will continue to pay for these treatments as long as the diagnosis is included on the claim. The commenter believes that this payment procedure is an invitation to serious Medicare abuse and recommended that CMS demand pre-payment review of every patient requiring more than 3 treatments a week for a period of more than 1 week. Specifically, the facility should be required to provide monthly physician progress notes, chest x-ray reports, and other confirmatory testing and medical justification for the ongoing need for extra treatments and the patient's inability to return to 3 times a week treatments.

    Response: In the proposed rule (81 FR 42810), we mentioned that additional conventional HD treatments are reimbursed at the full ESRD PPS payment if the facility's MAC determines the treatments are medically justified based on a patient condition, such as CHF or pregnancy. We did not mean to imply that the MACs should view the presence of a CHF diagnosis on a claim as medical justification for additional treatments, nor did we mean to imply that chronic disease diagnoses should confer medical justification. We agree with the commenter that automatically paying for additional treatments for patients with chronic medical conditions every month for as long as bills with the diagnosis code for CHF appear does not seem appropriate. However, all decisions regarding medical justification for additional dialysis treatments are made at the discretion of the MAC. We will continue to monitor claims that include additional treatments and will consider whether additional guidance or other prepayment review as suggested by the commenter is needed.

    Final Rule Action: After considering the comments we received, we are not finalizing our proposal for payment for HD when more than 3 treatments are furnished per week. Based on the feedback from commenters regarding the administrative burden associated with this policy, we have determined that the best course is not to finalize this policy and, instead, to evaluate other billing mechanisms to collect data on the treatments provided to beneficiaries. We are reiterating that facilities are expected to report all dialysis treatments provided, whether they are separately paid or not paid.

    However, we reiterate that we are finalizing our proposal to pay the full ESRD PPS base rate for all training treatments even when they exceed 3 times per week with a limit of 25 sessions as proposed.

    2. Home and Self-Dialysis Training Add-On Payment Adjustment

    a. Background

    In 2014, Medicare paid approximately $30 million to ESRD facilities for home and self-dialysis training claims, $6 million of which was in the form of home dialysis training add-on payments. These payments accounted for 115,593 dialysis training treatments (77,481 peritoneal dialysis (PD) training treatments and 38,112 hemodialysis (HD) training treatments) for 12,829 PD beneficiaries and 2,443 HD beneficiaries. Hereinafter, we refer to this training as home dialysis training. Under the ESRD PPS, there are three components to payment for home dialysis training: the base rate, a wage-adjusted home dialysis training add-on payment, and an allowable number of training treatments to which the training add-on payment can be applied.
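
    The figures above can be cross-checked arithmetically. The sketch below (illustrative only; it borrows the $50.16 per treatment add-on discussed later in this section) confirms that the modality counts sum to the reported total and that the implied add-on dollars are on the order of the $6 million cited.

```python
pd_training_treatments = 77_481
hd_training_treatments = 38_112
total_training_treatments = pd_training_treatments + hd_training_treatments
assert total_training_treatments == 115_593  # matches the total reported above

# Rough cross-check of the add-on dollars: 115,593 treatments at the
# $50.16 per treatment add-on is about $5.8 million, consistent with the
# "approximately $6 million" figure cited above.
training_add_on_per_treatment = 50.16
approx_add_on_dollars = total_training_treatments * training_add_on_per_treatment
print(f"${approx_add_on_dollars:,.0f}")  # about $5,798,145
```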

    When the ESRD PPS was implemented in 2011, we proposed that the cost for all home dialysis services would be included in the bundled payment (74 FR 49930), and therefore, the computation of the base rate included home dialysis training add-on payments made to facilities as well as all composite rate payments, which account for facility costs associated with equipment, supplies, and staffing. In response to public comments, in the CY 2011 ESRD PPS final rule (75 FR 49062), we noted that although we were continuing to include training payments in computing the ESRD PPS base rate, we agreed with commenters that we should treat training as an adjustment under the ESRD PPS. Accordingly, we finalized the home dialysis training add-on amount of $33.44 per treatment as an additional payment made under the ESRD PPS when one-on-one home dialysis training is furnished by a nurse for either HD or PD training or retraining (75 FR 49063). In addition, we continued the policy of paying the home dialysis training add-on payment for 15 training treatments for PD and 25 training treatments for HD. The $33.44 amount, which replaced the previous adjustment amount of $20, was based on the national average hourly wage for Registered Nurses (RNs) from Bureau of Labor Statistics (BLS) data updated to 2011 (75 FR 49063) and reflects 1 hour of training time by an RN for both HD and PD.

    Section 494.100(a)(2) of the Conditions for Coverage for ESRD Facilities stipulates that the RN must conduct the home dialysis training, but in the ESRD Program Interpretive Guidance published October 3, 2008 (http://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/SurveyCertificationGenInfo/downloads/SCletter09-01.pdf) we clarified that other members of the clinical dialysis staff may assist in providing the home training. We also elaborated in this guidance that the qualified home training RN is responsible for ensuring that the training is in accordance with the requirements at § 494.100, with oversight from the ESRD facility's interdisciplinary team.

    The $33.44 amount of the home dialysis training add-on was based on the national mean hourly wage for RNs as published in the Occupational Employment Statistics (OES) data compiled by BLS. This mean hourly wage was then inflated to 2011 by the ESRD wages and salaries proxy used in the 2008-based ESRD bundled market basket. In the calendar year (CY) 2014 ESRD PPS final rule (78 FR 72185), CMS further increased this amount from $33.44 to $50.16 to reflect 1.5 hours of training time by an RN in response to stakeholder concerns that the training add-on was insufficient.
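
    For clarity, the relationship between the two add-on amounts described above is simple arithmetic; the sketch below only reproduces the figures already stated in the text.

```python
# The CY 2011 add-on of $33.44 reflected 1 hour of RN training time;
# the CY 2014 increase to $50.16 reflected 1.5 hours at the same per-hour amount.
hourly_rn_amount_2011 = 33.44
cy2014_add_on = round(1.5 * hourly_rn_amount_2011, 2)
assert cy2014_add_on == 50.16
```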

    In response to the CY 2016 ESRD PPS proposed rule, we received a significant number of stakeholder comments concerning the adequacy of the home dialysis training add-on for HD. Because we did not make any proposals regarding the home dialysis training add-on in the CY 2016 ESRD PPS proposed rule, we made no changes to the home dialysis training add-on for CY 2016 but we did provide a history of the home dialysis training add-on and stated our intention to conduct further analysis of the adjustment.

    While some commenters, primarily patients on home HD and a manufacturer of home HD machines, requested that we increase the home dialysis training add-on payment adjustment so that more ESRD patients could receive the benefit of home HD, we also heard from large dialysis organizations (LDOs) that the current home dialysis training add-on amount is sufficient. In addition to these differing viewpoints, we received public comments indicating a wide variance in training hours per treatment and the number of training sessions provided. As we indicated in the CY 2016 ESRD PPS final rule (80 FR 69004), patients who have been trained for home HD and their care partners have stated that the RN training time ranged from 2 to 6 hours per training treatment, that the number of training sessions ranged from 6 to 25 sessions, and that the training they received took place in a group setting. The range of hours per training treatment may indicate that the amount of RN training time gradually decreased over the course of training so that, by the end of training, the patient was able to perform home dialysis independently.

    In order to incentivize the use of PD when medically appropriate, Medicare pays the same home dialysis training add-on for all home dialysis training treatments for both PD and HD, even though PD training takes fewer hours per training treatment. It has never been our intention that the training add-on payment adjustment would reimburse a facility for all of its costs associated with home dialysis training treatments. Rather, for each home dialysis training treatment, Medicare pays the ESRD PPS base rate, all applicable case-mix and facility-level adjustments, and outlier payments plus a training add-on payment of $50.16 to account for RN time devoted to training. The home dialysis training add-on payment provides ESRD facilities with payment in addition to the ESRD PPS payment amount. Therefore, the ESRD PPS payment amount plus the $50.16 training add-on payment should be considered the Medicare payment for each home dialysis training treatment and not the home dialysis training add-on payment alone.

    We are committed to analyzing the home dialysis training add-on to determine whether an increase in the amount of the adjustment is appropriate. To begin an analysis of the home dialysis training add-on payment adjustment, we looked at the information on 2014 ESRD facility claims and cost reports.

    b. Analysis of ESRD Facility Claims Data

    We analyzed the ESRD facility claims data to evaluate whether the information currently reported provides a clear representation of the utilization of training. We note that after an initial home dialysis training program is completed, ESRD facilities may bill for the retraining of patients who continue to be good candidates for home dialysis. We indicated in the proposed rule that retraining is allowed for certain reasons as specified in the Medicare Claims Processing Manual (Pub 100-4, Chapter 8, section 50.8): the patient changes from one dialysis modality to another (for example, from PD to HD); the patient's home dialysis equipment changes; the patient's dialysis setting changes; the patient's dialysis partner changes; or the patient's medical condition changes (for example, temporary memory loss due to stroke, physical impairment) (81 FR 42813). We also noted that we are not able to differentiate training treatments from retraining treatments. That is, all training claims are billed with condition code 73, which is what an ESRD facility would use for both training and retraining treatments. Under the current claims processing systems, we are unable to identify in the data when the maximum number of training treatments has been completed, 25 for HD and 15 for PD; however, administrative guidance will be forthcoming on this issue. Therefore, we are unable to clearly tell when the patient is still training on the modality versus when they have completed the initial training and need retraining for one of the reasons provided in the Medicare Claims Processing Manual noted above.

    To be able to make informed decisions on future training payment policies, we would need to have specificity regarding the utilization of each service. We are interested in assessing the extent to which patients are retrained and the number of retraining sessions furnished. The findings of this assessment will inform future decisions about how we compute the training add-on payment and whether we should consider payment edits for retraining treatments. For this reason, we stated our intention to issue sub-regulatory guidance to provide a method for facilities to report retraining treatments. We solicited input from stakeholders on retraining, how often retraining occurs, how much RN time is involved, and the most common reasons for retraining.

    A summary of these comments and our responses is provided below. In addition, ESRD facilities have historically indicated that they are unable to report all treatments furnished on the monthly claim. For this reason, we believe the number of training treatments currently reported on claims may be inaccurate. As discussed in detail in section II.B.1.a of the proposed rule (81 FR 42813), there are claims processing edits in place that may prevent reporting of HD treatments, including both training and maintenance treatments, that exceed the number of treatments typically furnished for conventional HD, that is, 3 per week, unless the additional treatments are medically justified. This is because of the longstanding Medicare payment policy of basing payment on 3 HD treatments per week, which, for claims processing purposes, is 13 to 14 treatments per month. For PD, which is furnished multiple times each day, ESRD facilities report a treatment every day of the month and MACs pay for these treatments by applying an HD-equivalent daily rate. We proposed a similar payment approach for HD treatments furnished more than 3 times per week, which would allow facilities to report all HD treatments furnished, but payment would be made based on a 3-treatments-per-week daily rate.
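
    A minimal sketch of the HD-equivalent daily rate idea described above, assuming the daily rate simply spreads the 3-treatments-per-week amount evenly over 7 days; the actual claims processing computation may differ, and the payment amount used below is a placeholder, not the ESRD PPS base rate.

```python
def hd_equivalent_daily_rate(per_treatment_payment: float) -> float:
    """Spread 3 per-treatment payments per week over 7 days.

    Assumption for illustration: the daily rate equals 3/7 of the
    per-treatment payment; this is not the MACs' actual formula.
    """
    return per_treatment_payment * 3 / 7

# Example with a hypothetical per-treatment payment amount.
example_payment = 230.00  # placeholder, not the actual ESRD PPS base rate
daily_rate = hd_equivalent_daily_rate(example_payment)
payment_for_30_day_month = daily_rate * 30  # every day of the month is reported
```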

    As we explain in section II.B.1 of this final rule, we are not finalizing the HD payment equivalency proposal due to the burden it would have placed on facilities; however, we are pursuing other methods for identifying medically justified treatments and treatments that do not meet the MAC's LCD for additional dialysis treatments, such as through the use of modifiers. We are also finalizing our proposal not to limit the number of home HD training treatments per week for which we pay the full ESRD PPS base rate, both to be consistent with how we pay for PD training and to better align Medicare training payments with when facilities incur the cost of training. We believe these changes will greatly improve the accuracy of the reporting of training treatments.

    We solicited comments on implementing the HD payment equivalency and sought information on the use of retraining and the establishment of coding on the ESRD facility claim for retraining. The comments and our responses to the comments regarding retraining are set forth below. The comments and our responses regarding the HD payment equivalency proposal are located in section II.B.1.d of this final rule.

    Comment: A dialysis industry organization appreciates that CMS will begin working with the kidney care community as it seeks to better understand retraining, how often it occurs, the amount of nursing time involved, and the most common reasons for it. They and many other commenters stated their support for the definition of retraining found in the Medicare Claims Processing Manual, described above. They believe that retraining does not occur often, but when it does, each retraining can vary depending on the specific circumstances. In some instances, it would be the same as training, but designated as retraining only because the patient had received home dialysis training previously. For example, when a patient changes modality, there may be consistency in partner support, but the same amount of RN training time and number of training sessions may be required to ensure that the patient understands how to operate the new device safely. The same could be true if a patient experienced a temporary memory loss. In some instances, it might be possible to reduce the number of training sessions, such as when there is a minor modification to the device, something changes in the patient's home, or the patient's dialysis partner changes. As discussed in the Medicare Claims Processing Manual, Chapter 8, Section 50.8, retraining may also be necessary when there is evidence that a patient needs a refresher in how to properly use the device because they have developed an infection or other problems. They and other organizations expressed support for CMS' efforts to improve data collection that would give CMS and providers a clearer sense of the incidence of training and retraining in the aggregate to inform policy decisions.

    A physician organization agreed, stating that some research has shown that individuals starting PD commonly develop complications like peritonitis, need hospitalization, and are transferred to catheter-based HD within the first 90 days of dialysis initiation. The organization noted that adapting to home dialysis is challenging and may indicate a need for improved initial training and a targeted increase in early retraining interventions.

    Based on an informal survey of their members, the organization suggests that retraining is warranted in the following circumstances: after any episode of peritonitis, bacteremia, or infection in which root-cause analysis suggests that the condition resulted from a break in sterility of technique; after a prolonged period of hospitalization or skilled nursing facility care, when the patient or caregiver may be out of practice; after changes in HD access (catheter to fistula or graft, new fistula or graft, especially if on the opposite side, or difficulty with cannulation at a particular part of a fistula or graft); training for use of a heparin pump; change in dialysis machine or equipment; when there is a change in who is going to perform or assist with home PD or HD (for example, if a patient has had a stroke and now their spouse will do PD, or if one caregiver is replaced by another); or when home dialysis patients move or transfer to another program (whether permanently or temporarily), reflecting that protocols, equipment, and care practices may differ among programs.

    An LDO indicated that in its experience retraining typically occurs at six-month intervals and following a hospitalization, infection, or return to therapy. The commenter agreed that in some circumstances, it can be difficult to differentiate training from retraining treatments. A patient advocacy organization urged CMS to allow flexibility for facilities to deliver retraining, when it is necessary, to ensure patients continue to dialyze safely at home. They also noted that training currently is and should continue to be individualized and tailored to the patients' needs and learning aptitude, and policies should remain flexible to ensure a patient-centered approach is attainable. A manufacturer stated that the first step will be to establish nomenclature and definitions. The commenter indicated that they plan to send a communication on this point separately, not as part of this comment process.

    Response: We appreciate the valuable information submitted and will address retraining once we are able to analyze claims data that identifies retraining treatments. We are pleased to announce that we have been approved to establish a condition code to identify retraining treatments. Change Request 9609 (https://www.cms.gov/Outreach-and-Education/Medicare-Learning-Network-MLN/MLNMattersArticles/Downloads/MM9609.pdf), titled “Updates to the 72X Type of Bill for Home and Self-Dialysis Training, Retraining, and Nocturnal Hemodialysis” and issued on September 16, 2016, establishes a condition code for retraining treatments effective July 1, 2017.

    c. Technical Correction of the Total Training Payment in the CY 2016 ESRD PPS Final Rule

    In the CY 2016 ESRD PPS final rule (80 FR 60093), we incorrectly cited the payment amount to facilities for HD training as $1,881 based on a total of 37.5 hours of training. The amount we should have cited is $1,254. The incorrect figure was the result of a multiplication error.
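
    The corrected figure follows directly from the per treatment add-on and the 25-treatment HD training limit. The sketch below reproduces the correct amount and, for comparison, the erroneous figure, which is consistent with the per treatment add-on having been multiplied by total hours (37.5) rather than by the number of treatments; this reconstruction of the error is an inference from the figures cited, not a statement from the rule.

```python
add_on_per_treatment = 50.16   # training add-on in effect at the time
hd_training_limit = 25         # maximum HD training treatments
hours_per_treatment = 1.5      # RN hours reflected in the $50.16 add-on

correct_total = add_on_per_treatment * hd_training_limit
assert round(correct_total, 2) == 1254.00    # the corrected $1,254

erroneous_total = add_on_per_treatment * (hd_training_limit * hours_per_treatment)
assert round(erroneous_total, 2) == 1881.00  # the $1,881 originally cited
```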

    We did not receive any comments on this technical correction.

    d. Analysis of ESRD Cost Report Data

    CMS evaluated 2014 ESRD cost report data in an effort to identify the nature of the specific costs reported by ESRD facilities associated with home dialysis training treatments. We found that there is a significant disparity among facilities with regard to their reported average cost per home dialysis training treatment, particularly for HD training, ranging from under $100 per treatment to as high as several thousand dollars per treatment. Because of this substantial variation, we believe that the cost report data we currently collect cannot be used to accurately gauge the adequacy of the current $50.16 per treatment training add-on and that additional cost reporting instructions are necessary. We believe that the difference between training treatment costs and maintenance treatment costs is primarily attributable to the additional staff time required for training and to inconsistencies in how related costs are reported. All other training costs, that is, equipment, supplies, and support staff, are accounted for in the ESRD PPS base rate. Based on this understanding, extreme variations in staff time should not occur, as the number of hours required should fluctuate only slightly for some patients depending on modality or other factors. However, one patient needing total nursing time of 1 to 2 hours compared to another patient needing 50 hours for the same modality indicates a lack of precision in the data.

    In response to these findings and in an effort to obtain a greater understanding of costs for dialysis facilities, and as we discussed in the CY 2017 ESRD PPS proposed rule (81 FR 42814), we are considering a 3-pronged approach to improve the quality and the value of the cost report data and to enable us to use the average cost per home dialysis training treatment reported by ESRD facilities to set the amount of the training add-on payment adjustment in the future. First, CMS would complete an in-depth analysis of cost report data elements. The analysis would assist CMS in determining what areas of the cost report are being incorrectly populated by ESRD facilities, what fields are left blank, and which ESRD facilities are deviating from the instructions for the proper completion of various fields within the report. Once we identify facilities that are deviating from proper reporting procedures, we would further evaluate the specific nature of how other ESRD facilities' cost reports were completed to see if there is a systemic problem that may be the result of imprecise instructions. If so, we would update the instructions appropriately to fix the common error. If we believe the instructions are clear but facilities are not following the guidance, we would work through the MACs to correct errors. We anticipate the result of our analysis will be greater uniformity in reporting methods and in turn, heightened data quality in future years.

    Second, in accordance with section 217(e) of PAMA, CMS is currently performing comprehensive audits of ESRD facility cost reports. We anticipate the audits will also result in greater uniformity in reporting methods and in turn, heightened data quality in future years.

    Third, we are considering an update to the independent ESRD facility cost report (CMS-265-11) to include new fields and to rework several worksheets in an effort to obtain more granularity in data on home dialysis training. Also, we are considering a locking mechanism that would prevent a facility from submitting a cost report if certain key fields have not been completed, such as those in Worksheet S, allowing CMS to capture the needed information to appropriately pay home dialysis training by an RN.

    The comments and our responses to the comments for this 3-pronged strategy to improve the ESRD cost report data are set forth below.

    Comment: Several industry organizations and clinical associations agreed that the current cost report data do not provide an accurate view of home dialysis training costs. They noted that there is significant variation between ESRD facilities' cost report data and that CMS is likely collecting data that cannot accurately gauge the adequacy of the home and self-dialysis training add-on. They believe CMS should update the cost reports and insert new fields with clear instructions on how to report training costs and labor. They and many other commenters strongly encouraged CMS to work with dialysis facilities to provide clear and accurate instructions on how to report training costs and labor to address this problem. One organization emphasized the importance of CMS working with the provider community to identify possible changes to cost reports and other data collection mechanisms and expressed their interest in working with CMS on any proposals while in development and under consideration.

    One commenter indicated that new fields on the cost report can provide additional information on patient training resource allocation (among other issues); however, they strongly recommended that the new fields be designed with clear and concise micro-specifications (that is, specific descriptions of definitions, criteria, and contents) to avoid ambiguity and multiple interpretations among dialysis facility personnel and vendors. They further recommended that these micro-specifications be released for public comment in order for CMS to appreciate how the different stakeholders interpret them and to allow for feedback and questions, thereby allowing for clarification and modifications prior to implementation. They also urged CMS to implement changes in a manner that recognizes that providers have different cost reporting periods, requiring a lead time of at least 6 months to implement. As CMS begins this data collection and analysis initiative, they recommended the inclusion of industry stakeholders to provide input on appropriate changes.

    Another commenter indicated that the proposed approach to improving the quality of cost report data, and thereby the estimate of the cost of home training, is very reasonable, as long as the locking mechanism is implemented cautiously. New fields on cost reports will probably require new fields in electronic health records and bookkeeping systems. The commenter stated that users should receive warnings and notifications when they skip mandatory fields, to avoid last-minute crises when they discover that they have omitted required data. Without such warnings, the commenter fears that the requirement to meet a filing deadline might lead some users to submit less precise data.

    Another commenter strongly supports CMS' multi-pronged effort to improve the data associated with the cost of home dialysis training treatments. In their analysis of resources necessary to deliver home training, they found similar data variances, especially between programs with a higher volume of home patients and those training only a few individuals. The commenter believes that the analysis and audits proposed will result in a greater understanding of common errors and lead to agency clarification and guidance around the reporting elements that will greatly improve data quality.

    MedPAC supports CMS' effort to collect more reliable data on the cost of providing home dialysis training. Once CMS collects sufficiently reliable data about the duration and composition of training treatments, MedPAC believes the agency should assess the need to adjust the training add-on payment amount from the current rate.

    A dialysis industry organization had thoughtful suggestions on how the current cost report might be used in a way that avoids issues with data variability. They proposed using an alternative weighting scheme based on an analysis of total HD treatments versus PD treatments that yielded a training add-on payment of $229.83 for 2017. Using cost report data, the analysis established 4.65 hours of additional staff time per training treatment and RN hourly compensation of $49.43. As a result, the organization urged CMS to increase the proposed training add-on adjustment to $229.83 per treatment for 2017.

    Response and Final Rule Action: While we appreciate the efforts made by the organization to establish a training add-on amount using the current cost report, we note that the organization's analysis addressed the variability in costs by removing facilities with extreme values and estimated the add-on based on 70 percent of facility cost reports. Although we usually apply edits to remove outlier costs from our analyses to ensure that our results are not skewed by extreme values, we did not feel comfortable removing 30 percent of the data in order to set the training add-on payment amount. Rather, we believe our proposed approach to revise the cost report will allow us to use more facility cost report data to set the training add-on payment amount.

    We appreciate the views expressed by commenters and are proceeding with changes to the ESRD facility cost report as proposed. As we work to improve the data reporting ability on claims and cost reports, we will keep in mind the various helpful suggestions made by commenters on this topic. We are considering various options for obtaining assistance from stakeholders, such as obtaining feedback via the ESRD Payment mailbox.

    e. Final Increase to the Home and Self-Dialysis Training Add-On Payment Adjustment

    Based on our analysis of ESRD facility claims and cost reports which we describe above, we are pursuing changes which we believe will enable us to use the data to set the home dialysis training add-on payment adjustment in the future. Although we have already begun the process to implement changes to the cost report and claims, it will take several years for the changes to be implemented and yield data we could use as the basis for a change in the home training add-on payment adjustment. However, each year since implementation of the ESRD PPS in 2011, we have received public comments about the inadequacy of the home dialysis training add-on payment adjustment. In addition, we are committed to ensuring that all beneficiaries who are appropriate candidates for home dialysis have access to these treatment options, which generally improve beneficiaries' quality of life. For these reasons, we looked for a reasonable proxy for the home dialysis training add-on so that we could provide additional payments to support home dialysis in the interim until we are able to make changes to the home dialysis training add-on based on claims and cost report data.

    Under the ESRD PPS, and in accordance with section 1881(b)(14)(A)(i) of the Act, we implemented a single base rate that applies to all treatments, even though PD costs facilities less than HD in terms of staff time, equipment, and supplies. To be consistent with this payment approach for routine maintenance dialysis treatments, we implemented a single home dialysis training add-on for both PD and HD, even though home dialysis training for PD takes, on average, half the time per training treatment that HD training takes.

    In order to maintain this payment approach and provide an increase in the payment for home dialysis training treatments, we proposed an increase in the single home dialysis training add-on amount for PD and HD, based on the average treatment time for PD and HD and the percentage of total training treatments for each modality as a proxy for nurse training time as described below, until such time as we have data that concretely indicates what an adequate payment should be.

    For wages, we proposed to use the latest national mean hourly wage for RNs from the Occupational Employment Statistics (http://www.bls.gov/oes/tables.htm) released by BLS ($34.14 in 2015), inflated to CY 2017 using the wages and salaries proxy used in the 2012-based ESRD bundled market basket. This would result in a new RN hourly wage of $35.93. For the hours, we proposed an increase to the number of hours of home dialysis training by an RN that is accounted for by the home dialysis training add-on. We used the average treatment times for PD and HD as proxies for training times. The sources we researched indicated that 4 hours is a clinically appropriate length of time for an HD treatment and 2 hours is a clinically appropriate length of time for a PD treatment. We noted that the Kidney Disease Outcomes Quality Initiative (KDOQI) guidelines and educational material from various patient advocacy groups are examples of these sources.

    Since PD training is approximately 67 percent of total training treatments and takes an average of 2 hours per treatment and HD is 33 percent of total training treatments and takes an average of 4 hours per treatment, we proposed to base the payment for home dialysis training on 2.66 hours of treatment time ((.67 × 2 hours) + (.33 × 4 hours) = 2.66 hours) resulting in a training add-on payment of $95.57 (2.66 hours × $35.93 = $95.57). This would provide for an increase of $45.41 per training treatment (that is, $95.57−$50.16 = $45.41). This approach would provide a significant increase in payment for home dialysis training for CY 2017 while maintaining consistent payment for both PD and HD modalities.
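
    The proposed amount can be reproduced from the figures above. The sketch below walks through the weighted-hours calculation and the resulting increase over the current $50.16 add-on; the wage inflation step is represented only as the ratio implied by the two wage figures cited, not as the actual market basket computation.

```python
# Weighted RN training hours, using average treatment times as proxies.
pd_share, hd_share = 0.67, 0.33   # shares of total training treatments
pd_hours, hd_hours = 2, 4         # proxy hours per training treatment
weighted_hours = pd_share * pd_hours + hd_share * hd_hours
assert round(weighted_hours, 2) == 2.66

# RN hourly wage: BLS 2015 figure and the proposed CY 2017 figure.
rn_wage_2015 = 34.14
rn_wage_cy2017 = 35.93
implied_inflation_factor = rn_wage_cy2017 / rn_wage_2015  # roughly 1.052

proposed_add_on = round(weighted_hours * rn_wage_cy2017, 2)
assert proposed_add_on == 95.57

increase_over_current = round(proposed_add_on - 50.16, 2)
assert increase_over_current == 45.41
```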

    As we did in CY 2014 when we last increased the training add-on payment, we proposed that the increase in the training add-on payment would be made in a budget neutral manner by applying a budget neutrality adjustment to the ESRD PPS base rate. The proposed increase resulted in a budget neutrality adjustment of 0.999729.
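
    As a generic illustration of how a budget neutrality factor of this kind is commonly constructed, the sketch below takes the ratio of projected total payments under the existing add-on to projected total payments under the increased add-on and applies it to the base rate. This is not a description of CMS' actual computation of the 0.999729 factor, and all dollar figures below are hypothetical placeholders.

```python
def budget_neutrality_factor(projected_payments_current: float,
                             projected_payments_with_increase: float) -> float:
    """Generic illustration: ratio of projected spending without and with
    the policy change. Not CMS' methodology for the 0.999729 figure cited."""
    return projected_payments_current / projected_payments_with_increase

# Hypothetical totals chosen only to show the mechanics.
factor = budget_neutrality_factor(1_000_000_000.00, 1_000_500_000.00)
adjusted_base_rate = round(230.00 * factor, 2)  # 230.00 is a placeholder base rate
```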

    The comments and our responses to the comments for the proposed increase to the home dialysis training add-on are set forth below.

    Comment: Many commenters, including patients and their care partners, nurses, and physicians described the benefits of home dialysis overall and the importance of training, and requested CMS' continued support of the modality. Commenters indicated that home dialysis is more convenient, particularly in rural settings, and stressed that training makes dialyzing at home feel safer.

    One LDO noted that dialysis modality selection is a complex decision for any individual and believes that too much attention has been paid to the training an individual receives (and the cost of such training) and too little to the myriad other factors that influence this decision. The commenter pointed out that numerous comment letters from the community and a recent report from the Government Accountability Office (GAO) have identified factors that influence decisions regarding home dialysis, including everything from an individual's home life to their familial support structure to their clinical status, as well as their physician's familiarity with home therapies.

    One commenter urged CMS to set separate payment rates for home HD training and for PD training to eliminate any payment incentive for a center to favor PD training over the more costly home HD training. The commenter indicated that the only incentive for choosing one mode of home dialysis over the other should be how closely each modality comes to making it possible for a patient to meet his or her treatment and lifestyle goals, after being fully informed about the clinical and lifestyle implications of each type of dialysis modality. Another commenter expressed support for CMS' proposals to obtain better data and noted that separately evaluating the adequacy of the payment for each unique modality may be warranted.

    A physician stated that home HD is ultimately a better treatment option medically for many patients and would like to see improved access to home training. This commenter went on to explain that in order to accomplish this dialysis centers would need to invest additional resources into home training, and the physician is hopeful that the proposed increased training payment would allow for this. The commenter noted that in their experience most dialysis centers do not offer home HD training and those that do offer training usually have a long waitlist for patients to receive the training, resulting in delays in training for patients. The commenter indicated that applying the same training payment for PD and home HD seems to benefit PD because they have not experienced delays in training PD patients due to lack of staff resources. Finally, the commenter indicated that training treatments are an essential process to transitioning patients home safely and agrees that these treatments should all be paid.

    Response: We appreciate the comments emphasizing the importance of home dialysis training and we share the commenter's hope that the increased home dialysis training add-on will lead to greater investment by ESRD facilities into home modalities and home dialysis training. We believe that dialysis modality selection and whether dialysis will occur in-center or at home is a decision made by the patient and their physician. We continue to make an effort to provide proper payment for home dialysis training because that is something we can do through the ESRD PPS to encourage more ESRD facilities to offer home modalities and home dialysis training.

    With respect to the comments requesting that we establish separate training rates for PD and HD, we will take these views into account as we contemplate revisions to the cost report to better capture training costs. However, we note that historically, we have paid the same base rate and per treatment training add-on to both PD and HD to encourage use of PD for those patients who can benefit from that modality. As we explained in the CY 2011 ESRD PPS proposed rule (74 FR 49115), composite rate costs and separately billable payments are lower for PD, and as a result, the use of a modality payment variable would result in substantially lower payments for PD patients. We stated that we believed the substantially lower payments for PD patients that would result if modality were used as a payment adjuster in the ESRD PPS would discourage the increased use of PD for patients able to use that modality (74 FR 49967). Because we want to encourage home dialysis, in which PD is currently the prevailing mode of treatment, we adopted an ESRD PPS base rate that did not rely on separate payment rates based on modality.

    With regard to the comment about the proposal to pay for all treatments during training, we will no longer apply weekly training limits during HD training. However, we continue to believe that the limit of 25 home HD training treatments is appropriate. In response to the CY 2011 ESRD PPS proposed rule, we received numerous comments requesting that CMS retain the existing policy that limits coverage of the total number of training treatments at the current levels of 15 for PD (CAPD and CCPD) and 25 for HD. In the CY 2011 ESRD PPS final rule (75 FR 49063), we agreed with the commenters and stated that under the ESRD PPS, we would continue the current cap on training treatments at 15 for PD (CAPD and CCPD) and 25 for HD training because most commenters indicated that they can complete training within these training treatment parameters. Based on an analysis of claims data, it appears that patients are still able to be trained for home dialysis within the existing limits, and we are finalizing the proposal to pay the full base rate for all treatments furnished during home dialysis training, up to the current limits of 15 for PD and 25 for HD.

    Comment: Several industry organizations, a manufacturer, and a clinical association supported the training add-on increase, but only if CMS implements the increase without applying the budget neutrality reduction to the base rate. Commenters stated that there is no requirement for CMS to make such a change in a budget neutral manner. They noted that the budget neutrality requirements associated with the ESRD PPS, as set forth in section 1881(b)(14)(A) of the Act, are plainly limited to the first year of the ESRD PPS. Because the ESRD PPS has now been in operation for many years, the commenters believe that CMS has no statutory obligation to continue to apply a budget neutrality adjustment. Another commenter indicated that the budget-neutral approach is inappropriate because the increased training add-on payments represent new costs outside of the ESRD PPS that facilities incur for a specific group of patients.

    Many commenters argued that the training add-on is different from other adjusters. For example, case-mix adjusters seek to tailor the more general base rate to ensure that facilities are not penalized for caring for patients who require more resources than others. So, while the rate goes up slightly for the more expensive patients, it is reduced for the less expensive patients. This approach seeks to even out the resources being provided.

    However, because the training rate is an add-on and not an adjuster, the commenters contend that the training add-on is not redistributing existing resources according to patient need. Rather, it is meant to reimburse facilities for additional costs that otherwise would not be necessary for the typical in-center patient. These costs are outside of the base rate and, as such, the commenters believe there is no rationale for making the adjustment budget-neutral.

    The commenter acknowledged that CMS has historically made modifications to the home dialysis training add-on in a budget-neutral manner. However, given the ongoing concerns related to the integrity of the ESRD PPS bundle, underpayments, and the growing instability of the economics of the ESRD system overall, the commenter believes there is a solid rationale for changing this policy. The commenter indicated that the ESRD PPS bundle continues to erode each year and that creating further erosion by imposing budget neutrality in the context of the training add-on is inappropriate. In the commenter's view, while a 6-cents-per-treatment reduction may be small, the ongoing systemic reduction of the base rate places in-center patients, as well as those receiving home dialysis, at risk.

    MedPAC, however, believes that CMS should make a change to the training add-on payment in a budget-neutral manner. They stated that it is unclear whether the proposed budget-neutrality adjustment factor accounts for any increase in the number of home HD training treatments eligible for Medicare payment that may result from the proposed claims adjudication process change and recommended that CMS clearly explain the methods used to calculate the budget-neutrality adjustment factor and identify the total number of training treatments accounted for by the factor.

    Response: In responding to these comments, we believe it may be helpful to first recount the significant history of the home dialysis training add-on adjustment. In the CY 2011 ESRD PPS proposed rule, we proposed that the cost for all home dialysis services would be included in the bundled payment (74 FR 49930). We noted that because we were proposing that training costs under the ESRD PPS would be treated no differently than any other overhead expense, an explicit adjustment to the bundled payment amount for HD and PD training expenditures would not be necessary (74 FR 49931). We also explained in the proposed rule that we were proposing modality-neutral payments, because PD, the predominant modality for home dialysis at that time, is less costly than HD, and we believed that estimating a prospective rate that is higher for PD than it would otherwise be would encourage home dialysis for PD patients (74 FR 49967).

    In the CY 2011 ESRD PPS final rule, we explained that we received comments encouraging us to consider utilizing an add-on payment adjustment to pay for the costs of home dialysis training. In response to those comments, we explained that although we were continuing to include training payments in computing the ESRD PPS base rate, we agreed with commenters that we should treat training as an adjustment under the ESRD PPS. Thus, we finalized the home dialysis training add-on payment adjustment of $33.44 per treatment as an additional payment made under the ESRD PPS when one-on-one home dialysis training is furnished by a nurse for either hemodialysis or peritoneal dialysis training and retraining (75 FR 49063). We chose to calculate a home dialysis training add-on payment adjustment based on one hour of nursing time because it was similar to the existing training add-on payments under the basic case-mix payment system (75 FR 49062). The amount we finalized for the adjustment, $33.44 per training treatment, was updated from the previous adjustment amount of $20 per hour and was based on the national average hourly wage for nurses from Bureau of Labor Statistics data updated to 2011 (75 FR 49063). We noted that because nursing salaries differ greatly based on geographic location, we would adjust the training add-on payment by the geographic area wage index applicable to the ESRD facility. Based on the amount of the home dialysis training add-on payment adjustment that was finalized in 2011, facilities that furnished 25 HHD training treatments would receive approximately $836 (25 × $33.44) in home dialysis training add-on adjustment payments in addition to the dollars included in the base rate to account for training costs.

    We clarified our policy on payment for home dialysis training again in the CY 2013 ESRD PPS final rule, in which we stated that training costs are included in the ESRD PPS base rate, but that we also provide an add-on adjustment for each training treatment furnished by a Medicare-certified home dialysis training facility (77 FR 67468). As such, we explained that the training add-on payment is not intended to reimburse a facility for all of the costs it incurs in furnishing training treatments. Rather, the single ESRD PPS base rate, all applicable case-mix and facility-level adjustments, and the add-on payment together should be considered the Medicare payment for each training treatment, not the training add-on payment alone. We noted that the fact that the add-on payment for training accounts for one hour of training time per treatment is not intended to imply that it takes only one hour per training session to properly educate a beneficiary to perform home dialysis.

    Then, in the CY 2014 ESRD PPS final rule (78 FR 72183), we concluded in response to public comments that the training add-on, which represented 1 hour of nursing time, did not adequately represent the staff time required to ensure that a patient is able to perform home dialysis safely. We had received numerous comments on the home dialysis training add-on payment adjustment raising concerns about access to home dialysis and identifying training elements that were not contemplated in 2011, such as self-cannulation and certain aspects of operating an HHD machine. As a result, we recomputed the add-on based upon 1.5 hours of nursing time per training treatment, which amounted to a 50 percent payment increase of $16.72 per training treatment in addition to the training treatment costs included in the base rate. Therefore, the add-on payment rose from $33.44 to $50.16. In calculating the budget neutrality factor, we used the historical number of home HD training treatments; we did not attempt to predict how much that number would change in the future under the new reporting principles. This is consistent with the approach taken for other issues in the past, such as the number of patients with comorbidity adjusters or outlier thresholds, where historical data, not speculation about future behavior, were used to set the payment parameters. We have the flexibility to make adjustments budget neutral and have chosen to do so with past adjustments; our decision to make the training add-on adjustment budget neutral is consistent with that practice.
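    The dollar amounts recounted above follow from simple arithmetic, illustrated in the sketch below (a minimal check in Python; the figures are those stated in this section, and rounding to the cent is an assumption):

        # Sketch: evolution of the home dialysis training add-on through CY 2014.
        # Dollar amounts are those recounted above; rounding to the cent is an assumption.

        cy2011_addon = 33.44                             # CY 2011 add-on, based on 1 hour of RN time
        increase = round(cy2011_addon * 0.5, 2)          # recomputing at 1.5 hours is a 50 percent increase
        cy2014_addon = round(cy2011_addon + increase, 2)

        print(increase)      # 16.72 per training treatment
        print(cy2014_addon)  # 50.16, the add-on in effect before CY 2017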

    We believe increasing the training adjustment in a budget-neutral manner is appropriate. As noted above, we consider this increase to be a temporary accommodation while we collect cost and claims data to determine a more accurate training add-on payment adjustment in the future. We are increasing the training adjustment before we are able to collect that data to ensure continued access to this important modality. However, we do not believe it is appropriate to increase overall expenditures under the ESRD PPS during this interim period. As we note above, home dialysis training is also accounted for in the base rate and not just paid for through the home dialysis training adjustment. Because of this, we view moving dollars from the base rate to the home dialysis training adjustment as a way to effectively target this modality. When we have collected sufficient data to examine the cost and utilization of home dialysis training, we will be in a better position to evaluate whether it may be more appropriate to not make the adjustment budget neutral.

    Finally, in terms of how we calculated the budget neutrality adjustment factor, we first evaluated the impact of increasing the home and self-dialysis training add-on from $50.16 (as of CY 2016) to $95.60 (which is being finalized for CY 2017). This was done by comparing the Medicare Allowable Payments (MAP) that were estimated under a PPS with the existing training add-on of $50.16 with those that were estimated under a PPS with the revised training add-on of $95.60. This comparison was made while holding other aspects of the ESRD PPS policy constant, and before determining estimated outlier payments. The number of training treatments estimated to be eligible for the adjustment was based on the most recent year of claims data. Training treatments were identified on 2015 claims containing pricer return codes indicating that the training adjustment was applied, which yielded 72,364 training treatments during 2015 based on the claims data used for this final rule. In estimating payments, the existing training add-on for CY 2016 and the revised training add-on for CY 2017 were applied to the eligible training treatments identified on the 2015 claims. The training budget neutrality adjustment factor was calculated as the ratio of the total estimated MAP when applying the CY 2016 training add-on to the total estimated MAP when applying the CY 2017 training add-on. This calculation resulted in a training budget neutrality adjustment factor of 0.999737 for CY 2017.
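    The structure of that calculation can be illustrated with the following sketch (in Python). The add-on amounts and the 72,364 treatment count are taken from the discussion above; TOTAL_MAP_BASE is a hypothetical placeholder for the total estimated MAP excluding training add-on payments, which is not reproduced in this section, so the printed ratio is only illustrative.

        # Sketch of the training budget neutrality factor calculation described above.
        # TOTAL_MAP_BASE is a hypothetical stand-in for the total estimated MAP
        # excluding training add-on payments; it is not published in this section.

        TRAINING_TREATMENTS = 72_364   # 2015 claims with the training adjustment applied
        OLD_ADDON = 50.16              # CY 2016 training add-on
        NEW_ADDON = 95.60              # CY 2017 training add-on
        TOTAL_MAP_BASE = 12.5e9        # hypothetical, for illustration only

        map_cy2016 = TOTAL_MAP_BASE + TRAINING_TREATMENTS * OLD_ADDON
        map_cy2017 = TOTAL_MAP_BASE + TRAINING_TREATMENTS * NEW_ADDON

        # Ratio applied to the base rate so that estimated aggregate payments are unchanged.
        bn_factor = map_cy2016 / map_cy2017
        print(round(bn_factor, 6))     # near the finalized 0.999737 under this hypothetical total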

    Comment: Many home dialysis advocates requested that the training add-on be increased to recognize the full cost of training and include a factor to reflect the value of employee benefits and taxes. They believe that CMS intended to reimburse the full cost of the incremental labor necessary to deliver home training treatments. Commenters pointed out that the Office of Management and Budget (OMB) suggests a benefit rate of 36.25 percent. As OMB Circular A-76 states, in calculating direct labor, agencies should include not only salaries and wages, but also other “entitlements” such as fringe benefits. CMS uses the fringe benefits assumptions from OMB Circular A-76 in calculations in other sections of the proposed rule, but neglected to apply them in the calculation of the training adjustment. The factor defined in OMB Circular A-76 for civilians is 36.25 percent. The commenters recommended that we apply the fringe benefit percentage to the reference wage rate, which would increase the wage rate from the proposed $35.93/hour to $48.95/hour ($35.93 × 1.3625) and result in a home dialysis training add-on payment of $130.21 ($48.95/hour × 2.66 hours = $130.21).
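    The commenters' arithmetic can be reproduced as follows (a sketch in Python of the commenters' proposed calculation, not of CMS methodology; rounding intermediate values to the cent is an assumption):

        # Sketch of the commenters' proposed fringe-benefit calculation (not CMS methodology).

        base_wage = 35.93        # RN hourly wage proxy cited by the commenters
        fringe_factor = 1.3625   # OMB Circular A-76 civilian fringe benefit factor (36.25 percent)
        hours = 2.66             # training hours used in the proposed add-on

        loaded_wage = round(base_wage * fringe_factor, 2)   # 48.95 per hour
        proposed_addon = round(loaded_wage * hours, 2)      # 130.21 per training treatment
        print(loaded_wage, proposed_addon)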

    Many other commenters pointed out that the proposed payment is a move in the right direction; however, the training add-on falls short of covering training costs. One commenter stated that while they appreciate CMS' proposal to increase the training add-on payment adjustment in 2017, they strongly urged CMS to raise the amount to $229.83 per treatment to better account for facility training costs. The commenters contend that the proposed amount simply does not adequately cover facility training costs to sufficiently promote and facilitate greater use of home and self-dialysis, particularly for small and medium dialysis facilities.

    Response: We did not propose the increase to the home dialysis training add-on payment amount to reflect the full cost for the RN. Instead, as we explained in the proposed rule, it has never been our intention that the training add-on payment adjustment would reimburse a facility for all of its costs associated with home dialysis training treatments. Rather, for each home dialysis training treatment, Medicare pays the ESRD PPS base rate, all applicable case-mix and facility-level adjustments, and outlier payments, plus a training add-on payment of $95.60 (as finalized below) to account for RN time devoted to training. Consistent with the original add-on methodology, we did not apply the fringe benefit factor described in OMB Circular A-76 to the training add-on proxy because the add-on was not intended to cover all costs. We further note that most of the training treatment payment is derived from the ESRD PPS payment amount, which is updated annually by the ESRD bundled market basket and includes a fringe benefits weighting factor. The home dialysis training add-on payment provides ESRD facilities with payment in addition to the ESRD PPS payment amount, which accounts for the costs associated with the actual treatment, that is, the equipment, supplies, and staffing. Therefore, the ESRD PPS payment amount plus the $95.60 (as finalized below) training add-on payment should be considered the Medicare payment for each home dialysis training treatment, not the home dialysis training add-on payment alone.

    In order to provide additional payments to support home dialysis in the interim, until we are able to make changes to the home dialysis training add-on based on claims and cost report data, we looked for a reasonable proxy for the home dialysis training add-on. We believe the interim rate, which is not intended to reflect the full cost of the RN and which nearly doubles the current training add-on payment amount, is sufficient. Once reliable data are available, we will consider whether the adjustment needs to be increased or decreased.

    Comment: Several individual commenters indicated that nursing care during training is vital to the success of the training period and that the proposed increase to 2.66 hours is a good step, but that more is needed because 3 to 3.5 hours of training better represents the typical amount of time needed. Other commenters reported that their training was 4 hours per day for 4 weeks, while others reported 8 weeks; some commenters recommended 4.5 hours, others 4 to 5 hours, and one commenter recommended 6 hours.

    However, another commenter pointed out that increasing the training add-on from 1.5 to 2.66 hours of RN labor is a move in the right direction. Providing training for patients and care partners is a critical element of facilitating and maintaining a home treatment regimen for the highest number of patients who are candidates for home dialysis. The commenter stated that as CMS works to improve its own data related to costs, this is an appropriate interim step.

    Response: We have learned through public comments that training appears to vary widely from patient to patient. As we stated above, the ESRD PPS base rate reflects the costs for the staff time involved with treatment and the training add-on serves as a supplemental payment. Furthermore, we pay based on averages. While home HD training may take 4 hours, PD takes considerably less time. As the training add-on is meant to address the training for both modalities, 2.66 hours represents the average time for both modalities, weighted by their frequency. Lastly, we believe that the updated training add-on payment rate is sufficient as an interim rate until we are able to develop a rate based on our data.

    Comment: A patient advocacy organization expressed concern that, in outlining the formula CMS used to determine the increased training adjuster, CMS referenced KDOQI guidelines on the nursing hours recommended to train patients. However, none of the KDOQI guidelines include recommendations on the number of hours a nurse is involved in training patients for PD or home HD, and the commenter is unaware of any conclusive evidence that would support such a recommendation.

    Another commenter agreed indicating that the KDOQI guidelines are clinical practice guidelines which are not based on time studies of actual training sessions. While guidelines may provide an outline of the expected time for training sessions, they do not accurately represent the time spent training home dialysis patients. The commenter encouraged CMS to continue to research and evaluate this issue to align payments with the true cost of training services.

    Response: We did not mean to imply that the KDOQI guidelines were used as a source for establishing the number of hours of RN training time. We used the KDOQI guidelines strictly for the average number of hours for an HD treatment, which is 3 to 4 hours. We intend to maintain the current amount of the training add-on, which is based on treatment times, until we are able to analyze reliable cost report data after the cost report refinements are complete, in order to align payments with the true cost of training services.

    Comment: One commenter stated that CMS allows dialysis providers 90 days to stabilize a patient on therapy and create a plan of care and questioned why that approach was not the same for training patients on a new therapy. The commenter pointed out that dialysis providers take months to train employees who already have medical backgrounds and throughout employee training, there is a mentor who continues to educate and ensure the new employee's work is thorough and reflects knowledge of the therapy and the job. The commenter questioned why we do not ensure that home dialysis patients receive the same level of intensive training.

    Response: ESRD facilities that are certified to provide home dialysis training are responsible for providing support services to patients dialyzing at home. The support services required are specified in 42 CFR 494.100(c) and include: periodic monitoring of the patient's home adaptation, including visits to the patient's home by facility personnel in accordance with the patient's plan of care; coordination of the home patient's care by a member of the dialysis facility's interdisciplinary team; and development and periodic review of the patient's individualized comprehensive plan of care that specifies the services necessary to address the patient's needs and expected outcomes.

    We thank the commenter for their suggestion. Our policy is to pay for 25 training treatments for home hemodialysis patients and 15 training treatments for peritoneal dialysis patients, and that policy remains unchanged at this time. The goal of training is to ensure that beneficiaries are able to dialyze safely and independently at home once training is complete. We do allow for additional retraining treatments for specific reasons detailed in the Medicare Claims Processing Manual (Pub. 100-04, Chapter 8, section 50.8). We will consider this comment as we collect data and evaluate our training and retraining policies.

    Comment: An LDO indicated that CMS needs to ensure that it does not create a perverse incentive for physicians to start patients on a modality that is unlikely to succeed for them. The commenter does not observe an access barrier to home HD, and they noted that they do not turn away eligible patients from this modality. However, they are mindful of the long-term viability of this modality for many of their patients given the burdens it places on them and their care partners. Rather than view home HD myopically as a stand-alone therapy as some in the dialysis community seek to do, they agree with CMS that home HD must be viewed in the broader context of the overall performance of the ESRD PPS.

    Response: As we have previously stated, the decision about modality selection and location is determined by the patient and their physician. We rely on the physician to recommend home HD only for those patients who have the ability to learn the dialysis process and dialyze themselves at home, with the support of their ESRD facility.

    Comment: One commenter pointed out that the 67 percent/33 percent weighting used in the calculation appears to assume that the dialysis training add-on payment is paid for all PD training treatments, when, in fact, most are paid under the new patient adjustment, or more specifically, the onset of dialysis payment adjustment. The commenter urged CMS to recalculate the proxy to take into account only those PD training sessions that actually receive the training add-on payment rather than those that are paid under the new patient adjustment (onset of dialysis adjustment).

    Response: When patients are in the onset of dialysis period (the first 4 months of dialysis), the ESRD facility receives the onset of dialysis adjustment and does not receive the training add-on payment adjustment. As a result, the calculation for the weighting ratio of PD included only PD treatments with the home dialysis training add-on payment applied, which is what we understand the commenter to suggest. We believe that ESRD facilities correctly accounted for all PD treatments during training because they receive the full ESRD PPS base rate for training treatments rather than the HD-equivalent rate they receive for treatments after training is completed.

    Comment: One commenter recommended that CMS provide for an annual inflation adjustment to the training add-on payment.

    Response: In consideration of industry concerns about applying the training add-on in a budget neutral manner, we are not implementing an annual inflation update to the training add-on. Instead, we intend to monitor changes in the BLS data to determine if an update to the national average RN hourly wage is warranted. If we determine an update is necessary, we would propose a change to the training add-on and solicit public comments.

    Comment: One organization commented that it would have been more appropriate for CMS to use the BLS RN salary for Outpatient Care Centers (Industry Group 621400) in the BLS Occupational Employment Statistics, and that the more appropriate wage proxy for renal nurses is therefore the national mean hourly wage for RNs (Occupation 29-1141) in the Outpatient Care Centers industry group. The commenter pointed out that the data collected by BLS are gross pay wages, excluding overtime, shift differentials, and employer cost of supplemental benefits.

    Response: We agree that the BLS data provide various wages for RNs that we could have proposed to use for establishing an interim increase for the home dialysis training add-on, and we are aware that the BLS data are gross wages, without supplemental benefits. We looked at many sources of wage data and selected the BLS because its Occupational Employment Statistics (OES) program provides comprehensive wage data that are updated annually and identify wages by setting. In CY 2011, when we first established the training add-on, we based it on the national RN average hourly wage because we believed that the training activities we were paying for were best reflected in that wage rather than in any of the other categories the BLS data include.

    We do not believe that use of the Outpatient Care Center group wage is a better reflection of the training performed by these RNs, and, for this reason, we are utilizing the BLS wage rate we proposed.

    Final Rule Action: We are finalizing the proposal to base the payment for home dialysis training on 2.66 hours of treatment time ((0.67 × 2 hours) + (0.33 × 4 hours) = 2.66 hours), resulting in a training add-on payment of $95.60 (2.66 hours × $35.94 = $95.60). This provides an increase of $45.44 per training treatment (that is, $95.60 − $50.16 = $45.44). This approach provides a significant increase in payment for home dialysis training for CY 2017 while maintaining consistent payment for both PD and HD modalities. We intend to apply the above referenced payment amount, without adjustment, until we have empirical evidence for a change, which could increase or decrease the home dialysis training add-on payment amount. We are also finalizing the home and self-dialysis training add-on budget neutrality adjustment factor of 0.999737.
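    The arithmetic of this final rule action can be checked with the following sketch (in Python; the weights, hours, and wage rate are those stated above, and rounding to the cent is an assumption):

        # Sketch of the finalized CY 2017 training add-on calculation described above.

        pd_share, pd_hours = 0.67, 2    # PD training treatments, weighted at 67 percent
        hd_share, hd_hours = 0.33, 4    # home HD training treatments, weighted at 33 percent
        rn_hourly_wage = 35.94          # national average RN hourly wage used in this rule

        weighted_hours = round(pd_share * pd_hours + hd_share * hd_hours, 2)  # 2.66 hours
        new_addon = round(weighted_hours * rn_hourly_wage, 2)                 # 95.60
        increase = round(new_addon - 50.16, 2)                                # 45.44 over the CY 2016 add-on

        print(weighted_hours, new_addon, increase)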

    3. Final CY 2017 ESRD PPS Update

    a. Final CY 2017 ESRD Market Basket Update, Productivity Adjustment, and Labor-Related Share for the ESRD PPS

    In accordance with section 1881(b)(14)(F)(i) of the Act, as added by section 153(b) of MIPPA and amended by section 3401(h) of the Affordable Care Act, beginning in 2012, the ESRD PPS payment amounts are required to be annually increased by an ESRD market basket increase factor and reduced by the productivity adjustment described in section 1886(b)(3)(B)(xi)(II) of the Act. The application of the productivity adjustment may result in the increase factor being less than 0.0 for a year and may result in payment rates for a year being less than the payment rates for the preceding year. The statute also provides that the market basket increase factor should reflect the changes over time in the prices of an appropriate mix of goods and services used to furnish renal dialysis services.

    Section 1881(b)(14)(F)(i)(I) of the Act, as added by section 217(b)(2)(A) of PAMA, provides that in order to accomplish the purposes of subparagraph (I) with respect to 2016, 2017, and 2018, after determining the market basket percentage increase factor for each of 2016, 2017, and 2018, the Secretary shall reduce such increase factor by 1.25 percentage points for each of 2016 and 2017 and by 1.0 percentage point for 2018. Accordingly, for CY 2017, we proposed to reduce the amount of the market basket percentage increase by 1.25 percentage points and to further reduce it by the productivity adjustment.

    We proposed to use the CY 2012-based ESRDB market basket as finalized and described in the CY 2015 ESRD PPS final rule (79 FR 66129 through 66136) to compute the CY 2017 ESRDB market basket increase factor and labor-related share based on the best available data. Consistent with historical practice, we estimate the ESRDB market basket update based on the IHS Global Insight, Inc. (IGI) forecast using the most recently available data. IGI is a nationally recognized economic and financial forecasting firm that contracts with CMS to forecast the components of the market baskets.

    As a result of these provisions, and using the IGI forecast for the first quarter of 2016 of the CY 2012-based ESRDB market basket (with historical data through the fourth quarter of 2015), the proposed CY 2017 ESRD market basket increase was 0.35 percent. This market basket increase was calculated by starting with the proposed CY 2017 ESRDB market basket percentage increase factor of 2.1 percent, reducing it by the mandated legislative adjustment of 1.25 percentage points (required by section 1881(b)(14)(F)(i)(I) of the Act), and reducing it further by the multifactor productivity (MFP) adjustment (the 10-year moving average of MFP for the period ending CY 2017) of 0.5 percentage point. As is our general practice, we proposed that if more recent data were subsequently available (for example, a more recent estimate of the market basket or MFP adjustment), we would use such data to determine the CY 2017 market basket update and MFP adjustment in the CY 2017 ESRD PPS final rule.

    For the CY 2017 ESRD payment update, we proposed to continue using a labor-related share of 50.673 percent for the ESRD PPS payment, which was finalized in the CY 2015 ESRD final rule (79 FR 66136).

    We did not receive any comments on the proposed market basket update, MFP adjustment, or labor-related share.

    Final Rule Action: As noted above, the final CY 2017 market basket update and MFP adjustment in this ESRD PPS final rule are based on the most recent forecast data available. Therefore, using the most recent data available, the final CY 2017 ESRDB market basket update is 0.55 percent. This is based on a 2.1 percent market basket increase, less the 1.25 percentage point reduction required by section 1881(b)(14)(F)(i)(I) of the Act, as amended by section 217(b)(2)(A)(ii) of PAMA, and further reduced by a 0.3 percentage point MFP adjustment. The CY 2017 ESRDB market basket update and MFP adjustment are based on the IGI 3rd quarter 2016 forecast with historical data through the 2nd quarter of 2016.
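    The proposed and final update arithmetic described above can be illustrated with the following sketch (in Python; the percentages are those stated in this section):

        # Sketch of the CY 2017 ESRDB market basket update arithmetic described above.

        market_basket = 2.1      # CY 2017 ESRDB market basket percentage increase factor
        pama_reduction = 1.25    # reduction required by section 1881(b)(14)(F)(i)(I) of the Act

        proposed_update = market_basket - pama_reduction - 0.5   # proposed MFP adjustment of 0.5
        final_update = market_basket - pama_reduction - 0.3      # final MFP adjustment of 0.3

        print(round(proposed_update, 2), round(final_update, 2))  # 0.35 and 0.55 percent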

    b. The Final CY 2017 ESRD PPS Wage Indices

    i. Annual Update of the Wage Index

    Section 1881(b)(14)(D)(iv)(II) of the Act provides that the ESRD PPS may include a geographic wage index payment adjustment, such as the index referred to in section 1881(b)(12)(D) of the Act, as the Secretary determines to be appropriate. In the CY 2011 ESRD PPS final rule (75 FR 49117), we finalized the use of the Office of Management and Budget's (OMB) Core-Based Statistical Area (CBSA)-based geographic area designations to define urban and rural areas and their corresponding wage index values. OMB publishes bulletins regarding CBSA changes, including changes to CBSA numbers and titles; these bulletins are available online at http://www.whitehouse.gov/omb/bulletins_index2003-2005.

    For CY 2017, we stated that we would continue to use the same methodology as finalized in the CY 2011 ESRD PPS final rule (75 FR 49117) for determining the wage indices for ESRD facilities. Specifically, we are updating the wage indices for CY 2017 to account for updated wage levels in areas in which ESRD facilities are located. We use the most recent pre-floor, pre-reclassified hospital wage data collected annually under the inpatient prospective payment system. The ESRD PPS wage index values are calculated without regard to geographic reclassifications authorized under section 1886(d)(8) and (d)(10) of the Act and utilize pre-floor hospital data that are unadjusted for occupational mix. The final CY 2017 wage index values for urban areas are listed in Addendum A (Wage Indices for Urban Areas) and the final CY 2017 wage index values for rural areas are listed in Addendum B (Wage Indices for Rural Areas). Addenda A and B are located on the CMS Web site at https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/ESRDpayment/End-Stage-Renal-Disease-ESRD-Payment-Regulations-and-Notices.html.

    In the CY 2011 and CY 2012 ESRD PPS final rules (75 FR 49116 through 49117 and 76 FR 70239 through 70241, respectively), we also discussed and finalized the methodologies we use to calculate wage index values for ESRD facilities that are located in urban and rural areas where there is no hospital data. For urban areas with no hospital data, we compute the average wage index value of all urban areas within the State and use that value as the wage index. For rural areas with no hospital data, we compute the wage index using the average wage index values from all contiguous CBSAs to represent a reasonable proxy for that rural area.
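    The proxy methodology described above can be sketched as follows (in Python; the function names and the CBSA wage index values are hypothetical and for illustration only):

        # Sketch of the proxy methodology for areas with no hospital wage data,
        # as described above. Wage index values below are hypothetical.

        def urban_proxy(state_urban_wage_indices):
            # Average wage index value of all urban areas within the State.
            return sum(state_urban_wage_indices) / len(state_urban_wage_indices)

        def rural_proxy(contiguous_cbsa_wage_indices):
            # Average wage index value of all CBSAs contiguous to the rural area.
            return sum(contiguous_cbsa_wage_indices) / len(contiguous_cbsa_wage_indices)

        print(round(urban_proxy([0.91, 0.97, 1.02]), 4))  # hypothetical urban proxy
        print(round(rural_proxy([0.88, 0.93]), 4))        # hypothetical rural proxy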

    We apply the wage index for Guam as established in the CY 2014 ESRD PPS final rule (78 FR 72172) (0.9611) to American Samoa and the Northern Mariana Islands. We apply the statewide urban average based on the average of all urban areas within the state (78 FR 72173) (0.8637) to Hinesville-Fort Stewart, Georgia. We note that if hospital data becomes available for these areas, we will use that data for the appropriate CBSAs instead of the proxy.

    A wage index floor value has been used in lieu of the calculated wage index values below the floor in making payment for renal dialysis services under the ESRD PPS. In the CY 2011 ESRD PPS final rule (75 FR 49116 through 49117), we finalized that we would continue to reduce the wage index floor by 0.05 for each of the remaining years of the ESRD PPS transition. In the CY 2012 ESRD PPS final rule (76 FR 70241), we finalized the 0.05 reduction to the wage index floor for CYs 2012 and 2013, resulting in a wage index floor of 0.5500 and 0.5000, respectively. We continued to apply and to reduce the wage index floor by 0.05 in the CY 2013 ESRD PPS final rule (77 FR 67459 through 67461). Although our intention initially was to provide a wage index floor only through the 4-year transition to 100 percent implementation of the ESRD PPS (75 FR 49116 through 49117; 76 FR 70240 through 70241), in the CY 2014 ESRD PPS final rule (78 FR 72173), we continued to apply the wage index floor and continued to reduce the floor by 0.05 per year for CY 2014 and for CY 2015.

    In the CY 2016 ESRD PPS final rule (80 FR 69006 through 69008), we finalized the continuation of the application of the wage index floor of 0.4000 to areas with wage index values below the floor, rather than reducing the floor by 0.05. We stated in that rule that we needed more time to study the wage indices that are reported for Puerto Rico to assess the appropriateness of discontinuing the wage index floor. Also, in that rule, a commenter provided several alternative wage indexes for Puerto Rico for the CY 2016 ESRD PPS final rule: (1) Utilize our policy for areas that do not have reliable hospital data by applying the wage index for Guam as we did in implementing the ESRD PPS in the Northern Marianas and American Samoa; (2) use the U.S. Virgin Islands as a proxy for Puerto Rico, given the geographic proximity and its “non-mainland” or “island” nature; or (3) reestablish the wage index floor in effect in 2010, when Puerto Rico became the only wage area subject to the floor, that is, 0.65.

    For the CY 2017 proposed rule, we analyzed ESRD facility cost report and claims data submitted by facilities located in Puerto Rico and compared them to data for mainland facilities. Specifically, we analyzed CY 2013 claims and cost report data for 37 freestanding Puerto Rico facilities and compared them to data for 5,024 non-Puerto Rico freestanding facilities. We found that the freestanding facilities in Puerto Rico are larger than facilities elsewhere in the United States. The Puerto Rico facilities furnish roughly twice the number of treatments as other facilities, and this larger size likely results in higher labor productivity. Finally, dialysis patients in Puerto Rico are much more likely to be non-Medicare. We discussed these findings in detail in the CY 2017 proposed rule (81 FR 42817).

    Therefore, for CY 2017, we solicited public comments on the wage index for CBSAs in Puerto Rico as part of our continuing effort to determine an appropriate course of action. We did not propose to change the wage index floor for CBSAs in Puerto Rico, but requested public comments so that stakeholders could provide useful input for consideration in future decision-making. Specifically, we solicited comment on the suggestions that were submitted in response to last year's final rule (80 FR 69007) and reiterated above.

    The comments and our responses to the comments for the proposal and solicitation are set forth below.

    Comment: An LDO that operates 27 ESRD facilities in Puerto Rico pointed out that the continued gradual reduction in the wage index floor has impaired operations in Puerto Rico, since all areas of the island have been subject to the floor due to low wage index values. This commenter appreciates CMS' proposal to apply a wage index of 0.40 to areas with a wage index below the floor for CY 2017, but believes the Agency must do more. Until CMS is able to adjust the wage index used to calculate ESRD facility reimbursements and fully take into account the totality of circumstances challenging facilities operating in Puerto Rico, they recommend that the wage index floor be re-instituted at a level that will avoid a negative impact on dialysis facilities. They recommend that CMS consider using the wage index for Guam or the Virgin Islands, as they are similar to Puerto Rico in their island and U.S. territory status. The commenter believed CMS' policy to utilize the same wage index as Guam for the Northern Marianas and American Samoa could serve as a precedent for doing the same thing for Puerto Rico. The commenter does not believe maintaining a wage index of 0.40 for CY 2017 in Puerto Rico is adequate to offset the poor economic conditions to which patients and dialysis facilities are exposed.

    An organization of community stakeholders agreed, suggesting that CMS apply ESRD wage indexes in Puerto Rico that are consistent with other territories through the use of a temporary proxy. This group is requesting urgent administrative action from CMS. They are requesting that CMS: (1) Re-establish a fair and meaningful wage index floor given factual uncertainties and the demonstrated anomalies with the wage index for Puerto Rico; (2) Establish a temporary alternative wage index for Puerto Rico, given the observed disadvantage and the inconsistencies with the indexes used for other Territories; and (3) Ensure the corresponding adjustment in MA benchmarks for ESRD to secure the appropriate support to the Medicare program that serves 90 percent of all the Medicare A & B beneficiaries in Puerto Rico.

    However, an industry organization expressed support for our current methodology for determining the wage indices and the continued application of the wage index floor of 0.4000.

    Response: For the commenters that asked us to take an administrative action to establish a temporary alternative wage index value for Puerto Rico until we are able to correct the anomalies, we are, unfortunately, unable to do so for several reasons. First, we did not propose an alternative to the wage indices for Puerto Rico based on reported hospital wage data. Rather, we presented various alternatives and requested public comment on those alternatives. We would need to have proposed changes to the Puerto Rico wage index in order to finalize a change to that wage index. With regard to the corresponding adjustment in MA benchmarks for ESRD to secure the appropriate support to the Medicare program, we note that this comment is beyond the scope of the proposed rule.

    One of the commenters who addressed the proposed wage index alternatives expressed an interest in basing the wage indices for Puerto Rico CBSAs on the wage values applied to other U.S. Territories and another commenter suggested applying the wage value for the U.S. Virgin Islands. The only other recommendation was maintenance of the current floor of 0.4000 with no comment on the alternatives in the proposed rule.

    When we developed the wage indices for the Pacific Rim territories in the CY 2014 ESRD PPS final rule (78 FR 40845), we applied the methodologies we use to calculate wage index values for ESRD facilities that are located in urban and rural areas where there is no hospital data. Those policies were finalized in the CY 2011 and CY 2012 ESRD PPS final rules (75 FR 49116 through 49117 and 76 FR 70239 through 70241, respectively). For urban areas with no hospital data, we compute the average wage index value of all urban areas within the State and use that value as the wage index. For rural areas with no hospital data, we compute the wage index using the average wage index values from all contiguous CBSAs to represent a reasonable proxy for that rural area.

    As we explained in the CY 2014 ESRD PPS final rule (78 FR 72172 through 72173), in the case of American Samoa and the Northern Mariana Islands, we determined that Guam represented a reasonable proxy because the islands are located within the Pacific Rim and share a common status as United States Territories. In addition, the Northern Marianas and American Samoa are rural areas with no hospital data. Therefore, we used the established methodology to compute an appropriate wage index using the average wage index values from contiguous CBSAs, to represent a reasonable proxy. While the islands of the Pacific Rim are not actually contiguous, we determined that Guam is a reasonable proxy for American Samoa and the Northern Marianas.

    The primary difference between how we handled the wage index for the Pacific Rim islands and the situation in Puerto Rico is that, for the Pacific Rim islands, we were able to rely upon existing policy for determining a wage index for areas with no hospital data. We have hospital data upon which to base wage index values for Puerto Rico CBSAs, so our policy for CBSAs without wage index data does not apply to Puerto Rico, even though its wage index data result in very low wage index values compared to other Territories and mainland CBSAs. This is a complex policy issue that cannot be resolved for CY 2017. We intend to continue analysis in this area so that we can address this issue in a future rulemaking.

    Final Rule Action: After considering the public comments we received regarding the wage index, we are finalizing the CY 2017 ESRD PPS wage indices based on the latest hospital wage data as proposed. In addition, we are maintaining a wage index floor of 0.4000.

    ii. Application of the Wage Index Under the ESRD PPS

    A facility's wage index is applied to the labor-related share of the ESRD PPS base rate. In the CY 2015 ESRD PPS final rule (79 FR 66136), we finalized a new labor-related share of 50.673 percent, which was based on the 2012-based ESRDB market basket finalized in that rule, and transitioned the new labor-related share over a 2-year period. Thus, for CY 2017, the labor-related share to which a facility's wage index would be applied is 50.673 percent.
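    As an illustration of how a facility's wage index interacts with the labor-related share (a minimal sketch in Python; the base rate and wage index values shown are hypothetical, and all other case-mix and facility-level adjustments are omitted):

        # Sketch: applying a facility's wage index to the labor-related share of the
        # ESRD PPS base rate. The base rate and wage index below are hypothetical and
        # all other applicable adjustments are omitted.

        LABOR_RELATED_SHARE = 0.50673   # finalized labor-related share for CY 2017

        def wage_adjusted_base_rate(base_rate, wage_index):
            labor_portion = base_rate * LABOR_RELATED_SHARE * wage_index
            nonlabor_portion = base_rate * (1 - LABOR_RELATED_SHARE)
            return labor_portion + nonlabor_portion

        print(round(wage_adjusted_base_rate(230.00, 0.9500), 2))  # hypothetical inputs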

    c. CY 2017 Update to the Outlier Policy

    Section 1881(b)(14)(D)(ii) of the Act requires that the ESRD PPS include a payment adjustment for high cost outliers due to unusual variations in the type or amount of medically necessary care, including variability in the amount of erythropoiesis stimulating agents (ESAs) necessary for anemia management. Some examples of the patient conditions that may be reflective of higher facility costs when furnishing dialysis care would be frailty, obesity, and comorbidities such as cancer. The ESRD PPS recognizes high cost patients, and we have codified the outlier policy in our regulations at 42 CFR 413.237. The policy provides that the following ESRD outlier items and services are included in the ESRD PPS bundle: (i) ESRD-related drugs and biologicals that were or would have been, prior to January 1, 2011, separately billable under Medicare Part B; (ii) ESRD-related laboratory tests that were or would have been, prior to January 1, 2011, separately billable under Medicare Part B; (iii) medical/surgical supplies, including syringes, used to administer ESRD-related drugs, that were or would have been, prior to January 1, 2011, separately billable under Medicare Part B; and (iv) renal dialysis service drugs that were or would have been, prior to January 1, 2011, covered under Medicare Part D, excluding oral-only drugs used in the treatment of ESRD.

    In the CY 2011 ESRD PPS final rule (75 FR 49142), we stated that for purposes of determining whether an ESRD facility would be eligible for an outlier payment, it would be necessary for the facility to identify the actual ESRD outlier services furnished to the patient by line item (that is, date of service) on the monthly claim. Renal dialysis drugs, laboratory tests, and medical/surgical supplies that are recognized as outlier services were originally specified in Attachment 3 of Change Request 7064, Transmittal 2033 issued August 20, 2010, rescinded and replaced by Transmittal 2094, dated November 17, 2010. Transmittal 2094 identified additional drugs and laboratory tests that may also be eligible for ESRD outlier payment. Transmittal 2094 was rescinded and replaced by Transmittal 2134, dated January 14, 2011, which was issued to correct the subject on the Transmittal page and made no other changes.

    Furthermore, we use administrative issuances and guidance, including our quarterly update CMS Change Requests when applicable, to continually update the renal dialysis service items available for outlier payment. We use this separate guidance to identify renal dialysis service drugs that were or would have been covered under Part D for outlier eligibility purposes and to provide unit prices for calculating imputed outlier services. In addition, through our monitoring efforts we identify items and services that are incorrectly being identified as eligible outlier services, as well as any new items and services that may require an update to the list of renal dialysis items and services that qualify as outlier services; these updates are made through administrative issuances.

    Our regulations at 42 CFR 413.237 specify the methodology used to calculate outlier payments. An ESRD facility is eligible for an outlier payment if its actual or imputed MAP amount per treatment for ESRD outlier services exceeds a threshold. The MAP amount represents the average incurred amount per treatment for services that were or would have been considered separately billable services prior to January 1, 2011. The threshold is equal to the ESRD facility's predicted ESRD outlier services MAP amount per treatment (which is case-mix adjusted) plus the fixed-dollar loss amount. In accordance with § 413.237(c) of our regulations, facilities are paid 80 percent of the per treatment amount by which the imputed MAP amount for outlier services (that is, the actual incurred amount) exceeds this threshold. ESRD facilities are eligible to receive outlier payments for treating both adult and pediatric dialysis patients.

    In the CY 2011 ESRD PPS final rule, using 2007 data, we established the outlier percentage at 1.0 percent of total payments (75 FR 49142 through 49143). We also established the fixed-dollar loss amounts that are added to the predicted outlier services MAP amounts. The outlier services MAP amounts and fixed-dollar loss amounts are different for adult and pediatric patients due to differences in the utilization of separately billable services among adult and pediatric patients (75 FR 49140). As we explained in the CY 2011 ESRD PPS final rule (75 FR 49138 through 49139), the predicted outlier services MAP amounts for a patient are determined by multiplying the adjusted average outlier services MAP amount by the product of the applicable patient-specific case-mix adjusters, using the outlier services payment multipliers developed from the regression analysis used to compute the payment adjustments.
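    A minimal sketch of the per treatment outlier calculation described above and in § 413.237 follows (in Python). The 80 percent loss-sharing percentage, the threshold construction, and the predicted MAP formula follow the description above; the numeric inputs, including the case-mix multipliers and the imputed MAP amount, are hypothetical.

        # Sketch of the per treatment outlier payment calculation described above.
        # Numeric inputs are hypothetical.

        def predicted_outlier_map(adjusted_average_map, case_mix_multipliers):
            # Adjusted average outlier services MAP amount times the product of the
            # applicable outlier services payment multipliers.
            predicted = adjusted_average_map
            for multiplier in case_mix_multipliers:
                predicted *= multiplier
            return predicted

        def outlier_payment(imputed_map, adjusted_average_map, case_mix_multipliers,
                            fixed_dollar_loss, loss_sharing=0.80):
            threshold = predicted_outlier_map(adjusted_average_map,
                                              case_mix_multipliers) + fixed_dollar_loss
            return loss_sharing * max(0.0, imputed_map - threshold)

        # Hypothetical adult example using the CY 2017 adjusted average MAP ($45.00)
        # and fixed-dollar loss amount ($82.92) from Table 1 below; the imputed MAP
        # and case-mix multipliers are invented for illustration.
        print(round(outlier_payment(160.00, 45.00, [1.02, 1.05], 82.92), 2))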

    For the CY 2017 outlier policy, we used the existing methodology for determining outlier payments by applying outlier services payment multipliers that were developed for the CY 2016 ESRD PPS final rule (80 FR 68993-68994, 69002). We used these outlier services payment multipliers to calculate the predicted outlier service MAP amounts and projected outlier payments for CY 2017.

    For CY 2017, we proposed that the outlier services MAP amounts and fixed-dollar loss amounts would be derived from claims data from CY 2015. Because we believe that any adjustments made to the MAP amounts under the ESRD PPS should be based upon the most recent data year available in order to best predict any future outlier payments, we proposed that the outlier thresholds for CY 2017 would be based on utilization of renal dialysis items and services furnished under the ESRD PPS in CY 2015. We recognize that the utilization of ESAs and other outlier services has continued to decline under the ESRD PPS, and that we have lowered the MAP amounts and fixed-dollar loss amounts every year under the ESRD PPS. We continue to believe that data for CY 2015 reflect relatively stable ESA use, in contrast with the relatively large initial declines in the use of both EPO and darbepoetin in the first 2 years of the ESRD PPS. In 2015, based on estimates of average ESA utilization per session, there were both decreases in the use of EPO and increases in the use of darbepoetin, suggesting a relative shift towards the use of darbepoetin between 2014 and 2015.

    i. CY 2017 Update to the Outlier Services MAP Amounts and Fixed-Dollar Loss Amounts

    For CY 2017, we did not propose any change to the methodology used to compute the MAP or fixed-dollar loss amounts. Rather, we proposed to update the outlier services MAP amounts and fixed-dollar loss amounts to reflect the utilization of outlier services reported on 2015 claims. For this final rule, the outlier services MAP amounts and fixed-dollar loss amounts were updated using 2015 claims data. The impact of this update is shown in Table 1, which compares the outlier services MAP amounts and fixed-dollar loss amounts used for the outlier policy in CY 2016 with the updated estimates for this final rule. The estimates for the final CY 2017 outlier policy, which are included in Column II of Table 1, were inflation adjusted to reflect projected 2017 prices for outlier services.

    Table 1—Outlier Policy: Impact of Using Updated Data To Define the Outlier Policy

                                                        Column I                                Column II
                                             Final outlier policy for CY 2016      Final outlier policy for CY 2017
                                             (based on 2014 data price             (based on 2015 data price
                                             inflated to 2016) *                   inflated to 2017)
                                             -------------------------------       -------------------------------
                                             Age <18             Age ≥18           Age <18             Age ≥18

    Average outlier services MAP amount
      per treatment......................    $40.20              $53.29            $38.77              $47.00
    Adjustments:
      Standardization for outlier
        services.........................    0.9951              0.9729            1.0078              0.9770
      MIPPA reduction....................    0.98                0.98              0.98                0.98
    Adjusted average outlier services
      MAP amount.........................    $39.20              $50.81            $38.29              $45.00
    Fixed-dollar loss amount that is
      added to the predicted MAP to
      determine the outlier threshold....    $62.19              $86.97            $68.49              $82.92
    Patient months qualifying for
      outlier payment....................    5.8%                6.5%              4.6%                6.7%

    As demonstrated in Table 1, the estimated fixed-dollar loss amount per treatment that determines the CY 2017 outlier threshold amount for adults (Column II; $82.92) is lower than that used for the CY 2016 outlier policy (Column I; $86.97). The lower threshold is accompanied by a decline in the adjusted average MAP for outlier services from $50.81 to $45.00. For pediatric patients, there is an increase in the fixed-dollar loss amount from $62.19 to $68.49, and a decrease in the adjusted average MAP for outlier services from $39.20 to $38.29.
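    The adjusted average outlier services MAP amounts shown in Table 1 can be reproduced from the preceding rows (a sketch in Python; rounding to the cent is an assumption):

        # Sketch: reproducing the adjusted average outlier services MAP amounts in
        # Table 1 from the average MAP amount, the standardization factor, and the
        # 0.98 MIPPA reduction.

        rows = {
            # label: (average MAP amount, standardization factor)
            "CY 2016, age <18":  (40.20, 0.9951),
            "CY 2016, age >=18": (53.29, 0.9729),
            "CY 2017, age <18":  (38.77, 1.0078),
            "CY 2017, age >=18": (47.00, 0.9770),
        }
        MIPPA_REDUCTION = 0.98

        for label, (average_map, standardization) in rows.items():
            adjusted = round(average_map * standardization * MIPPA_REDUCTION, 2)
            print(label, adjusted)   # 39.20, 50.81, 38.29, and 45.00, matching Table 1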

    We estimate that the percentage of patient months qualifying for outlier payments in CY 2017 will be 6.7 percent for adult patients and 4.6 percent for pediatric patients, based on the 2015 claims data. The pediatric outlier MAP and fixed-dollar loss amounts continue to be lower than the adult amounts due to the continued lower use of outlier services by pediatric patients (primarily reflecting lower use of ESAs and other injectable drugs).

    ii. Outlier Percentage

    In the CY 2011 ESRD PPS final rule (75 FR 49081), in accordance with 42 CFR 413.220(b)(4), we reduced the per treatment base rate by 1 percent to account for the proportion of the estimated total payments under the ESRD PPS that are outlier payments. Based on the 2015 claims, outlier payments represented approximately 0.93 percent of total payments, close to the 1 percent target. Recalibration of the thresholds using 2015 data is expected to result in aggregate outlier payments close to the 1 percent target in CY 2017. We believe the update to the outlier MAP and fixed-dollar loss amounts for CY 2017 will increase payments for ESRD beneficiaries requiring higher resource utilization and move us closer to meeting our 1 percent outlier policy. We note that recalibration of the fixed-dollar loss amounts in this final rule would result in no change in payments to ESRD facilities for beneficiaries with renal dialysis items and services that are not eligible for outlier payments, but would increase payments to ESRD facilities for beneficiaries with renal dialysis items and services that are eligible for outlier payments. Therefore, beneficiary co-insurance obligations would also increase for renal dialysis services eligible for outlier payments.

    The comments and our responses to the comments for the proposal to update the outlier thresholds using CY 2015 data are set forth below.

    Comment: A national industry organization stated they were pleased that CMS has refined the outlier pool to align the dollars paid out more closely with the estimated amount used to create the outlier pool. However, they noted that the alignment has not yet addressed the fact that the outlier pool consistently pays out less than the amount removed from the base rate. Commenters estimated that the outlier pool underpaid by $0.68 per treatment in 2015. The organization stated that other Medicare payment systems at times pay out less than the estimate and at other times pay out more, and that this fluctuation above and below the estimate indicates that the outlier pool amount is appropriate. The organization strongly encouraged CMS to further refine the outlier policy so that it is more consistent with how outlier policies in other Medicare payment systems work.

    Another industry organization indicated that, since the outlier threshold has not been met since the implementation of the ESRD PPS and continues to fall short of 1 percent, CMS should propose a 0.5 percent outlier percentage for CY 2018. This 0.5 percent outlier percentage would reduce the offset to the base rate yet continue to provide payment for extraordinary costs. An MDO would prefer that CMS remove the outlier provision from the payment system; however, they understand that an outlier policy is statutorily required. Since CMS does not have the authority to remove the provision, they also suggested that the outlier percentage be reduced to 0.5 percent.

    A professional association stated that they appreciate the efforts of CMS to recognize that the needs of all patients are not universally equal, and that a minority of patients will require treatments that carry markedly higher costs than the average ESRD patient. They support the concept of an outlier policy to sufficiently reimburse dialysis facilities for implementing necessary dialysis-related treatments to meet the needs of these patients and established therapeutic goals. However, in their view, total outlier payments should equal the amount withheld from the base rate.

    As CMS continues to assess the outlier policy in future years, they suggested that future adjustments to the outlier payment threshold be made annually so as to fully expend the withheld amount, or that the withholding be adjusted based on the running average of expenditures from the prior 3 years (not to exceed 1 percent).

    Response: We appreciate the commenters' support for the outlier policy. As we explained above, our analysis of ESRD PPS claims shows that outlier payments in 2015 were 0.93 percent of total payments, just below the 1.0 percent outlier target. Specifically, outlier payments were made for 200,544 patient months, totaling $82,419,791 ($103,024,739 when including patient or secondary insurer obligations). For these patient months, outlier payments represented 17.2 percent of total Medicare ESRD payments. About 6,540 facilities received at least one outlier payment. Eighteen percent of outlier payments in dollars were received by independent facilities, and another 16 percent were received by facilities that were part of a multi-facility organization other than the three largest chains. As we stated in the CY 2016 ESRD PPS final rule (80 FR 69010), outlier payments are particularly important for small dialysis organizations and independent dialysis facilities because they often lack the volume of patients necessary to offset the high cost of certain patients. The 1.0 percent outlier target is small compared to outlier policies in other Medicare payment systems and was not designed to cover a large number of claims. As indicated in Table 1, we estimate that the percentage of patient months qualifying for outlier payments in CY 2017 will be 6.7 percent for adult patients and 4.6 percent for pediatric patients, based on the 2015 claims data.
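    The relationship between those totals and the 80 percent loss-sharing percentage can be checked with a short sketch (in Python; the figures are those stated above, and the per patient month average is computed only for illustration):

        # Sketch: checking the 2015 outlier payment totals stated above against the
        # 80 percent loss-sharing percentage described earlier in this section.

        medicare_outlier_payments = 82_419_791
        total_with_beneficiary_obligations = 103_024_739
        patient_months = 200_544

        # Medicare's share of the total, reflecting the 80 percent loss-sharing percentage.
        print(round(medicare_outlier_payments / total_with_beneficiary_obligations, 3))

        # Average Medicare outlier payment per qualifying patient month.
        print(round(medicare_outlier_payments / patient_months, 2))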

    As also discussed in the CY 2016 ESRD PPS final rule (80 FR 69010 through 69011), we acknowledged that the 1.0 percent target has not been achieved since 2011, primarily because our annual update of the fixed-dollar loss amounts and MAP amounts could not keep up with the continued decline in the use of outlier services (primarily ESAs). That is, facilities incurred lower costs than anticipated, and those savings accrued to facilities, more than offsetting the extent to which the consequent outlier payments fell short of the 1.0 percent target. In last year's rule we stated that we believed the decline was leveling off, which would make our projections of outlier payments more accurate. Using the most recent data, we found outlier payments to come close to the 1 percent target (at 0.93 percent). Outlier payments may not have reached 1 percent during 2015 primarily due to patterns in ESA utilization. There is evidence in the 2015 claims of increased use of epoetin beta, which may have been used as a lower cost substitute for other ESAs (at a clinically equivalent dose) and contributed to a decrease in the average outlier service MAP amounts for 2015.

    With regard to the suggestion that we annually adjust the withholding based on the running average of expenditures from the prior 3 years, with the total withholding not to exceed 1.0 percent, as we explained above, each year we simulate payments under the ESRD PPS in order to set the outlier fixed-dollar loss and MAP amounts for adult and pediatric patients to try to achieve the 1.0 percent outlier policy. We would not increase the base rate to account for years in which outlier payments were less than 1.0 percent of total ESRD PPS payments and, more importantly, we would not reduce the base rate if outlier payments exceeded 1.0 percent of total ESRD PPS payments. Rather than increasing and decreasing the base rate, we re-estimate the fixed-dollar loss threshold and MAP amounts so that outlier payments in the following year are 1.0 percent of total ESRD PPS payments. This is the approach used in other Medicare payment systems that include an outlier policy, such as the Inpatient Psychiatric Facility PPS. As we have done since 2011, we will continue to monitor outlier payments and assess annually the extent to which adjustments need to be made in the fixed-dollar loss and MAP amounts in order to achieve outlier payments that are 1.0 percent of total ESRD PPS payments.
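
    For context, the per-treatment outlier payment mechanics referenced above can be restated schematically as follows. This is a sketch only; the 80 percent loss-sharing percentage and the structure shown are drawn from prior ESRD PPS rulemaking rather than established in this section:

    \[ \text{Outlier payment per treatment} \approx 0.80 \times \max\bigl(0,\ \text{imputed outlier services MAP} - (\text{predicted outlier services MAP} + \text{fixed-dollar loss amount})\bigr) \]

    Under this structure, recalibrating the fixed-dollar loss and MAP amounts each year moves aggregate outlier payments toward the 1.0 percent target without changing the base rate.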

    Final Rule Action: After consideration of the public comments, we are finalizing the updated outlier thresholds based on CY 2015 data.

    d. Update of the ESRD PPS Base Rate for CY 2017 i. Background

    In the CY 2011 ESRD PPS final rule (75 FR 49071 through 49083), we discussed the development of the ESRD PPS per treatment base rate that is codified in the Medicare regulations at §§ 413.220 and 413.230. The CY 2011 ESRD PPS final rule also provides a detailed discussion of the methodology used to calculate the ESRD PPS base rate and the computation of factors used to adjust the ESRD PPS base rate for projected outlier payments and budget neutrality in accordance with sections 1881(b)(14)(D)(ii) and 1881(b)(14)(A)(ii) of the Act, respectively. Specifically, the ESRD PPS base rate was developed from CY 2007 claims (that is, the lowest per patient utilization year as required by section 1881(b)(14)(A)(ii) of the Act), updated to CY 2011, and represented the average per treatment Medicare Allowable Payment (MAP) for composite rate and separately billable services. In accordance with section 1881(b)(14)(D) of the Act and regulations at § 413.230, the ESRD PPS base rate is adjusted for the patient specific case-mix adjustments, applicable facility adjustments, geographic differences in area wage levels using an area wage index, as well as applicable outlier payments or training payments.

    ii. Payment Rate Update for CY 2017

    The ESRD PPS base rate for CY 2017 is $231.55. This update reflects several factors, described in more detail below.

    Market Basket Increase: Section 1881(b)(14)(F)(i)(I) of the Act provides that, beginning in 2012, the ESRD PPS payment amounts are required to be annually increased by the ESRD market basket percentage increase factor. The latest CY 2017 projection for the ESRDB market basket is 2.1 percent. In CY 2017, this amount must be reduced by 1.25 percentage points as required by section 1881(b)(14)(F)(i)(I) of the Act, as amended by section 217(b)(2)(A) of PAMA, which is calculated as 2.1−1.25 = 0.85 percent. This amount is then reduced by the productivity adjustment described in section 1886(b)(3)(B)(xi)(II) of the Act as required by section 1881(b)(14)(F)(i)(II) of the Act. The final multi-factor productivity adjustment for CY 2017 is 0.3 percent, yielding an update to the base rate of 0.55 percent for CY 2017 (0.85−0.3 = 0.55 percent). Therefore, the ESRD PPS base rate for CY 2017 before application of the wage index budget-neutrality and home dialysis training add-on budget-neutrality adjustment factors would be $231.66 ($230.39 × 1.0055 = $231.66).
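
    Restated symbolically, the CY 2017 payment rate update is the market basket increase, less the reduction required by PAMA, less the multi-factor productivity adjustment:

    \[ (2.1\% - 1.25\%) - 0.3\% = 0.55\% \]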

    Wage Index Budget-Neutrality Adjustment Factor: We compute a wage index budget-neutrality adjustment factor that is applied to the ESRD PPS base rate. For CY 2017, we did not propose any changes to the methodology used to calculate this factor, which is described in detail in the CY 2014 ESRD PPS final rule (78 FR 72174). The CY 2017 wage index budget-neutrality adjustment factor is 0.999781. Therefore, the ESRD PPS base rate for CY 2017 before application of the training budget-neutrality adjustment factor would be $231.61 ($231.66 × 0.999781 = $231.61).

    Home and Self-Dialysis Training Add-on Budget-Neutrality Adjustment Factor: Also, as discussed in section II.B.2.e of this final rule, we are establishing an increase in the home dialysis training add-on in a budget-neutral manner. The home dialysis training add-on budget-neutrality factor ensures that the increase in the training add-on payment adjustment does not affect aggregate Medicare payments. Therefore, we are finalizing a home dialysis training add-on payment adjustment budget-neutrality adjustment factor of 0.999737, which is applied to the CY 2017 ESRD PPS base rate. This application yields a CY 2017 ESRD PPS base rate of $231.55 ($231.61 × 0.999737 = $231.55).

    In summary, the final CY 2017 ESRD PPS base rate is $231.55. This amount reflects a payment rate update of 0.55 percent, the CY 2017 wage index budget-neutrality adjustment factor of 0.999781, and the home dialysis training add-on payment adjustment budget-neutrality adjustment of 0.999737.
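
    Expressed as a single chain of factors (with intermediate amounts rounded to the nearest cent at each step, as shown above), the CY 2017 base rate is:

    \[ \$230.39 \times 1.0055 \times 0.999781 \times 0.999737 \approx \$231.55 \]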

    The comments and our responses to the comments for the base rate proposals are set forth below:

    Comment: Generally, commenters were supportive of the CY 2017 proposed base rate. One commenter contended that CMS should increase the proposed ESRD PPS base rate for CY 2017, positing that, as proposed, the base rate is too low for dialysis facilities—particularly small and medium facilities—working to provide high-quality, patient-centered care to this highly vulnerable adult and pediatric patient population. Another commenter supported CMS' continued labor-related share of 50.673 percent, which recognizes the enhanced role of registered dietitian nutritionists and other providers in improving outcomes and promoting therapy adherence, including dialysis treatments, dietary recommendations, and medication regimens.

    Response: We appreciate the commenters' support of the CY 2017 proposed base rate. We also thank the commenter for its support of the labor-related share and for the perspective that it supports interdisciplinary staff roles in enhancing patient care. With regard to the comment on the base rate being too low for dialysis facilities, as discussed in section II.A.3, the base rate is updated annually by the ESRD bundled market basket. For CY 2017, CMS is mandated by legislation to reduce this increase by two factors. The first factor is the multi-factor productivity adjustment discussed in section II.B.3.d.ii. The second factor is a specified reduction amount determined in section 217(b)(2)(A) of PAMA. For CY 2017, this reduction is 1.25 percentage points. For CY 2018, the reduction will be 1.00 percentage point.

    Final Rule Action: As stated above, the final CY 2017 ESRD PPS base rate is $231.55.

    4. Miscellaneous Comments

    We received many comments from Medicare beneficiaries, family members, ESRD facilities, nurses, physicians, professional organizations, renal organizations, and manufacturers related to issues that were not specifically addressed in the CY 2017 ESRD PPS proposed rule. Some of these comments are discussed below.

    Comment: A pharmaceutical company believes that the transitional drug add-on payment adjustment (TDAPA) should be paid for innovative therapies for at least 2 years so that innovation will not be stifled and ESRD patients will not be denied access to the benefits of improved clinical outcomes. This commenter also states that CMS should revisit and refine the drug designation process finalized in the 2016 ESRD PPS final rule and provide transitional add-on payment for new innovative products that are neither generic nor biosimilar to products already included within the ESRD PPS bundle. Another pharmaceutical company believes that CMS should use the TDAPA to incentivize the development of products that will prevent catheter-related bloodstream infections and clarify the anti-infective functional category to ensure that new drugs qualify for the TDAPA.

    A congressional delegation also submitted a comment regarding the application of the TDAPA for an injectable drug that replaces iron and maintains hemoglobin in dialysis patients. An industry organization, an MDO, and a pharmaceutical company had similar concerns, adding that the benefits of new injectable drugs must be accounted for as an increase in the bundle, and specifically pointed to an injectable calcimimetic that has not received FDA approval to date.

    An LDO and an MDO stressed that the drug designation policy is a critical issue for ESRD providers and urged CMS to confirm and clarify how the drug designation policy will be implemented. These commenters also asked for clarification regarding how payment will be made for oral-only drugs when they are transitioned into the bundle.

    Response: We appreciate and understand how important the implementation of this policy is and have begun developing the administrative guidance for the TDAPA, which will be forthcoming. In the CY 2016 ESRD PPS final rule (80 FR 69023), we explained that we anticipate that there may be new drugs that do not fall within the existing ESRD PPS functional categories and, therefore, are not reflected in the ESRD PPS bundled payment. Where a new injectable or intravenous product is used to treat or manage a condition for which there is not a functional category, we would pay for the new injectable or intravenous product using a transitional drug add-on payment adjustment under the authority of section 1881(b)(14)(D)(iv) of the Act. We proposed that the transitional drug add-on payment adjustment would be based on the ASP pricing methodology and would be paid until we have collected sufficient claims data for rate setting for the new injectable or intravenous product, but not for less than 2 years.

    With regard to the application of the TDAPA for an injectable anemia management drug, the anemia management functional category is one of the drug categories for which we have included dollars in the base rate and that has been updated with the annual ESRD market basket percentage increase factor. As a result, there is no separate transitional drug add-on payment adjustment available for drugs and biologicals that manage an ESRD beneficiary's anemia. As we stated above, the transitional drug add-on payment adjustment is intended to capture those drugs and biologicals that are not reflected in the base rate. We note that drugs and biologicals that are accounted for in the ESRD PPS base rate could qualify as an outlier service when the manufacturer reports the Average Sales Price to CMS.

    Comment: One patient expressed concern that copays for dialysis can be expensive on Medicare Part B, and the commenter would prefer to have a Medicare Advantage plan because of the out-of-pocket maximum. Another patient commented that his facility has told him that they are doing too many blood tests related to his polycystic kidney disease and that he may have to pay for them himself because Medicare will not. This commenter also stated that he believes his treatment is not about patient care but about money, and that his care team does not have compassion toward him.

    Response: We are saddened to hear of these situations that beneficiaries have shared with us. We thank commenters for sharing their experience regarding the dialysis care they receive at their facilities, and we note that when care is less than desirable we encourage beneficiaries to reach out to their ESRD Network or Quality Improvement Organization (QIO) for their state. ESRD Networks were mandated by the Congress and are accountable for, among other things, assuring the effective and efficient administration of benefits, improving quality of care for ESRD patients, collecting data to measure quality of care, providing assistance to ESRD patients and facilities, and evaluating and resolving patient grievances. More information on the ESRD Networks is available on the CMS Web site: https://www.cms.gov/Medicare/End-Stage-Renal-Disease/ESRDNetworkOrganizations/index.html. QIOs are groups of health quality experts, clinicians, and consumers organized to improve the care delivered to people with Medicare. QIOs work under the direction of CMS to assist Medicare providers with quality improvement and to review quality concerns for the protection of beneficiaries and the Medicare Trust Fund. We value each of our beneficiaries and want them to receive the best care experience. We urge any beneficiary who requires assistance or has a grievance to contact the ESRD Networks for help. The ESRD Network can also ensure that beneficiaries receive the care they need for their specific condition. With regard to joining a Medicare Advantage plan, such plans are open to ESRD beneficiaries under specific circumstances: (1) If you're already in a Medicare Advantage Plan when you develop ESRD, you may be able to stay in your plan or join another plan offered by the same company; (2) If you're already getting your health benefits (for example, through an employer health plan) through the same organization that offers the Medicare Advantage Plan, you may be able to join that plan; (3) If you had ESRD, but have had a successful kidney transplant, and you still qualify for Medicare benefits (based on your age or a disability), you can stay in Original Medicare, or join a Medicare Advantage Plan; and (4) You may be able to join a Medicare Special Needs Plan (SNP) for people with ESRD if one is available in your area.

    Comment: An industry organization suggested refinements to the low-volume payment adjustment to address the rare change of ownership instance wherein the new owner accepts the provider agreement but the ownership change results in a new provider number because of provider type classifications. In this example, due to the issuance of a new provider number, this facility would be deemed ineligible for the Low-Volume Payment Adjustment (LVPA).

    Response: We appreciate the commenter bringing this scenario to our attention; we will consider updating our policies and regulations to address this specific instance in the future.

    Comment: A health system recommended that other professional specialties be allowed to bill for their services from the ESRD facility site of service. Because ESRD patients spend hours each week immobile while they receive their treatment, this would be an opportune time for patients to receive care from other specialists (cardiologists, psychiatrists, endocrinologists, vascular surgeons, etc.).

    Response: We appreciate the commenter's suggestion for providing other specialties of care to beneficiaries while they receive dialysis. This is an interesting perspective that would require changes across programs, but it is one we will consider exploring in the future.

    Comment: Several commenters expressed concern that the inaccuracy of the case-mix adjusters causes leakage from the ESRD PPS. Another commenter recommended that case-mix adjusters included in the payment system should be selected based on the policy goal of improving patient access and that some adjusters may work together while others may cancel each other out. The commenter encourages CMS to ensure that the adjusters truly cover the costs of providing care for those patients with more health care needs. Commenters also suggest that CMS eliminate the remaining four comorbid case-mix adjusters for the same reason that bacterial pneumonia and monoclonal gammopathy were removed. Additionally, another commenter suggested that CMS discard the changes made to the age categories in the CY 2016 final rule by returning to the CY 2015 methodology. These same commenters stated that CMS should address the way that the body size (that is, the low body mass index (BMI) and body surface area (BSA)) adjusters cancel each other out and ultimately benefit very few beneficiaries. Another commenter believes that using the age range of 70-79 as the reference age group is inappropriate since facilities would not receive an adjustment for this age range; however, they would receive an adjustment for patients between the ages of 60 and 69. This commenter also had concerns about the rationale for using both a BSA and a BMI adjustment and encourages CMS to adopt a BMI adjustment for overweight and underweight patients that will better account for costs of treatment.

    Finally, another commenter urges CMS to reevaluate and update the pediatric case mix adjuster utilizing the most recent data available. This commenter elaborates that pediatric patients have an increased level of acuity of nursing care when compared to adult dialysis patients, and that these patients often need developmental or behavioral specialists, social workers, or school-based specialists to assist with optimizing school performance, as well as increased assessments from dietitians to adjust formulas and diet for the patient's growth and nutrition requirements. The array of dialysis supplies required by these patients is also broader.

    Response: With regard to the comments regarding the ESRD PPS refinement implemented in CY 2016, as we stated in the CY 2016 ESRD PPS final rule (80 FR 68974), we continue to believe that the CY 2016 updated model aligns with our goals for the prospective payment system in establishing accurate payments and safeguarding access for Medicare beneficiaries. We modeled the ESRD PPS using methodologies that were tested under the Basic Case-Mix Adjusted (BCMA) composite rate payment system and, using the most recently available data, we made our best estimate for predicting the payment variables that best reflect cost variation among ESRD facilities for furnishing renal dialysis services to a vulnerable population of patients. This refinement uses data that illustrates a fully bundled prospective payment system and reflects the practice patterns under such an environment. We continue to believe that it would not be appropriate to both perpetuate certain payment adjusters into the future that were developed using pre-PPS data and update the other adjusters using ESRD claims data and cost reports from 2012 and 2013. We thank the commenters for their views about the pediatric case mix adjustment. We describe in detail how we reevaluated and updated the pediatric case mix adjusters utilizing the most recent data available in the CY 2016 ESRD PPS final rule (80 FR 69001).

    Comment: One commenter expressed support for the ESRD PPS refinement, which was based upon an updated regression analysis and established in the CY 2016 ESRD PPS final rule (80 FR 68973), as well as for the low-volume and rural payment adjustments. This commenter agrees that these adjustments are necessary to ensure beneficiaries' access to services where they may otherwise lack dialysis options. This commenter also urged CMS to ensure that stagnation in the base rate does not negatively impact patient care, specifically with regard to payments to rural ESRD facilities and for facilities that treat pediatric patients. This commenter appreciates CMS' consideration of the potentially disproportionate impact of the ESRD PPS on those facilities. Another commenter stated that CMS should eliminate the rural adjuster and add a second-tier low-volume adjuster for facilities with 4,001-6,000 treatments per year. An industry organization expressed their concern that there is an incentive for facilities to limit access to specific locations in order to meet the requirements for the LVPA.

    Response: We appreciate the support and agree that our diligence with regard to the base rate needs to be ongoing. We appreciate the useful suggestions for refining the LVPA from the commenters. However, significant changes to the eligibility criteria would need to be adopted through notice and comment rulemaking. We believe that the finalized CY 2016 policy changes represent improvement in the targeting of the payment adjustments. We will certainly consider these suggestions for future refinement. We plan to continue to monitor the utilization of renal dialysis services furnished in low-volume and rural facilities.

    Comment: An LDO commented that increasing costs and utilization of certain clinical diagnostic laboratory services have not yet been recognized through a corresponding adjustment to the base rate, which undermines the integrity of the ESRD PPS bundled payment.

    Another LDO urged CMS to repair the underlying methodology of the ESRD PPS, which, based on their analysis, results in millions of dollars intended by CMS for patients' care to leak from the system. The organization stated that returning resources to the ESRD base rate will improve treatment for all Medicare dialysis beneficiaries, including home dialysis patients.

    An industry organization commented that the ESRD PPS has underpaid providers by over $1 billion since 2011 and predicted negative profit margins through 2018. The organization provided the same critique of the ESRD PPS regression methodology that they provided in response to the CY 2016 ESRD PPS proposed rule, reiterating their view that the ESRD PPS refinement regression methodology used by CMS violates the core assumptions for a valid analysis.

    Response: As we stated in the CY 2011 ESRD PPS final rule (75 FR 49054), we included payments for all laboratory tests billed by ESRD facilities and independent laboratories for ESRD patients in calculating the final base rate in order to appropriately account for such tests as renal dialysis services. The ESRD PPS base rate is updated annually (as discussed in section II.B.3. of this final rule) by the ESRD bundled market basket. Therefore, we believe the base rate reflects price increases for laboratory renal dialysis services. With respect to increases in utilization of laboratory renal dialysis services, we continue to monitor utilization of laboratory services under the ESRD PPS and encourage ESRD facilities to report all laboratory services that they furnish. With regard to repairs to the ESRD PPS, we received comments of this nature last year and responded to them in the CY 2016 ESRD PPS final rule. As we stated in the CY 2016 final rule (80 FR 68974), we thoroughly reviewed these comments in consultation with our research team and other internal experts. We examined the outcomes of the current ESRD PPS, specifically looking at access and quality under the PPS, and based on our comprehensive monitoring of health outcomes and access under the ESRD PPS, we believe the current payment model has been successful in allocating payments across facilities and patients while supporting access and quality. While we recognize there can be theoretically optimal approaches to addressing payment model design, the availability of data is often an important factor in the approach ultimately undertaken. This is true with the ESRD PPS and the use of a two-equation model that relies on both claims and cost report data, as other payment systems do under Medicare.

    Comment: One commenter expressed concern about the lack of transparency in the use of data regarding the factors used in calculating payments. Although they appreciate that CMS has made more data available, the commenter stated that there continue to be differences in the calculations between what providers believe is the correct amount to adequately care for ESRD patients and the ESRD PPS base rate. In the commenter's view, the best way to resolve the differences would be through full transparency, by releasing all data and calculations used in development of payment rates and adjusters.

    Response: Transparency is important to us. Therefore, we make the Limited Data Set (LDS) available with each rule. More information is located at: https://www.cms.gov/research-statistics-data-and-systems/files-for-order/limiteddatasets/standardanalyticalfiles.html. We believe the data provided and the availability of technical reports explaining the methodology are sufficient to enable stakeholders to provide meaningful feedback; however, we have asked industry partners to identify specific instances in which the results of the calculations vary from what we have developed so that the CMS contractors can reconcile the variance.

    Comment: Several commenters provided information on the barriers that they believe minimize the growth of home dialysis and gave suggestions on how to increase the utilization of home modalities. Commenters expressed concern about medical staff providing misinformation on home dialysis in an effort to keep new patients coming in-center for treatment rather than choosing home dialysis. They attributed this to poor patient education and improperly incentivized facilities. Other commenters suggested creating payment incentives to encourage home dialysis and stated that whatever needs to be done to encourage people to take their dialysis home should be done, even if that means increasing payments to clinics for training. These commenters suggested that CMS fund wages and salaries for nurses and technicians to provide training, because confusion and misinformation coming from medical professionals scares patients away from home dialysis when those professionals should be doing just the opposite.

    One commenter noted that the U.S. Food and Drug Administration approvals for dialysis machines for home use require that the patient have a care partner who can assist in emergencies. This requirement prevents people who live alone (or whose care partner is temporarily absent) from doing home HD, and may place an undue burden on the family unit. The commenter believes that a dialyzer should be able to choose to perform home HD with or without a care partner, as their training and comfort level dictates. The ESRD facility should discuss with the patient the risks of dialyzing alone, assess the dialyzer's ability to perform his or her own treatments without assistance, and discuss alternate safety precautions available to the patient if the patient chooses to forego having a care partner.

    One LDO expressed concern that some home HD machines are designed in such a way that the patient must dialyze more frequently than three times per week and has found that a significant number of patients “burn out.” That is, they begin therapy on home HD but later decide they cannot effectively manage such a complex task at home and choose to dialyze in-center instead. The LDO's own data indicate that the average year-over-year “burn out” rate for home HD is 42 percent, compared to 24 percent for their PD patients. The primary cause for the drop-off among home HD patients is the burden on the patient's care partner.

    Another commenter suggested that CMS standardize the elements of the training manuals across dialysis machine manufacturers for patients. The commenter noted that they appreciated having a professionally written training manual, which was provided by one manufacturer, and believes that similar manuals would enhance dialyzers' confidence in what they were learning. Another improvement the commenter suggested is to require that training clinic managers be more experienced. The commenter described their experience of having a training clinic that only required 3 months of training experience for their clinical nurse managers. The commenter believes that this amount of training experience does not seem sufficient for them to manage their staff and know how to evaluate and improve their work. The commenter also suggested that CMS implement a requirement for ongoing home dialysis training because, in the commenter's experience, when some training clinics re-write their procedures, the only people that find out about the changes, besides the nurses, are the new patients, and the long-term dialyzers are not informed of things that could make their treatments more efficient or safer. The commenter also suggested an increase in training dollars for clinics, expressing that, in the long run, it is money well spent since the cost of home dialysis is less than the cost of dialyzing in-center.

    Response: The goal of our policy with regard to the treatment of ESRD is for ESRD facilities to provide the most appropriate care available for the beneficiary, whether in home or in-center. With the increased training add-on finalized in this rule, we hope that facilities will encourage home dialysis for beneficiaries who can benefit from it. Not all ESRD facilities are appropriately certified to provide training for home dialysis but we expect that if a beneficiary would like to receive home dialysis, the facility would refer the beneficiary to a home dialysis training facility. We encourage all ESRD facilities to be knowledgeable in all aspects of dialysis in order to educate beneficiaries. We appreciate the comments regarding barriers to home dialysis and will consider them for future policy changes, as appropriate.

    Comment: One commenter stated that although patients often receive pre-dialysis education in group settings, they know of no one who has been trained to perform home HD in a group setting in recent years. The commenter expressed concern that CMS has received comments to the contrary, and wanted to indicate that such instances should be extremely rare in light of the Conditions of Participation and should not affect the calculation of the costs of home HD training.

    Response: We appreciate the commenter's concern about the utilization of group training for home dialysis. As the commenter indicates, we have received many comments to the contrary and with this mixed information from the industry, we find that more analysis needs to take place in order for us to develop an appropriate methodology for computing the home dialysis training add-on based on updated cost report data.

    Comment: We received comments from SDOs, healthcare investment companies, and a nursing facility company indicating the benefits of Skilled Nursing Facility (SNF)/Nursing Facility (NF) patients receiving their home HD in the SNF/NF. They highlight lower readmission rates, decreased lengths of stay, and improved social outcomes when patients receive dialysis in the SNF/NF as opposed to being transported to an ESRD facility. One commenter stated that their patients benefit greatly from staff-assisted, more frequent HD within their SNF.

    Response: We recognize that receiving renal dialysis services in a SNF or NF can be beneficial to the patient. As we stated in the CY 2011 ESRD PPS final rule (75 FR 49057), nursing home patients are regarded as home dialysis patients because they are considered residents of the nursing home and receive dialysis treatments at the nursing homes and not at dialysis facilities. In addition, we note that the Medicare Benefit Policy Manual (Pub 100-02, chapter 11, section 40.D (https://www.cms.gov/Regulations-and-Guidance/Guidance/Manuals/downloads/bp102c11.pdf)) indicates that Medicare ESRD beneficiaries who permanently reside in a nursing home or long-term care facility and who meet the home dialysis requirements set forth under 42 CFR 494.100 are considered home dialysis patients. All home dialysis items and services will be paid under the ESRD PPS and no separate payment will be made to the facility. Also, in section 30.1.C of the Medicare Benefit Policy Manual, we indicated that staff-assisted home dialysis using nurses to assist ESRD beneficiaries is not included in the ESRD PPS and is not a Medicare-covered service. We appreciate the commenters' suggestions for furnishing renal dialysis services in a SNF or NF and will consider them for future rulemaking.

    Comment: One dietitian and nutritionist organization supports the “implementation of the outlier statute” and notes that registered dietitian nutritionists are able to assist in addressing the patient conditions that may increase facility costs when furnishing dialysis care and recommends that CMS make reimbursement available for these services.

    Response: We appreciate the commenter bringing these services to our attention. We agree that dietary needs are very important in the multidisciplinary care for ESRD beneficiaries and will consider these comments for future policy refinement; however, it is unclear what the commenter means by the “implementation of the outlier statute”.

    Comment: One dialysis equipment supplier commented on the Kidney Disease Education benefit and suggested that we allow regional training centers to have management contracts with ESRD facilities to provide the home dialysis training in a centralized location. They also recommended defining a minimally adequate form of modality education as well as a minimally acceptable frequency of administration, and linking this to eligibility for the payment model. In addition, they noted that programs focusing on educational efforts have historically been very effective. Studies of focused, unbiased ESRD modality education, offered in the months prior to dialysis initiation, have demonstrated that nearly one-third of patients begin home dialysis when they have completed a balanced education program. In the field of diabetes, the American Diabetes Association, the Association of Diabetes Educators, and other organizations have developed extensive tools, assessments, and professional standards to deliver the education required by CMS in the provision of Diabetes Self-Management Education. Unfortunately, this success has not generally extended to the education of kidney patients, where the Kidney Disease Education Benefit is historically underutilized and too narrow in scope to meet the needs of patients approaching dialysis. Thus, incident dialysis patient awareness and knowledge of self-management (home dialysis) treatment modalities is highly variable. The commenter believes that, without minimal standards, dialysis modality education will fall victim to provider priority conflicts or short-term economic disincentives. With demonstration of a balanced and effective chronic kidney disease education program as a baseline requirement, and with the percentage target of home dialysis utilization described above, the market will make training better and more consistent, allowing patients to make truly informed decisions and increasing the likelihood that patients choose and remain on a home dialysis therapy option.

    Another commenter noted that home dialysis innovations are limited by the local scale of the provider census and the resultant experience of providers' training programs. In the current ESRD market, home dialysis training is a small percentage of the activity at any single center; therefore, the level of expertise needed to develop certain skills and cost benefits is unattainable for many. As an alternative to the current model, many have identified the need for regional home training centers that service a network of traditional dialysis centers. Yet regional training centers are not the norm because centers do not want to refer patients to other programs for fear of losing the patient and their corresponding revenue. The commenter stated that CMS should strive to eliminate barriers to establishment of regional training centers. For example, modification of ESRD facility certification processes to allow for a CMS-certified management service organization that provides transitional care, home dialysis training, and home dialysis ongoing management under a traditional management services construct could dramatically improve scale, skill, etc. The outsourcing of training and transitional care of incident patients or those moving from one modality to another would allow the “home and transition care” to be done in specialized programs that are contracted by the patients' originating centers. Coordination of care would occur naturally, as training centers could focus exclusively on the best means of providing home training and transitional care, without threatening the interests of patients' originating centers in retaining home patients. Smaller centers, unable to support the requirements of home training services mandated by the Conditions for Coverage, would likely be willing to refer patients for training, without fearing that their patients will be lost to another center. Under this paradigm, patients benefit by getting access to true centers of excellence for home dialysis training and support, physicians benefit by placing the care of their patients in the most expert hands, and providers benefit by having access to therapy services that may otherwise be economically infeasible due to scale, geography, or other limiting factors.

    Response: We appreciate the suggestions with regard to regional training centers and other training delivery models. While these comments are out of scope of this final rule, we will consider them for future rulemaking.

    III. Final Coverage and Payment for Renal Dialysis Services Furnished to Individuals With Acute Kidney Injury (AKI) A. Background

    On June 29, 2015, the Trade Preferences Extension Act of 2015 (TPEA) (Pub. L. 114-27) was enacted. In the TPEA, the Congress amended the Act to include coverage and provide for payment for dialysis furnished by an ESRD facility to an individual with AKI. Specifically, section 808(a) of the TPEA amended section 1861(s)(2)(F) of the Social Security Act (the Act) by including coverage for renal dialysis services furnished on or after January 1, 2017, by a renal dialysis facility or provider of services currently paid under section 1881(b)(14) of the Act to an individual with AKI. In addition, section 808(b) of TPEA amended section 1834 of the Act by adding a new subsection (r). Subsection (r)(1) of section 1834 of the Act provides that in the case of renal dialysis services (as defined in subparagraph (B) of section 1881(b)(14) of the Act) furnished under Part B by a renal dialysis facility or a provider of services paid under such section during a year (beginning with 2017) to an individual with acute kidney injury, the amount of payment under Part B for such services shall be the base rate for renal dialysis services determined for such year under such section, as adjusted by any applicable geographic adjustment applied under subparagraph (D)(iv)(II) of such section, and may be adjusted by the Secretary (on a budget neutral basis for payments under section 1834(r) of the Act) by any other adjustment factor under subparagraph (D) of section 1881(b)(14) of the Act. Section 1834(r)(2) of the Act defines “individual with acute kidney injury” to mean an individual who has acute loss of renal function and does not receive renal dialysis services for which payment is made under section 1881(b)(14) of the Act.

    B. Summary of the Proposed Provisions, Public Comments, and Responses to Comments on the Coverage and Payment for Renal Dialysis Services Furnished to Individuals With Acute Kidney Injury (AKI)

    The proposed rule, titled “End-Stage Renal Disease Prospective Payment System, Coverage and Payment for Renal Dialysis Services Furnished to Individuals with Acute Kidney Injury, End-Stage Renal Disease Quality Incentive Program, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program Bid Surety Bonds, State Licensure and Appeals Process for Breach of Contract Actions, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program and Fee Schedule Adjustments, Access to Care Issues for Durable Medical Equipment; and the Comprehensive End-Stage Renal Disease Care Model” (81 FR 42802 through 42880), was published in the Federal Register on June 30, 2016, with a comment period that ended on August 23, 2016. In that proposed rule, for the Coverage and Payment for Renal Dialysis Services Furnished to Individuals with Acute Kidney Injury (AKI), we proposed several payment policies in order to implement subsection (r) of section 1834 of the Act and the amendments to section 1861(s)(2)(F) of the Act. We received approximately 30 public comments on our proposals, including comments from ESRD facilities; national renal groups, nephrologists and patient organizations; patients and care partners; manufacturers; health care systems; and nurses.

    In this final rule, we provide a summary of each proposed provision, a summary of the public comments received and our responses to them, and the policies we are finalizing for the Coverage and Payment for Renal Dialysis Services Furnished to Individuals with AKI. Comments related to the impact analysis are addressed in the “Economic Analyses” section in this final rule.

    C. Final Payment Policy for Renal Dialysis Services Furnished to Individuals With AKI 1. Definition of “Individual With Acute Kidney Injury”

    Consistent with section 1834(r)(2) of the Act, we proposed to define an individual with AKI as an individual who has acute loss of renal function and does not receive renal dialysis services for which payment is made under section 1881(b)(14) of the Act. Section 1881(b)(14) of the Act contains all of the provisions related to the ESRD PPS. We interpret the reference to section 1881(b)(14) of the Act to mean that we would pay renal dialysis facilities for renal dialysis services furnished to individuals with acute loss of kidney function when the services furnished to those individuals are not payable under section 1881(b)(14) of the Act because the individuals do not have ESRD. We proposed to codify the statutory definition of individual with acute kidney injury at 42 CFR 413.371 and we solicited comments on this definition.

    The comments and our responses to the comments for this proposal are set forth below.

    Comment: Many individual commenters as well as dialysis nursing associations, dialysis industry associations, and a large dialysis organization supported the legislation allowing the coverage of and payment for renal dialysis services furnished to individuals with AKI in an ESRD facility. The commenters believe that it will decrease inpatient hospital lengths of stay and hospital-acquired infections, utilize the resources available in the outpatient setting, and that this access will be paramount to the care of beneficiaries with multiple co-morbidities, frequent procedures or diagnostics, and specialist visits. These commenters also believe that access to these services in ESRD facilities for beneficiaries with AKI is important in the management of patients with delayed graft function post-kidney transplant when patients may need dialysis until the transplant begins to function. One individual commenter expressed gratitude that these policies will assist patients if their kidney disease progresses and they ultimately must make the emotional and clinical transition to maintenance dialysis at the ESRD facility.

    Response: We appreciate the support and agree that these policies, described in detail below, provide individuals with AKI the option to receive dialysis in either the hospital outpatient department or, if able, in their community ESRD facility. We would like to note that this benefit is for beneficiaries who are already Medicare-eligible, have AKI, and need dialysis. Specifically, needing dialysis for AKI does not entitle these individuals to Medicare and is not the same as being certified as having ESRD and initiating life-sustaining maintenance dialysis.

    Comment: Many commenters, including dialysis industry organizations and a health system, support the proposed definition of an individual with AKI. Industry organizations commended CMS for its recognition and acknowledgement of the unique acute medical needs of the AKI population, noting that AKI dialysis patients are, by definition, in a transitory state. The commenters indicated that utilization of renal dialysis services furnished to beneficiaries with AKI may substantially differ from that of patients with ESRD in other ways.

    One industry organization commented that CMS should reaffirm the distinct needs of AKI patients and support the flexibility for physicians to determine the classification, frequency of treatment, and types of services provided to these patients. A dialysis organization stated that the most meaningful definition for an AKI patient would be “a patient needing dialysis who does not require acute inpatient care for whom the nephrologist believes that there is a reasonable chance of kidney function recovery, and for whom the nephrologist therefore declines to sign the form 2728 (the physician's certification that a patient has reached stage 5 chronic kidney disease, or end-stage renal disease)”. A patient advocacy group recommended that CMS convene a technical expert panel of dialysis clinicians, nephrologists, and beneficiary organizations to discuss how AKI patients can have guaranteed access to this new benefit.

    Response: We appreciate the commenters' support of the CMS definition of AKI. We also acknowledge the alternative definitions suggested. We continue to believe that the definition set forth in the statute provides an appropriate way to distinguish an individual with AKI from an individual with ESRD. We believe the broad nature of the definition ensures access to renal dialysis services in an ESRD facility for those beneficiaries that have an acute loss of renal function.

    Final Rule Action: After review and consideration of our proposal, the statute, and the comments, we are finalizing § 413.371 as proposed in the regulation text to define an individual with AKI as an individual who has acute loss of renal function and does not receive renal dialysis services for which payment is made under section 1881(b)(14) of the Act.

    2. The Payment Rate for AKI Dialysis

    Section 1834(r)(1) of the Act, as added by section 808(b) of TPEA, provides that the amount of payment for AKI services shall be the base rate for renal dialysis services determined for a year under section 1881(b)(14) of the Act. We proposed to interpret this provision to mean the ESRD PPS per treatment base rate as set forth in 42 CFR 413.220, which is updated annually by the market basket less the productivity adjustment as set forth in 42 CFR 413.196(d)(1), and adjusted by any other adjustment factor applied to the ESRD PPS base rate. The ESRD PPS per-treatment base rate is established on an annual basis through rulemaking and finalized in the CY ESRD PPS final rule. We recognize that there could be rulemaking years in which legislation or policy decisions could directly impact the ESRD PPS base rate because of changes to ESRD PPS policy that may not relate to the services furnished for AKI dialysis. For example, for CY 2017 we are applying a training add-on budget-neutrality adjustment factor to the otherwise applicable base rate. In those situations, we would still consider the ESRD PPS base rate as the payment rate for AKI dialysis. We believe that the statute was clear in that the payment rate for AKI dialysis shall be the ESRD PPS base rate determined for a year under section 1881(b)(14) of the Act, which we interpret to mean the finalized ESRD PPS base rate and not to be some other determined amount. As described below, ESRD facilities will have the ability to bill Medicare for non-renal dialysis items and services and receive separate payment in addition to the payment rate for AKI dialysis. For example, beneficiaries with AKI may require certain laboratory tests so that their practitioner can gauge organ function and accurately adjust the dialysis prescription that would be optimal for kidney recovery. These beneficiaries would require laboratory tests specific to their condition which would not be included in the ESRD PPS and thus, would be paid for separately. For instance, an individual with AKI might need to be tested for a biochemical indication of a urea cycle defect resulting in hyperammonemia. We proposed to codify the AKI dialysis payment rate in our regulations at 42 CFR 413.372 and solicited comment on this proposal.

    The comments and our responses to the comments for this proposal are set forth below.

    Comment: A health system and an industry group support the proposed payment rate but believe that the AKI payment rate should not include legislative and policy decisions that directly impact ESRD PPS services, but not AKI services.

    Response: We believe that the statute was clear in that we would pay ESRD facilities for renal dialysis services furnished to beneficiaries with AKI in the amount of the ESRD PPS base rate. Specifically, we believe the statute requires us to utilize the wage-adjusted ESRD PPS base rate as the payment rate for AKI. As discussed below, ESRD facilities will receive payment based on Part B fee schedules for other items and services that are not considered to be renal dialysis services. In addition, and also discussed below, there is no weekly limit on the number of treatments that will be paid. We continue to believe that these payment considerations are sufficient for Medicare payment of renal dialysis services furnished to beneficiaries with AKI and as these services evolve we can address any changes in future rulemaking.

    Final Rule Action: Therefore, for CY 2017 and subsequent years, we are finalizing the AKI dialysis payment rate as set forth in § 413.372 as proposed.

    The CY 2017 final ESRD PPS base rate is $231.55. Accordingly, we are finalizing a CY 2017 payment rate for renal dialysis services furnished by ESRD facilities to individuals with AKI as $231.55.

    Comment: An industry organization commented that the ESRD Network fee should not be removed from the AKI payments since Networks focus on ESRD, not AKI.

    Response: Thank you for bringing the ESRD Network fee portion of payment to our attention. We agree with the commenter that section 1834(r) of the Act, as added by section 808(b) of TPEA does not give CMS the authority to reduce the AKI payment rate by the 50 cent network fee. Specifically, section 1881(b)(7) of the Act provides that “[t]he Secretary shall reduce the amount of each composite rate payment under this paragraph for each treatment by 50 cents . . . and provide for payment of such amount to the organizations (designated under subsection (c)(1)(A) of this section) for such organizations' necessary and proper administrative costs incurred in carrying out the responsibilities described in subsection (c)(2) of this section”. This language provides that (1) the reduction can only be taken from the payment provided for in section 1881(b)(7) of the Act—the composite rate—a payment system that was later subsumed by the ESRD PPS and (2) the reduction can only be used for the costs incurred in carrying out the network organization's responsibilities in (c)(2), which pertain to the ESRD population. After consideration of the comment and review of the statutory provision, we will not apply the per treatment reduction of $0.50 to the AKI dialysis payment rate.

    Comment: MedPAC expressed concern regarding the payment rate variance for furnishing outpatient dialysis to AKI beneficiaries in a hospital outpatient department as compared to the ESRD facility and suggested that this variance may cause Medicare and beneficiaries to pay more than necessary. MedPAC suggested that CMS should not pay more in one setting versus another for the same treatment.

    Response: We appreciate MedPAC's comments regarding site-neutral payment; however, section 808(b) of TPEA did not address payments to hospital outpatient departments for dialysis furnished to beneficiaries with AKI.

    3. Geographic Adjustment Factor

    Section 1834(r)(1) of the Act further provides that the amount of payment for AKI dialysis services shall be the base rate for renal dialysis services determined for a year under section 1881(b)(14) of the Act, as adjusted by any applicable geographic adjustment factor applied under section 1881(b)(14)(D)(iv)(II) of the Act. We interpret the reference to “any applicable geographic adjustment factor applied under section (D)(iv)(II)” of such section to mean the geographic adjustment factor that is actually applied to the ESRD PPS base rate for a particular facility. Accordingly, we proposed to apply to the AKI dialysis payment rate the same wage index that is used under the ESRD PPS, which is based on the most recent pre-floor, pre-reclassified hospital wage data collected annually under the inpatient prospective payment system, unadjusted for occupational mix. The ESRD PPS wage index policy was finalized in the CY 2011 ESRD PPS final rule (75 FR 49117) and codified at 42 CFR 413.231. We explained in the CY 2017 ESRD PPS proposed rule (81 FR 42821) that the AKI dialysis payment rate would be adjusted by the wage index for a particular facility in the same way that the ESRD PPS base rate is adjusted by the wage index for that facility. Specifically, we would apply the wage index to the labor-related share of the ESRD PPS base rate that we will utilize for AKI dialysis to compute the wage-adjusted per-treatment AKI dialysis payment rate. We proposed that for CY 2017, the AKI dialysis payment rate would be the CY 2017 ESRD PPS base rate (established in the CY 2017 ESRD PPS final rule), adjusted by the ESRD facility's wage index. In proposed 42 CFR 413.372(a), we refer to the ESRD PPS wage index regulation at 42 CFR 413.231 as an adjustment we will apply to the ESRD PPS base rate.
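
    As a schematic illustration of how the wage adjustment operates, assuming the CY 2017 labor-related share of 50.673 percent noted earlier in this final rule and writing WI for the facility-specific wage index value determined under § 413.231 (this is a sketch of the general form, not a restatement of the regulation text):

    \[ \text{Wage-adjusted AKI payment rate} = \$231.55 \times \bigl(0.50673 \times WI + (1 - 0.50673)\bigr) \]

    A facility with a wage index value of exactly 1.0 would therefore receive the unadjusted $231.55 per treatment.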

    The comments and our responses to the comments for these proposals are set forth below.

    Comment: Several commenters supported the proposal to apply the same wage index that is used under the ESRD PPS to the AKI dialysis payment rate.

    Response: We appreciate the commenters' support.

    Final Rule Action: We are finalizing application of the wage index to the AKI dialysis payment rate and the accompanying regulation at § 413.372(a) as proposed.

    4. Other Adjustments to the AKI Payment Rate

    Section 1834(r)(1) of the Act also provides that the payment rate for AKI dialysis may be adjusted by the Secretary (on a budget neutral basis for payments under section 1834(r) of the Act) by any other adjustment factor under subparagraph (D) of section 1881(b)(14) of the Act. For purposes of payment for AKI dialysis, we did not propose to adjust the AKI payment rate by any other adjustments at this time. Therefore, for at least the first year of implementation of the AKI payment rate, we did not propose to apply any of the optional payment adjustments under subparagraph (D) of section 1881(b)(14) of the Act. We proposed to codify our authority to adjust the AKI payment rate by any of the adjustments under section 1881(b)(14)(D) of the Act in our regulations at 42 CFR 413.373.

    The comments and our responses to the comments for these proposals are set forth below.

    Comment: A large dialysis organization and dialysis industry associations supported CMS' decision not to apply ESRD-based case-mix adjusters to the AKI dialysis payment rate. Another dialysis industry group explained that the ESRD case-mix adjusters were not designed to target the costs involved in treating individuals with AKI.

    A health system disagreed with CMS' proposal of paying the ESRD base rate with no adjustments and expressed that AKI patients cost substantially more to treat than ESRD patients. The commenter suggested that CMS develop an AKI adjuster to be applied to the ESRD PPS base rate. A dialysis industry association suggested that in the future, CMS apply patient- and facility-level adjustments to the AKI dialysis payment rate, similar to how CMS adjusts for ESRD beneficiaries.

    Response: We appreciate the thoughtful comments on the adjustments to the ESRD PPS base rate applicable to the AKI dialysis payment rate, and we will consider the suggestions for future rulemaking. As discussed above, the AKI dialysis payment rate will be the finalized ESRD PPS base rate adjusted by the wage index that is used under the ESRD PPS. We are not adjusting the payment amount by any other factors at this time, but may do so in future years.

    With regard to the higher costs associated with AKI patients as compared to ESRD patients, we are finalizing a policy of paying for all treatments provided to a patient, without applying the monthly treatment limits applicable under the ESRD PPS. We are also finalizing a policy to pay separately for all items and services that are not part of the ESRD PPS base rate. Once we have substantial data related to the AKI population and its associated utilization, we will determine the appropriate steps toward further developing the AKI payment rate.

    Final Rule Action: After consideration of the comments, we are finalizing our authority to adjust the AKI dialysis payment in the regulations text at § 413.373 as proposed.

    Comment: One individual commenter asked CMS to clarify how treatments for patients with AKI would count toward the attestation for the Low-Volume Payment Adjustment (LVPA) and asked if the 4,000 limit should be increased to account for the impact of this new policy.

    Response: Since the implementation of the LVPA, we have indicated that for purposes of determining eligibility for the LVPA (defined in § 413.232(b)), “treatments” mean total hemodialysis equivalent treatments, that is, Medicare and non-Medicare. Since the total treatment count includes all treatments furnished by the ESRD facility regardless of payer, we believe that AKI dialysis treatments also count toward the number of treatments furnished by an ESRD facility and should be reported to the MAC in the facility's attestation for the LVPA. More information regarding the eligibility criteria of the LVPA is available in the Medicare Benefit Policy Manual (Pub. 100-02, chapter 11, section 60.B.1 (https://www.cms.gov/Regulations-and-Guidance/Guidance/Manuals/downloads/bp102c11.pdf)). At this time, we do not believe that the eligibility criteria for the LVPA need to be changed; however, we will monitor utilization of the LVPA for future refinements. Facilities should include AKI dialysis treatments in their counts for purposes of the LVPA.
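
    As a hedged illustration of the counting rule described in this response (the data layout and function below are hypothetical and are not part of the attestation requirements), the LVPA treatment count simply totals all hemodialysis-equivalent treatments furnished by the facility, with AKI dialysis treatments included regardless of payer or benefit:

        # Hypothetical sketch: every treatment counts toward the LVPA total,
        # whether the payer is Medicare or not and whether the benefit is ESRD or AKI.
        def lvpa_treatment_count(treatments):
            return len(treatments)

        facility_year = [
            {"payer": "Medicare", "benefit": "ESRD"},
            {"payer": "Commercial", "benefit": "ESRD"},
            {"payer": "Medicare", "benefit": "AKI"},
        ]
        # Compare the total to the treatment threshold referenced by the commenter.
        print(lvpa_treatment_count(facility_year) < 4000)  # True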

    5. Renal Dialysis Services Included in the AKI Payment Rate

    Section 1834(r)(1) of the Act provides that the AKI payment rate applies to renal dialysis services (as defined in subparagraph (B) of section 1881(b)(14) of the Act) furnished under Part B by a renal dialysis facility or provider of services paid under section 1881(b)(14) of the Act. We proposed that drugs, biologicals, laboratory services, and supplies that are considered to be renal dialysis services under the ESRD PPS as defined in 42 CFR 413.171, would be considered to be renal dialysis services for patients with AKI. As such, no separate payment would be made for renal dialysis drugs, biologicals, laboratory services, and supplies that are included in the ESRD PPS base rate when they are furnished by an ESRD facility to an individual with AKI. We proposed to codify this policy in the regulations at 42 CFR 413.374(a).

    However, we recognize that the utilization of items and services for beneficiaries with AKI receiving dialysis may differ from the utilization of these same services by ESRD beneficiaries. This is because we expect that individuals with AKI will only need dialysis for a finite number of days while they recover from kidney injury, while ESRD beneficiaries require dialysis indefinitely unless they receive a kidney transplant. We recognize that the intent of dialysis for patients with AKI is curative; therefore, we proposed to pay for all hemodialysis treatments furnished to beneficiaries with AKI in a week, even if the number of treatments exceeds the three times-weekly limitation we apply to HD treatments furnished to beneficiaries with ESRD.

    Other items and services furnished to beneficiaries with AKI that are not considered to be renal dialysis services as defined in 42 CFR 413.171, but that are related to their dialysis treatment as a result of their AKI and that an ESRD facility might furnish to a beneficiary with AKI, would be separately payable. In particular, an ESRD facility could seek separate payment for drugs, biologicals, laboratory services, and supplies that ESRD facilities are certified to furnish and that would otherwise be furnished to a beneficiary with AKI in a hospital outpatient setting. Therefore, we proposed to pay for these items and services separately when they are furnished to beneficiaries with AKI receiving dialysis in ESRD facilities. We proposed to codify this policy at 42 CFR 413.374(b).

    The comments and our responses to the comments for these proposals are set forth below.

    Comment: Generally, commenters agreed with the proposal to consider renal dialysis services as defined in § 413.171 to be renal dialysis services for AKI patients. However, some commenters expressed concern that over time the adequacy of the ESRD PPS base rate for such services may be questionable. Specifically, dialysis nursing organizations, an individual, and an LDO commented that it is important for CMS to recognize that AKI patients utilize treatments, drugs, labs, and other services differently than ESRD beneficiaries. For example, AKI patients may require more frequent laboratory services, antibiotic administration, and infection monitoring. The commenters further warned that these patients may be more likely to miss treatments due to recurrent illnesses, hospital-based treatments, or debility. The commenters suggested that CMS work with the dialysis community to determine whether the AKI payment rate should be adjusted in the future to account for more frequent utilization.

    The commenters cautioned CMS that, when analyzing historic utilization, the data may not be representative of the actual prevalence of AKI patients who require dialysis. A dialysis industry association urged CMS to closely track the utilization of bundled items and services furnished to patients receiving AKI dialysis because that utilization could be higher.

    A dialysis industry organization supported CMS' decision not to modify payment until there is more experience with these patients in the ESRD facility setting. Another dialysis industry organization concurred with CMS' intent to monitor separately billable services for appropriate utilization and urged CMS to strike a careful balance between monitoring and recognizing that utilization will be higher. A different dialysis industry organization commented that CMS should reaffirm the distinct needs of AKI patients and support flexibility for physicians to determine AKI versus ESRD classification, frequency of treatment, and the types of services provided.

    Response: We appreciate the comments from stakeholders regarding the utilization of drugs, labs, and other services by patients with AKI. We continue to believe that since the basis of payment is the ESRD PPS base rate, payment for renal dialysis services is accounted for through the per treatment AKI dialysis payment rate. Additionally, as discussed below, other items and services furnished to beneficiaries with AKI are separately payable.

    We acknowledge the commenters' concerns regarding AKI patients' more frequent use of renal dialysis services when compared to ESRD beneficiaries. We encourage the reporting of all items and services furnished to beneficiaries with AKI. We also expect ESRD facilities to continue to report all services that are furnished to ESRD beneficiaries. We plan to monitor the utilization of these items and services to support any necessary changes in future rulemaking.

    With regard to the flexibility for physicians to determine when an AKI patient has regained kidney function, or whether the transition must be made to ESRD treatment, we agree that this is a medical decision that should be supported by lab tests and a dialysis scheduling protocol, including withdrawing dialysis to determine the extent of recovery of renal function. The goal of AKI treatment should be to have the kidneys return to normal functioning.

    Comment: Several commenters, including dialysis industry associations and large dialysis organizations, are supportive of the CMS proposal to pay separately for items and services furnished to beneficiaries with AKI that are not considered to be renal dialysis services as defined in 42 CFR 413.171, but that are related to their dialysis treatment as a result of their AKI and that an ESRD facility might furnish to a beneficiary with AKI.

    Response: We appreciate the support on this issue. We continue to believe, as commenters have explained, that AKI patients have treatment needs and outcomes that may not be the same as those of ESRD patients. We acknowledge that this distinction between the two populations is important and will monitor the utilization of items and services along with health outcomes.

    Final Rule Action: After consideration of public comments, we are finalizing in § 413.374(a) that drugs, biologicals, laboratory services, and supplies that are considered to be renal dialysis services under the ESRD PPS as defined in 42 CFR 413.171, would be considered to be renal dialysis services for patients with AKI. As such, no separate payment would be made for renal dialysis drugs, biologicals, laboratory services, and supplies that are included in the ESRD PPS base rate when they are furnished by an ESRD facility to an individual with AKI. We are also finalizing in § 413.374(b) that other items and services furnished to beneficiaries with AKI that are not considered to be renal dialysis services as defined in 42 CFR 413.171, but that are related to their dialysis treatment as a result of their AKI and that an ESRD facility might furnish to a beneficiary with AKI, would be separately payable.

    D. Applicability of ESRD PPS Policies to AKI Dialysis

    1. Uncompleted Dialysis Treatment

    Generally, we would pay for only one treatment per day across all settings. However, similar to the policy applied under the ESRD PPS for treatments furnished to patients with ESRD, in the interest of fairness and in accordance with Chapter 8, section 10.2 of the Medicare Claims Processing Manual, if a dialysis treatment is started (that is, a patient is connected to the machine and a dialyzer and blood lines are used) but the treatment is not completed for an unforeseen but valid reason (for example, a medical emergency in which the patient must be rushed to an emergency room), both the ESRD facility and the hospital would be paid. We consider this to be a rare occurrence that must be fully documented to the A/B MAC's satisfaction.

    2. Home and Self-Dialysis

    We do not expect that beneficiaries with AKI will receive dialysis in their homes due to the duration of treatment and the unique needs of patients with AKI. Specifically, it is our understanding that these patients require supervision by qualified staff during their dialysis and close monitoring through laboratory tests to ensure that they are receiving the necessary care to improve their condition and discontinue dialysis. Therefore, we did not propose to extend the home dialysis benefit to beneficiaries with AKI.

    3. Vaccines and Their Administration

    Section 1881(b)(14)(B) of the Act specifically excludes vaccines covered under section 1861(s)(10) of the Act from the ESRD PPS. However, ESRD facilities are identified as entities that can bill Medicare for vaccines and their administration. Therefore, we proposed to allow ESRD facilities to furnish vaccines to beneficiaries with AKI and bill Medicare in accordance with the billing requirements in the Medicare Claims Processing Manual (Pub. 100-04, Chapter 18, Preventive and Screening Services, section 10.2, which is located on the CMS Web site: https://www.cms.gov/Regulations-and-Guidance/Guidance/Manuals/downloads/clm104c18.pdf). We solicited comment on the proposal for ESRD facilities to administer vaccines to beneficiaries with AKI.

    The comments and our responses to the comments for these proposals are set forth below.

    Comment: Many commenters, including dialysis nursing organizations, dialysis organizations, and dialysis industry associations applauded CMS for proposing to pay for all treatments provided to AKI patients in a week and suggested that we finalize the policy as proposed. One dialysis physician association and a couple of dialysis organizations requested that CMS clarify that both peritoneal dialysis (PD) and hemodialysis (HD) modalities will be available to these patients and that the beneficiaries should be allowed to complete their PD treatment at home.

    Response: We thank the commenters for their support. We continue to believe and expect to continue to see through monitoring initiatives that individuals with AKI will only need dialysis for a finite number of days while they recover from kidney injury. As we stated above, we recognize that the intent of dialysis for patients with AKI is curative as opposed to long term. Therefore, we are finalizing the policy to provide payment for all hemodialysis treatments furnished to beneficiaries with AKI in a week, even if the number of treatments exceeds the 3 times-weekly limitation we apply to HD treatments furnished to beneficiaries with ESRD.

    With regard to the commenters' concerns regarding modalities, we agree that individuals with AKI should have access, if they are candidates, to other modalities of dialysis while they are in the facility. Therefore, in response to commenters, we will apply our policy of payment for AKI dialysis to both in-center PD and HD. We are finalizing payment for both of these dialysis modalities furnished to individuals with AKI in a week, including peritoneal dialysis when clinically appropriate, when the dialysis is furnished in the ESRD facility. Further discussion regarding home dialysis is below.

    Comment: Many commenters supported the policy proposals regarding uncompleted dialysis treatments and vaccine administration. One dialysis industry organization requested additional clarification in regard to the ESRD policies that do not apply to AKI. Another dialysis industry group encouraged CMS to work with the community to understand the specific treatment needs of this population.

    Response: We thank the commenters for their support regarding our policies on vaccine administration and uncompleted treatments. We are finalizing these policies as proposed.

    With regard to the commenter's suggestion to clarify the ESRD policies that do not apply to AKI, as we discuss below, we anticipate that most of the policies laid out in Chapter 8 of the Medicare Claims Processing Manual will also apply to claims for dialysis furnished to individuals with AKI. In the timeframe available for the implementation of the payment for dialysis furnished to individuals with AKI, we believe that it is prudent to move into CY 2017 with payment policies that ESRD facilities are accustomed to following. As we monitor utilization of renal dialysis services and other items and services that ESRD facilities furnish to individuals with AKI, we plan to engage the dialysis community and determine through rulemaking which policies should continue to apply to this population.

    Comment: One dialysis industry association urged CMS to consider adding renal dialysis services furnished to individuals with AKI to the list of telehealth eligible services.

    Response: Telehealth services are Part B benefits that are outside of the scope of the ESRD PPS and, therefore, outside of the scope of this final rule. We note that telehealth dialysis services are limited to renal dialysis services for home dialysis patients. For more information on telehealth services, we refer readers to the Medicare Claims Processing Manual, Chapter 12, section 190 (https://www.cms.gov/Regulations-and-Guidance/Guidance/Manuals/downloads/clm104c12.pdf). As discussed below, we do not believe at this time that it is appropriate for individuals with AKI to be trained to perform home dialysis. The dialysis industry has repeatedly shared with us that this population of patients is unstable and needs close physician supervision while they receive renal dialysis services. The literature characterizes this population as needing meticulous attention to fluid, acid-base, and electrolyte balance, as well as the removal of uremic toxins (http://www.uptodate.com/contents/use-of-peritoneal-dialysis-for-the-treatment-of-acute-kidney-injury-acute-renal-failure).

    Comment: A dialysis industry association suggested that CMS use data on when dialysis is initiated for individuals with AKI for purposes of determining transplant wait-list priority status and Medicare entitlement for patients who transition from AKI to ESRD. This commenter urged CMS to explicitly include transplant recipients who develop AKI and need dialysis after having a functional allograft in the rules governing delivery of care, reporting, and conditions for coverage for individuals with AKI who are on dialysis, as the commenter believes the restoration of allograft function in transplant recipients receiving AKI dialysis is a critical outcome.

    Response: We appreciate the comments related to individuals receiving AKI dialysis and kidney transplantation, as well as the request for clarification. If an individual has had a kidney transplant and is receiving only temporary dialysis for AKI, then facilities could receive payment for their services under the AKI benefit, provided the beneficiary meets the criteria for being an AKI patient. If, however, the beneficiary is a kidney transplant recipient and is beginning a regular course of dialysis because their ESRD has returned, then the beneficiary would be entitled to the ESRD benefit. Dialysis furnished to kidney transplant recipients would be covered, whether the dialysis is necessary because of AKI or ESRD. With regard to beneficiaries who develop AKI after having a functional allograft and need dialysis, we note that payment would be made for dialysis furnished to these beneficiaries under this policy.

    Comment: An individual commenter believes that CMS should not restrict renal dialysis services furnished to individuals with AKI to the ESRD facility and should allow for home dialysis. The commenter believes that this restriction particularly affects patients with ambulation problems, patients with an immunosuppressed status, and patients who reside in a long term care facility. This comment is in direct contrast to comments received from a patient advocacy organization, a large health system, a dialysis industry association, and dialysis nursing organizations, which agreed with our proposal to limit AKI dialysis to in-center treatments because most AKI patients will not use home dialysis, as the modality takes time to initiate. An LDO suggested that CMS specifically define requirements for patients who reside in a facility that could be designated as a home. A dialysis industry organization requested that CMS reconsider a blanket rejection of home dialysis care, pointing out that PD, initially begun in the facility, could be appropriate in the home and would be particularly helpful to patients for whom transportation is a challenge.

    Response: We appreciate the feedback regarding allowing AKI patients to dialyze at home. This policy decision is one that we will monitor for future changes. Multiple sources in the industry, however, including physicians, patient advocacy groups, and dialysis organizations of all sizes, have communicated to us that this population of patients is unstable. Some commenters stated that patients require close attention while they receive their dialysis, which is why the service was primarily available in the hospital outpatient setting prior to the TPEA amendments. In addition, based on the data we have received, at this time we believe that this population will dialyze primarily in an ESRD facility. Therefore, we are finalizing this policy as proposed. However, as we gather data on the AKI population and the extent of home training necessary to safely self-administer PD in the home, we may consider the use of PD in the home for the AKI patient in the future, as we may find that there are subsets of patients whose injury may lend itself, after an initial treatment period, to PD in the home (http://www.uptodate.com/contents/use-of-peritoneal-dialysis-for-the-treatment-of-acute-kidney-injury-acute-renal-failure).

    Final Rule Action: We will keep this option as one to consider in the future.

    E. Monitoring of Beneficiaries With AKI Receiving Dialysis in ESRD Facilities

    Because we are aware of the unique acute medical needs of the AKI population, we plan to closely monitor utilization of dialysis and all separately billable items and services furnished to individuals with AKI by ESRD facilities. For example, stakeholders have stated that beneficiaries with AKI will require frequent labs to monitor renal function or they will be at risk for developing chronic renal failure. Another recurrent concern is the flexibility necessary in providing dialysis sessions to beneficiaries with AKI. Stakeholders have told us that these patients may need frequent dialysis, but will also require days with no dialysis to test for kidney recovery. Consequently, we will closely monitor utilization of dialysis treatments and the drugs, labs and services provided to these beneficiaries.

    We met with both physician and provider associations with regard to the care of patients with AKI. Both have expressed concerns that physician oversight will be limited for these beneficiaries, based on current operational models used by ESRD facilities. They encouraged CMS to support close monitoring of this patient population, particularly with regard to lab values, in the interest of preventing these patients from becoming ESRD patients. A close patient-physician relationship is critical for the successful outcome of the AKI patient.

    The comments and our responses to the comments for this approach are set forth below.

    Comment: An LDO and dialysis industry associations encouraged CMS to consult with stakeholders regarding monitoring of these patients and to be transparent regarding AKI utilization data collected for payment and delivery of AKI services. Another dialysis industry association appreciated that CMS recognizes the importance of monitoring and suggested that a monitoring add-on payment is appropriate. A third dialysis industry association commented that nephrologists and other dialysis caregivers should implement active measures to promote and monitor renal recovery.

    Response: We appreciate the support on this issue. We will be developing formal monitoring programs for utilization to inform future payment policy. When we refer to monitoring, we are referring to data review based on claims data, not physician monitoring. Physician oversight for these beneficiaries would be included in the AKI dialysis payment rate or payable through the appropriate fee-for-service benefit, if not a renal dialysis service. We will develop public use files for the utilization of these services, but do not anticipate that these data will be available for at least 1 year. If stakeholders have data, we would welcome receiving it.

    F. AKI and the ESRD Conditions for Coverage

    The ESRD Conditions for Coverage (CfCs) at 42 CFR part 494 are health and safety standards that all Medicare-participating dialysis facilities must meet. These standards set baseline requirements for patient safety, infection control, care planning, staff qualifications, record keeping, and other matters to ensure that all ESRD patients receive safe and appropriate care. We proposed a technical change to 42 CFR 494.1(a), statutory basis, to incorporate the changes to ESRD facilities and treatment of AKI in the Act as enacted by section 808 of the Trade Preferences Extension Act of 2015 (Pub. L. 114-27, June 29, 2015) (TPEA), and we are finalizing this change as proposed.

    We did not propose changes to the CfCs specific to AKI, but did request comment from the dialysis community as to whether revisions to the CfCs might be appropriate for addressing treatment of AKI in ESRD facilities. We received 11 timely comments addressing this issue and thank the commenters for their input. While we are not formally responding to the comments at this time, the comments are summarized (with some clarification on our part), below.

    All commenters agreed that we do not need to revise the ESRD CfCs to address AKI at this time. About half of the commenters recommended that we not revise the CfCs to directly address AKI at all, while the remaining commenters suggested we consider revisions to requirements addressing the comprehensive patient assessment, care planning, modality options, and transplantation. A few commenters recommended that we not revise the ESRD CfCs to address AKI because AKI and ESRD are different diseases. We understand the reasoning behind this statement but wish to clarify that the ESRD CfCs apply to ESRD facilities, not to ESRD patients, and note that the ESRD CfCs would be the appropriate regulatory location for standards addressing care provided to AKI patients in ESRD facilities.

    We thank the commenters, and will consider their comments for future rulemaking and regulatory guidance.

    G. ESRD Facility Billing for AKI Dialysis

    For payment purposes, claims for beneficiaries with AKI would be identified through a specific condition code, an AKI diagnosis, an appropriate revenue code, and an appropriate Current Procedural Terminology code. These billing requirements would serve to verify that a patient has AKI and to differentiate claims for AKI from claims for patients with ESRD. ESRD facilities are expected to report all items and services furnished to individuals with AKI and to include comorbidity diagnoses on their claims for monitoring purposes. We anticipate that, with exceptions for separately billable items and services, most of the claims policies laid out in Chapter 8 of the Medicare Claims Processing Manual will also apply to claims for dialysis furnished to AKI beneficiaries. All billing requirements will be implemented and communicated through sub-regulatory guidance.
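
    For illustration only, a claims edit consistent with the identification logic described above might require all four elements before treating a claim as AKI dialysis. The code values shown are placeholders; the actual condition, revenue, diagnosis, and procedure codes are specified in sub-regulatory guidance, not in this sketch.

        # Hypothetical sketch of the AKI claim-identification checks described above.
        AKI_CONDITION_CODE = "XX"        # placeholder; actual code set in guidance
        AKI_REVENUE_CODES = {"08XX"}     # placeholder revenue code(s)

        def is_aki_dialysis_claim(claim):
            """A claim is treated as AKI dialysis only if it carries the AKI
            condition code, an AKI diagnosis, an appropriate revenue code,
            and an appropriate procedure code."""
            return (
                AKI_CONDITION_CODE in claim.get("condition_codes", [])
                and claim.get("has_aki_diagnosis", False)
                and claim.get("revenue_code") in AKI_REVENUE_CODES
                and claim.get("procedure_code") is not None
            )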

    The comments and our responses to the comments for these proposals are set forth below.

    Comment: Industry organizations, an LDO, and an MDO made claims processing and cost report modification suggestions. Another industry organization commented that reimbursement policy should be clearly and unequivocally conveyed to all MACs. Another industry organization agreed with the creation of a specific payment code and corresponding Current Procedural Terminology code to distinguish AKI patients from ESRD patients. Another industry organization made suggestions for modifications to the cost report. Yet another industry organization suggested that CMS develop an intake form, a treatment form, and a recovery form with data elements specific to AKI.

    Response: We appreciate the thorough and thoughtful responses provided in regard to claims processing and cost report changes. We have completed a similar analysis, and administrative guidance will be forthcoming. The usage of other forms will be considered for future updates as well.

    H. Announcement of AKI Payment Rate in Future Years

    In future years, we anticipate announcing the AKI payment rate in the annual ESRD PPS rule or in a Federal Register notice. We will adopt through notice and comment rulemaking any changes to our methodology for payment for AKI as well as any adjustments to the AKI payment rate other than the wage index. When we are not making methodological changes or adjusting (as opposed to updating) the payment rate, however, we will announce the update to the rate rather than subjecting it to public comment every year. We proposed to announce the annual AKI payment rate in a notice published in the Federal Register or, alternatively, in the annual ESRD PPS rulemaking, and provide for that announcement at proposed 42 CFR 413.375. We welcomed comments on announcing the AKI payment rate for future years.

    The comments and our responses to the comments for this proposal are set forth below.

    Comment: We received several comments from industry organizations encouraging CMS to allow for notice and comment rulemaking when updating the AKI payment rate.

    Response: Because we believe we are required under section 1834(r) of the Act to utilize the ESRD PPS base rate as adjusted by the wage index, we do not believe it is necessary to adopt that rate through notice and comment rulemaking, as we do not believe we have discretion to adopt a different amount, except to the extent that we apply other payment adjustments to that amount. As noted above, any methodology changes or payment adjustments that are applied to the AKI dialysis payment rate will be adopted through notice and comment rulemaking.

    Final Rule Action: We are finalizing the announcement of the AKI payment rate as proposed and revising the regulations text at § 413.375 to reflect this proposal.

    IV. End-Stage Renal Disease (ESRD) Quality Incentive Program (QIP)

    A. Background

    Section 1881(h) of the Act requires the Secretary to establish an End-Stage Renal Disease (ESRD) Quality Incentive Program (QIP) by (1) selecting measures; (2) establishing the performance standards that apply to the individual measures; (3) specifying a performance period with respect to a year; (4) developing a methodology for assessing the total performance of each facility based on the performance standards with respect to the measures for a performance period; and (5) applying an appropriate payment reduction to facilities that do not meet or exceed the established Total Performance Score (TPS). This final rule discusses each of these elements and our policies for their application to the ESRD QIP.

    B. Summary of the Proposed Provisions, Public Comments, and Responses to Comments on the End-Stage Renal Disease (ESRD) Quality Incentive Program (QIP)

    The proposed rule, titled “End-Stage Renal Disease Prospective Payment System, Coverage and Payment for Renal Dialysis Services Furnished to Individuals with Acute Kidney Injury, End-Stage Renal Disease Quality Incentive Program, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program Bid Surety Bonds, State Licensure and Appeals Process for Breach of Contract Actions, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program and Fee Schedule Adjustments, Access to Care Issues for Durable Medical Equipment; and the Comprehensive End-Stage Renal Disease Care Model” (81 FR 42802 through 42880), was published in the Federal Register on June 30, 2016, with a comment period that ended on August 23, 2016.

    In that proposed rule, we proposed updates to the ESRD QIP, including updates for the PY 2018 through PY 2020 programs. We received approximately 50 public comments on our proposals related to the ESRD QIP, including comments from large dialysis organizations, ESRD facilities, national renal groups, nephrologists, patient organizations, patients and care partners, manufacturers, health care systems, nurses, and other stakeholders.

    In this final rule, we provide a summary of each proposed provision, a summary of the public comments received and our responses to them, and the policies we are finalizing for the ESRD QIP. Comments related to the paperwork burden are addressed in the “Collection of Information Requirements” section in this final rule. Comments related to the impact analysis are addressed in the “Economic Analyses” section in this final rule.

    We received comments about general policies and principles of the ESRD QIP. The comments and our responses are set forth below.

    Comment: Many commenters expressed concern about CMS' continued reliance on process measures and recommended that CMS seek to use risk-adjusted outcome measures that capture the effective management of dialysis patients. Commenters stressed that CMS should strive to adopt evidence-based measures that promote the delivery of high-quality care and improved patient outcomes. Commenters also stressed the importance of working with stakeholders in the nursing community when developing and implementing measures because nephrology nurses in particular are integral to the collection and processing of quality improvement data and it is vitally important to represent their perspective during the measure development and implementation process.

    Many commenters raised particular concerns about the lack of measures in the QIP that adequately address the needs of the pediatric population or of home hemodialysis patients. They argued that the current measurement criteria do not take their unique needs into consideration. Commenters asked CMS to ensure that the reporting structure is viable for all providers, whether they service patients in-center or at home. Many of the smaller facilities enter data manually into CROWNWeb, and commenters argued that given the current structure of the QIP, many pediatric facilities in particular are unable to participate. They recommended that CMS focus its attention on aligning quality metrics and value-based programs with the goal of achieving a high quality of care for pediatric patients. One commenter argued that it is counter-productive to subject providers who care for unique populations to penalties for not achieving results which are unrealistic in their populations.

    Response: We appreciate the commenters' commitment to the adoption of evidence-based measures that address high-quality care and improved patient outcomes. We share this commitment, which is why we have made an effort to incorporate measures that address patient experiences of care, readmissions and hospitalizations, and bloodstream infections. We hope to continue this trend in the future. We are cognizant of the issues around adequately assessing the quality of care provided for pediatric and home hemodialysis patients, and we continue to investigate options to more effectively incorporate measures relevant to those patient populations. We continue to believe that the existing data sources used to capture data for calculating ESRD QIP measures (that is, CROWNWeb and NHSN) are viable for facilities that provide home as well as in-center hemodialysis, because they utilize web-based applications that can be accessed with a personal computer. Facilities providing home dialysis should also not experience any undue burden using claims to report clinical data if they are also able to submit claims for reimbursement.

    Comment: One commenter questioned why CMS believed it was necessary to develop Dialysis Facility Compare in addition to the QIP, because the commenter believes having two quality systems may lead to confusion for beneficiaries and their families. The commenter recommended that CMS align measurement methodologies and reporting requirements across CMS ESRD quality programs or, in the alternative, move toward using one quality measurement system that could be based on a reasonable number of outcomes-based performance measures as this would reduce administrative costs and confusion.

    Response: The ESRD QIP and Dialysis Facility Compare program have different purposes, which in certain cases necessitates divergent measure specifications and scoring methodologies. However, we continuously review measure specifications and scoring methodologies across the programs and will continue to create alignments where appropriate. The recently developed ESRD Measures Manual may help ease some of the confusion for facilities because it provides a comprehensive list of detailed measure specifications. The ESRD Measures Manual can be found here: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/ESRDQIP/Downloads/CMS-ESRD-Measures-Manual-Final-v1_0.pdf.

    Comment: One commenter sought additional information about whether any data collected under the ESRD QIP measure set shows the impact of these measures on patient outcomes or Medicare spending on patients with ESRD.

    Response: We thank the commenter for their question. Unfortunately, with so many interdependent factors influencing the quality of care provided at dialysis facilities (for example, payment policies in the prospective payment system, FDA labeling policies, and independent advancement in the treatment of ESRD), it is difficult to disentangle the impact of ESRD QIP policies from other policies and developments in the field. CMS is actively monitoring the impact of ESRD QIP measures on the quality of care received by patients with ESRD, and has yet to identify any unintended consequences caused by policies or measures implemented by the program. In the future, as more studies are conducted and results become available, we will consider releasing these types of monitoring studies for review by the community.

    One objective measure we can examine is the improvement of performance standards over time. Table 2 below shows that as the ESRD QIP has refined its measure set and as facilities have gained experience with the measures included in the program, performance standards have generally continued to rise. We view this as evidence that facility performance is objectively improving. It remains difficult to disentangle these results from the impact of the ESRD QIP policies or those of other policies and developments in the field, but they show a steady rise in the quality of care received by patients with ESRD.

    Table 2—Improvement of Performance Standards Over Time

    Measure                              PY 2015   PY 2016   PY 2017   PY 2018   PY 2019
    Hemoglobin > 12 g/dL                 1%        0%
    Vascular Access Type:
      % Fistula                          60%       62.3%     64.46%    53.51%    53.72%
      % Catheter                         13%       10.6%     9.92%     16.79%    17.06%
    Kt/V:
      Adult Hemodialysis                 93%       93.4%     96.89%    91.08%
      Adult Peritoneal Dialysis          84%       85.7%     87.10%    75.42%
      Pediatric Hemodialysis             93%       93%       94.44%    84.16%
      Pediatric Peritoneal Dialysis                                    43.22%
    Hypercalcemia                                  1.7%      1.30%     3.92%     4.21%
    NHSN Bloodstream Infection SIR                                     1.812     1.812
    Standardized Readmission Ratio                           0.996     0.996     1.276
    Standardized Transfusion Ratio                                     1.470     1.470

    Comment: One commenter expressed concerns that if the ESRD QIP continues to take payment reductions from facilities, some facilities may be forced to close. They added that accountability for the outcomes facilities can influence is appropriate but it is important that CMS not become overzealous in its implementation of new measures.

    Response: Section 1881(h) of the Act requires that we implement the ESRD QIP each year. We have carefully constructed policies related to each of the requirements specified in section 1881(h) of the Act. Our policies related to payment reductions for the ESRD QIP have been constructed to ensure that the application of the scoring methodology results in an appropriate distribution of payment reductions across facilities, such that facilities achieving the lowest TPSs receive the largest payment reductions. The largest payment reduction the ESRD QIP applies is 2 percent of a facility's total payment for the year. Additionally, we finalized a Small Facility Adjuster which ensures that small facilities are not adversely impacted by their small number of patients or by any outlier patients who may skew their scores on quality measures included in the program. We believe the ESRD QIP's scoring methodology, combined with payment reductions, is the best way to ensure that facilities are held accountable for the care that they provide and are only penalized when the care they furnish to their beneficiaries does not meet a certain threshold. For the PY 2020 ESRD QIP, a facility will not receive a payment reduction if it achieves a minimum TPS that is equal to or greater than the total of the points it would have received if it performed at the performance standard for each clinical measure and it received the number of points for each reporting measure that corresponds to the 50th percentile of facility performance on each of the PY 2018 reporting measures.
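
    As a hedged sketch of the minimum TPS rule described in the preceding paragraph (the measure names, weights, and point values below are hypothetical, assuming measure scores on a 0-10 scale and a TPS on a 0-100 scale), the minimum TPS is the score a facility would earn if each clinical measure were scored at its performance standard and each reporting measure at the 50th percentile of reporting-measure performance:

        # Hypothetical sketch of the PY 2020 minimum TPS comparison described above.
        def total_performance_score(points_by_measure, weights_by_measure):
            """Weighted average of 0-10 measure scores, scaled to a 0-100 TPS.
            Weights are fractions that sum to 1; all inputs are illustrative."""
            return 10 * sum(
                weights_by_measure[m] * points_by_measure[m] for m in weights_by_measure
            )

        # Points a facility would receive at the performance standard (clinical
        # measures) or at the 50th percentile (reporting measures): placeholders.
        points_at_thresholds = {"clinical_a": 5, "clinical_b": 6, "reporting_c": 7}
        weights = {"clinical_a": 0.4, "clinical_b": 0.4, "reporting_c": 0.2}

        minimum_tps = total_performance_score(points_at_thresholds, weights)
        facility_tps = 68
        print(facility_tps >= minimum_tps)  # True: no payment reduction applies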

    Regarding the commenter's concern that facilities may be forced to close based upon the ESRD QIP's payment reductions, we have reviewed data on facility closures from 2008 through 2013, and we have seen a general decrease in the number of facilities that have closed, from 80 in 2010 to 56 in 2013. We recognize that the absolute number rose slightly, from 45 in 2012 to 56 in 2013. However, these numbers must be looked at in context. As a percentage of the total number of dialysis facilities nationwide, the number of facilities closing each year is not significant. Additionally, facility closures cannot be definitively attributed to any single factor. The ESRD QIP policies may play a small role in these numbers, but many other factors, both within and outside of healthcare, have an impact. Table 3 below shows the number of facilities that closed from 2008 through 2013.

    Table 3—ESRD Facility Closures, 2008 Through 2013

                         2008    2009    2010    2011    2012    2013
    Closed facilities      50      82      80      72      45      56

    Comment: Several commenters expressed concerns about the number of measures included in the QIP and about the addition of more measures, and argued that too many measures dilute the impact of quality programs. One commenter suggested that with the current measure set, patients are no longer being held responsible for their own care and urged CMS to consider more measures that assess patient compliance with treatment and medication. Another recommended that CMS look into developing a system to allocate Medicare benefits for patients depending on their responsibility in their medical treatment and care. One commenter argued that this dilution of measure impact is evidenced by a close examination of the measure weights CMS proposed for PY 2020. Specifically, the small percentage assigned to each measure means that critical measures, such as reducing catheter use, are weighted in a similar manner to measures of less importance, such as the hypercalcemia clinical measure, which is “topped out” under the criteria previously finalized by the ESRD QIP. Commenters encouraged CMS to refrain from continuing to develop more measures and instead to work on finding a small set of measures to use in the program on an ongoing basis. One commenter encouraged CMS to pause its measure-development efforts in favor of working with the entire kidney care community (as opposed to a small group of TEP members) in order to identify a small set of core measures that matter. Commenters recommended that new measures be limited to evidence-based outcomes measures that promote the delivery of high-quality care and improved patient outcomes, and that they should be the most impactful measures. One commenter also stressed that CMS should consider which measures might be ready to be retired from the program, pointing out that critically important measures, such as the VAT: Catheter measure, are competing for percentage points with other measures that have less clinical significance to patients. This work would likely require addressing some of the underlying problems with existing measures. For example, one commenter urged CMS to focus on developing a new bone mineral metabolism measure before pursuing other measure development to make sure the statutory requirement in PAMA is met.

    In developing this core set of measures, commenters urged CMS to adopt a set of minimum global exclusions that would be automatically applied to all measures. Specifically, they recommended the following exclusions: (1) Beneficiaries who die within the applicable month; (2) Beneficiaries who receive fewer than 7 treatments in a month; (3) Beneficiaries receiving home dialysis therapy who miss their in-center appointments when there is a documented good faith effort to have them participate in such a visit during the applicable month; (4) Transient dialysis patients; (5) Pediatric patients (unless the measure is specific to pediatric patients); and (6) Kidney transplant recipients with a functioning graft. Additionally, the commenter asked that CMS clarify that beneficiaries must have treatment for at least 60 days to be assigned to a facility. One commenter added that CMS should particularly consider the needs of small facilities, pediatric patients, and patients who have received a transplant when developing exclusions that would apply across the board.

    Response: We understand that there are a number of measures we proposed to be added to the ESRD QIP for PY 2019 and PY 2020. Although we recognize that adopting more measures in the ESRD QIP increases costs to facilities as well as CMS, we believe these increased costs are outweighed by the benefits to patients of incentivizing quality care in the domains that the measures cover. We are constantly re-examining the measures that are included in the program to ensure that they are capturing a variety of information about the care that patients receive, and we carefully consider whether measures should be retired from the program. In an effort to ensure that the impact of the program is not diluted and that each measure receives an appropriate weight, we are finalizing changes to the weighting of measures and of the measure domains for both PY 2019 and PY 2020. We believe the weights we are finalizing will preserve the program's strong incentives for facilities to achieve high scores on the clinical measures and to fully and accurately report data for the reporting measures. In future years of the program, we will consider the feasibility of including measures that assess patient compliance with treatment and medication.

    As we stated in the CY 2015 ESRD PPS Final Rule (79 FR 66164), we considered applying these six global exclusion criteria in response to comments on the CY 2014 ESRD PPS proposed rule (78 FR 72192). We agree with commenters that exclusion criteria for the ESRD QIP measures should be consistent, where feasible. We further believe, however, that exclusions also need to take into account the population to which a measure applies and the settings for which the measures were developed (for example, in-center hemodialysis as opposed to home hemodialysis). As stated in previous rules, we will continue to look for ways to align exclusion criteria for measures in the ESRD QIP, as long as there is evidence to support such consistency.

    Comment: One commenter made several recommendations regarding the preview period and the Performance Score Report (PSR) provided to facilities as part of the preview period. First, the commenter recommended that CMS consider lengthening the preview period from 30 to 60 days, because smaller facilities find it difficult to review their scores in detail, research patients and labs, write up comments and questions, and submit formal inquiries within 30 days. Second, the commenter requested that the PSR be updated to include the number of eligible patients and patient-months for each measure and for each facility rather than just the number of patient-months. Third, the commenter requested that CMS consider including new measures in the PSR the first year a measure is included in the QIP without counting the scores toward a facility's TPS, so that facilities may see how they would be scored and how they would rate while being given time to work on internal improvement before the new measure is officially finalized. The commenter also noted that this would give facilities time to prepare and update necessary billing system changes, policies and procedures, and record-keeping/patient forms. Fourth, the commenter requested that CMS release summary statistics each year about the Preview Period—specifically, how many formal inquiries are received, how many are received from each dialysis organization, how many are overturned, how many result in score changes, and how many systemic changes are approved. Finally, the commenter requested that the PSR be updated to include actual numerical percentages rather than “requisite percentages,” because this would avoid many questions and would help personnel understand how measures are scored.

    Response: We thank the commenter for their suggestions on ways to improve the Preview Period experience for facilities as well as ways to ensure that the PSR provides as much helpful information to facilities as possible. We will consider the feasibility of implementing some of these recommendations in future years of the program.

    Comment: One commenter questioned why CMS must make so many changes each year to the ESRD QIP—specifically, why new measures must be added, why the scoring methodology is changed, why new exclusion and eligibility criteria are added each year, etc., and argued that these changes are overly demanding and burdensome for facilities.

    Response: As new policies are implemented and new measures are added to the program, we are continually evaluating the program to ensure that we are capturing a broad range of information about the care that dialysis facilities are providing to patients and to ensure that our policies are in line with the goals we are seeking to achieve. As measures undergo maintenance and are evaluated by measures developers and by the NQF, new exclusion and eligibility criteria are added to ensure that each measure is specified appropriately to include only those patients who should be included in the measure's numerator and denominator. As these changes are incorporated into the program, other changes must follow, but we seek to provide facilities with as much notice as possible through rulemaking and other means of communication so that they are given appropriate time to make necessary changes within their own programs and policies.

    Comment: One commenter asked whether CMS will allow Calcium, Phosphorus, and Kt/V to be obtained from outside sources in the way hemoglobin (Hgb) is able to be collected from outside sources.

    Response: In response to the commenter's question, Calcium, Phosphorus, and Kt/V can all be obtained from outside sources in the same way that Hgb can be collected from outside sources. In fact, in the CY 2013 ESRD PPS Final Rule (77 FR 67473), we finalized that if a patient is hospitalized or transient during a claim month, the facility could monitor the serum calcium and serum phosphorus readings for that patient for the month if the patient has labs drawn by another provider/facility, those labs are evaluated by an accredited laboratory (a laboratory that is accredited by, for example, the Joint Commission, the College of American Pathologists, the AAB (American Association of Bioanalysts), or a State or Federal agency), and the dialysis facility reviews the serum calcium and serum phosphorus readings. The Kt/V can also be obtained from outside sources in the same way, provided those same conditions are met.

    C. Requirements for the Payment Year (PY) 2018 ESRD QIP

    1. Small Facility Adjuster (SFA) Policy for PY 2018

    In the CY 2016 ESRD PPS Final Rule, we revised the calculation of the Small Facility Adjuster (SFA) (80 FR 69039). In this rule, we proposed to correct our description of the SFA for payment year (PY) 2017 and future years. Our original proposal pegged the SFA to the national mean, such that small facilities scoring below the national mean would receive an adjustment, but small facilities scoring above the national mean would not. Several commenters supported the overall objectives of the proposed SFA modification but were concerned that too few facilities would receive an adjustment under our proposed methodology. They recommended that rather than pegging the SFA to the national mean, we peg the SFA to the benchmark, which is the 90th percentile of national facility performance on a measure, such that facilities scoring below the benchmark would receive an adjustment, but those scoring above the benchmark would not. In the process of updating the finalized policy to reflect public comment, we inadvertently neglected to update this sentence from our statement of finalized policy: "For the standardized ratio measures, such as the Standardized Readmission Ratio (SRR) and Standardized Transfusion Ratio (STrR) clinical measures, the national mean measure rate (that is, P) is set to 1." (80 FR 69039).

    Setting the ratio measures at the national mean in the SFA equation would have been inconsistent with our desired policy position and would have been unresponsive to the commenters' point. It was also inconsistent with another part of our statement on the finalized SFA methodology and was more punitive for facilities because it did not provide an adjustment for a number of small facilities that may have been adversely affected by a small number of outlier patients. Therefore, in this year's rulemaking we proposed to correct the description of the SFA methodology such that, for the standardized ratio measures such as the SRR and STrR clinical measures, P is set to the benchmark, which is the 90th percentile of national facility performance.
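
    To illustrate only the corrected value of P for the standardized ratio measures (a minimal sketch; the remainder of the SFA equation is not reproduced here, and the facility rates shown are hypothetical), P is set to the 90th percentile of national facility performance rather than to the national mean. For ratio measures on which a lower value indicates better performance, the benchmark corresponds to the better-performing tail of the distribution; the sketch below shows only the percentile computation itself.

        import statistics

        # Hypothetical national distribution of facility measure rates.
        rates = [0.6, 0.8, 0.9, 1.0, 1.0, 1.1, 1.2, 1.3, 1.5, 1.8]

        # statistics.quantiles with n=10 returns the nine deciles; index 8 is
        # the 90th percentile cut point used here as the benchmark P.
        benchmark_p = statistics.quantiles(rates, n=10)[8]
        national_mean = statistics.mean(rates)  # the value referenced in the uncorrected description

        print(round(national_mean, 3), round(benchmark_p, 3))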

    We sought comments on this proposal. The comments and our responses to comments are set forth below.

    Comment: One commenter expressed concerns about the SFA, arguing that the inclusion of very small sample sizes leads to many facilities' scores being driven more by luck than by actual performance, and stressed that this effect is particularly exacerbated for the standardized ratio measures.

    Response: We thank the commenter for their concern regarding the SFA. We want to clarify that this adjuster provides a positive adjustment to eligible small facilities' measure scores which we believe is sufficient to counteract the negative effects of a small patient census on facility scores.

    Final Rule Action: We are finalizing our proposal to correct the description of the SFA methodology such that, for the standardized ratio measures such as the SRR and STrR clinical measures, P is set to the benchmark, which is the 90th percentile of national facility performance. The purpose of this policy change is to ensure that small facilities are not adversely impacted by outlier patients and that facilities are being fairly scored on their actual performance regardless of their size.

    2. Changes to the Hypercalcemia Clinical Measure

    During the measure maintenance process at National Quality Forum (NQF), two substantive changes were made to the Hypercalcemia clinical measure. First, plasma was added as an acceptable substrate in addition to serum calcium. Second, the denominator definition changed such that it now includes patients regardless of whether any serum calcium values were reported at the facility during the 3-month study period. Functionally, this means that a greater number of patient-months will be included in this measure, because patient-months will not be excluded from the measure calculations solely because a facility reports no calcium data for that patient during the entire 3-month study period.

    We proposed to update the measure's technical specifications for PY 2018 and future years to include these two substantive changes to the Hypercalcemia clinical measure included in the ESRD QIP. These changes will positively impact data completeness in the ESRD QIP because facilities' blood tests typically use plasma calcium rather than serum calcium. Including patients with unreported calcium values in the measure calculations will encourage more complete reporting of these data. Additionally, these changes will ensure that the measure aligns with the NQF-endorsed measure and can continue to satisfy the requirements of the Protecting Access to Medicare Act of 2014 (PAMA), which requires that the ESRD QIP include in its measure set measures (outcomes-based, to the extent feasible) that are specific to the conditions treated with oral-only drugs.
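
    As a hedged sketch of the denominator change only (the record layout below is hypothetical, and the complete requirements are in the technical specifications discussed later in this section), a patient-month is no longer dropped solely because the facility reported no calcium value during the 3-month study period, and plasma calcium is accepted alongside serum calcium:

        # Hypothetical sketch of the revised denominator logic described above.
        def revised_denominator(patient_months):
            # Patient-months count even when no calcium value was reported
            # at the facility during the 3-month study period.
            return len(patient_months)

        def prior_denominator(patient_months):
            # Prior behavior, for contrast: months with no reported serum
            # calcium values during the study period were excluded.
            return sum(
                1 for pm in patient_months
                if any(substrate == "serum" for substrate, value in pm.get("calcium_values", []))
            )

        months = [
            {"patient": "A", "calcium_values": [("plasma", 9.8)]},  # plasma now acceptable
            {"patient": "B", "calcium_values": []},                 # no reported value
        ]
        print(prior_denominator(months), revised_denominator(months))  # 0 2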

    We sought comments on this proposal. The comments and our responses are set forth below.

    Comment: Several commenters requested clarification regarding the technical specifications for the Hypercalcemia Clinical Measure, noting that there is an apparent discrepancy. Specifically, they asked whether the exclusion “patients without at least one uncorrected serum calcium value at that facility during the 3-month study period” should be applicable for PYs 2017 through 2020.

    Response: We understand why there may be some confusion; however, there is no actual discrepancy in the technical specifications published at the time of the proposed rule. The technical specifications for PY 2017 are correct and do not include the exclusion “patients without at least one uncorrected serum calcium value at that facility during the 3-month study period” because the updates to the measure were proposed for PY 2018 and future years. The PY 2018 Technical Specifications published at the time of the proposed rule reflected the change that we proposed. We note below that we are now delaying implementation of this change until PY 2019, so updated Technical Specifications for PY 2018 are now published on the CMS Web site. The Technical Specifications proposed for PY 2019, published at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/ESRDQIP/Downloads/PY-2020-NPRM-NHSN-Dialysis-Event-tech-spec-for-PY-2019.pdf, only included specifications for the measure being added to the program for PY 2019 (that is, the proposed NHSN Dialysis Event Reporting Measure's specifications). The Technical Specifications proposed for PY 2020 included all measures previously finalized for inclusion in the ESRD QIP for PY 2020, as well as the substantive changes to the Hypercalcemia Clinical Measure described above.

    Because we are now finalizing the changes proposed to the Hypercalcemia Clinical Measure for PY 2019, we have provided updated Technical Specifications for PY 2018 at https://www.cms.gov/Medicare/Quality-INitiatives-Patient-Assessment-Instruments/ESRDQIP/index.html. The Technical Specifications that we are finalizing for PY 2019 and PY 2020 already contain these changes to the measure.

    Comment: One commenter recommended that CMS consult with stakeholders to determine whether a different Performance Standard should apply to home dialysis patients for the Hypercalcemia Clinical Measure, because the commenter believes the standards established in the rule are difficult for home dialysis programs to achieve due to their dependence on patient adherence and compliance. In-center hemodialysis patients are generally given their medication intravenously while they are in the dialysis center, whereas home dialysis patients need to pick up their medications and adjust dosing as directed.

    Response: We thank the commenter for their recommendation. However, “hypercalcemia is usually an inadvertent complication of the management of CKD mineral and bone disorder, so therapy should be focused on preventing the development of sustained serum calcium greater than 10.2 mg/dL.” The TEP felt that the measure's threshold (>10.2 mg/dL) addressed concerns about adverse events in patients who exceeded the upper limit of normal and therefore represented a safety concern for all ESRD patients. That safety concern, we argue, applies irrespective of whether patients are on in-center hemodialysis or home dialysis therapies (home HD or PD), and we note that the TEP did not consider separate thresholds based on modality. Based on the TEP's reasoning, we believe facilities are responsible for ensuring that home dialysis patients, as well as in-center patients, avoid calcium levels “above the normal range,” consistent with clinical practice guideline recommendations (KDIGO 2009). As such, we believe it is appropriate to include home dialysis patients in the denominator of the hypercalcemia measure.

    Comment: Several commenters expressed concerns that the Hypercalcemia clinical measure is not impactful and is not the best indicator of clinical care because it is topped out. They recommended that CMS instead focus its measure development efforts on developing and testing a more appropriate measure to meet the statutory requirement of PAMA, particularly in light of NQF's conclusion that there is very little room for improvement and that the performance gap identified by the developer did not warrant a national performance measure. One commenter specifically argued that the Hypercalcemia measure should not be characterized as a measure specific to conditions treated with oral-only drugs, because hypercalcemia is not treated only with oral-only drugs and may sometimes be treated with a calcimimetic when calcium levels have risen due to treatment with active Vitamin D, which is typically given intravenously during hemodialysis.

    Commenters also asserted that the measure provides no value to the patient and does not relate to the provision of quality care. Despite these concerns, they expressed an understanding that maintaining this measure in the ESRD QIP measure set meets the statutory requirements of PAMA, and encouraged CMS to work with the kidney care community to find replacement measures. They added that CMS should continue to track hypercalcemia, but stated that linking hypercalcemia to specific medications without including the influence of active Vitamin D is problematic and unlikely to produce reliable data. In the interim, commenters expressed support for the proposed changes to the measure to ensure that the measure continues to satisfy NQF recommendations, but urged CMS to continue monitoring the Food and Drug Administration's (FDA's) approach to new injectables because that may require CMS to reconsider its approach.

    Response: We thank the commenters for their comments. Hypercalcemia is the only measure of which we are aware that meets the statutory requirements in PAMA for an NQF-endorsed quality measure of conditions treated with oral-only medications. The measure has been recommended for reserve status endorsement by the NQF in part because of its utility as an important safety measure for dialysis patients. The NQF recommends measures for “reserve status” when they are “highly credible, reliable, and valid measures that have high levels of performance due to quality improvement actions. The purpose of reserve status is to retain endorsement of reliable and valid quality performance measures that have overall high levels of performance with little variability so that performance could be monitored in the future if necessary to ensure that performance does not decline.” 1 While hypercalcemia (defined in the measure's technical specifications as a serum calcium level greater than 10.2 mg/dL) is not a common complication among ESRD patients, it is still associated with elevated risks for mortality, suggesting that when it occurs, it can have serious consequences for patients.

    1 Glossary of Terms, National Quality Forum, https://www.qualityforum.org/Measuring_Performance/.../NQF_Glossary.aspx.

    We recognize that the Hypercalcemia measure is not a comprehensive measure of all oral-only medications, but limitations in the available evidence have prevented us from developing measures that more broadly address the oral-only medications used in the ESRD dialysis population. We will continue to work with the community to develop more comprehensively applicable measures that meet these requirements. Three TEPs were convened, in 2006, 2010, and 2013, to address the topic of mineral bone disease measures, but the limited clinical evidence available has prevented those panels from recommending any measures that identify elevated levels of parathyroid hormone (PTH) or phosphorus. We have consulted with the dialysis community on this matter and will continue to do so, but we are unaware of any other specified and NQF-endorsed measure that would meet the requirements in PAMA. As evidence evolves to support more comprehensive measures of conditions treated with oral-only drugs, and as those measures earn consensus endorsement, we agree that it will be appropriate to carefully consider the role of the Hypercalcemia measure in the ESRD QIP.

    Comment: One commenter expressed concerns about the effect the proposed changes to the Hypercalcemia clinical measure may have on facilities' TPSs and requested that CMS evaluate the impact of these changes on facility scores to ensure that no facility is penalized due to a change in methodology.

    Response: We have conducted additional analyses, the results of which are published here: https://www.cms.gov/Medicare/Quality-INitiatives-Patient-Assessment-Instruments/ESRDQIP/index.html. An analysis of the effect the changes to the Hypercalcemia clinical measure will have on payment reductions shows that only 11 additional facilities would receive a payment reduction under the new methodology compared to the old methodology. Table 4 below shows simulated payment reductions for PY 2020 using the old Hypercalcemia methodology (on the left) and the new Hypercalcemia methodology (on the right).

    Table 4—PY 2020 Simulated Payment Reductions Comparing Prior Hypercalcemia Methodology to New Hypercalcemia Methodology

    Payment reduction    Prior hypercalcemia methodology (N (%))    New hypercalcemia methodology (N (%))
    0.0%                 3322 (55.2%)                               3311 (55.0%)
    0.5%                 1552 (25.8%)                               1538 (25.5%)
    1.0%                 823 (13.7%)                                832 (13.8%)
    1.5%                 255 (4.2%)                                 269 (4.5%)
    2.0%                 69 (1.2%)                                  71 (1.2%)
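
    The “11 additional facilities” figure cited above follows directly from the first row of Table 4: the number of facilities receiving no payment reduction falls from 3322 under the prior methodology to 3311 under the new methodology, so

```latex
3322 - 3311 = 11
```

    facilities move from no reduction into one of the payment reduction tiers.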

    Comment: One commenter expressed concern that the ESRD QIP has not adopted a measure specific to bone mineral disorder. The commenter noted that CMS correctly identified calcimimetics and phosphate binders as two types of oral-only drugs, but argued that CMS incorrectly identified the three conditions that are treated with these two classes of drugs, and encouraged CMS to continue looking at measures specific to Chronic Kidney Disease (CKD) Mineral Bone Disease (MBD) broadly. They specifically recommended a composite measure that would focus on the three biochemical parameters associated with CKD MBD, namely calcium, phosphorus, and PTH, rather than focusing on one individual biochemical parameter in isolation.

    Response: We thank the commenter for raising concerns about adopting measures specific to bone mineral disorder. At present, we have two measures that address mineral bone disorder (MBD). We finalized a measure of hypercalcemia (NQF #1454) beginning with the PY 2016 program and we are finalizing the implementation of a phosphorus reporting measure (NQF #0255) beginning with PY 2020.

    The 2013 Mineral and Bone Disorder TEP recognized the currently limited evidence supporting development of a new MBD measure. The panel repeatedly noted that the lack of randomized clinical trials limited the evidence available to inform recommendations for proposed measures and to meet the criterion of scientific acceptability. The TEP did discuss the strength of evidence regarding PTH as a risk factor in light of recent randomized trials, including EVOLVE (2012) and the ADVANCE study (2011).2 The TEP did not reach agreement on the strength of the evidence, but concluded that these two trials are the strongest bodies of evidence to have emerged since the 2010 TEP was convened. The 2013 TEP recognized that the previously cited problem with PTH assay variability could be overcome if the same assay is used each time, and that, given the normal physiologic oscillations in PTH, measurement should be conducted more often to minimize variability. To that end, the TEP recommended a process measure that included documenting measurement of PTH and documentation of the assay used. This measure still needs to undergo testing once the required data elements are available for collection from dialysis facilities via CROWNWeb or another system.

    2 Raggi P, Chertow GM, Torres PU, et al. “The ADVANCE study: A randomized study to evaluate the effects of cinacalcet plus low-dose vitamin D on vascular calcification in patients on hemodialysis.” Nephrology, dialysis, transplantation: Official publication of the European Dialysis and Transplant Association—European Renal Association (2011) 26; 1327-39. PMID 21148030.

    EVOLVE Trial Investigators, Chertow GM, Block GA, et al. “Effect of cinacalcet on cardiovascular disease in patients undergoing dialysis.” The New England journal of medicine (2012) 367:2482-94. PMID: 23121374.

    The 2013 TEP members agreed that the combination of laboratory values (PTH with calcium and phosphorus) may be more predictive of mortality, but because each lab value changes individually, it would be very difficult to make a recommendation based on a combination. It should also be noted that the kidney care community would more readily support such a composite measure if each constituent measure were NQF-endorsed. One PTH measure and two phosphorus measures were previously submitted to NQF (in 2010). These measures were not endorsed due to the lack of evidence supporting a PTH target or range and, similarly, the lack of evidence to support a target for phosphorus. The suggested composite measure may be conceptually satisfying, but we are concerned that we lack sufficient evidence to justify implementing such a measure at this time.

    Comment: One commenter objected to the continued inclusion of the hypercalcemia measure in the QIP and encouraged CMS to consult with stakeholders to develop a more appropriate measure specific to the conditions treated with oral-only drugs. One commenter added that until CMS develops and implements a more suitable measure, calcimimetic agents should not be included in the ESRD PPS base rate.

    Response: We continue to believe that the hypercalcemia measure most effectively meets current statutory requirements as defined by MIPPA to include measures of mineral metabolism, and by PAMA, to include measures specific to conditions treated with oral-only drugs that are NQF-endorsed. As far as we are aware, there are no other clinical performance measures that currently meet these criteria.

    Comment: One commenter opposed the implementation of technical changes to the Hypercalcemia Clinical Measure for PY 2018 and recommended a delay until PY 2019 because facilities are currently in the performance period for PY 2018. They argued that it is inappropriate to change the technical specifications halfway through a performance period.

    Response: We thank the commenter for their suggestion, and we agree that it would be unfair to facilities to make this change for PY 2018, given that the changes were not proposed until more than halfway through the performance period. The substantive modifications to the Hypercalcemia clinical measure were made during the NQF measure maintenance process that concluded at the end of last year, and while we believe it is crucial to keep measures in the ESRD QIP measure set consistent with NQF-endorsed specifications, we also recognize that notice should be given to facilities prior to making such substantive changes. The changes to the Hypercalcemia Clinical Measure will not affect the way in which facilities provide care to their beneficiaries or the reporting requirements for the measure. Rather, this change will affect the way this measure is calculated, because the denominator definition has changed such that it now includes patients regardless of whether any serum calcium values were reported at the facility during the 3-month study period. Eligible facilities that do not report data for 3 consecutive months will be included in both the numerator and denominator for this measure's calculations. Functionally, facilities do not need to make any changes in response to the changes proposed.

    Final Rule Action: In consideration of the comments received, we are finalizing the changes to the hypercalcemia measure's technical specifications for PY 2019 and future years, rather than for PY 2018 as proposed. We note that these changes will positively impact data completeness, because facilities typically use plasma calcium blood tests, and including patients with unreported calcium values in the measure calculation will encourage more complete data. Lastly, these measure changes will ensure alignment with the NQF-endorsed measure and satisfy the statutory requirements set forth in PAMA.

    D. Requirements for the PY 2019 ESRD QIP

    1. New Measures for the PY 2019 ESRD QIP

    a. Reintroduction of the Expanded NHSN Dialysis Event Reporting Measure

    We first adopted the National Healthcare Safety Network (NHSN) Dialysis Event Reporting Measure for the PY 2014 ESRD QIP. For that program year, we required facilities to (1) enroll in the NHSN and complete any training required by the Centers for Disease Control and Prevention (CDC); and (2) submit 3 or more consecutive months of dialysis event data to the NHSN (76 FR 70268 through 69). For PY 2015, we retained the requirement for facilities to enroll in the NHSN and complete any training required by the CDC, but expanded the reporting period to require facilities to report a full 12 months of dialysis event data (77 FR 67481 through 84). Beginning with PY 2016, we replaced the NHSN Dialysis Event Reporting Measure with the clinical version of the measure (78 FR 72204 through 07). As a result, facilities were scored for purposes of the ESRD QIP based on how many dialysis events they reported to the NHSN in accordance with the NHSN protocol. We introduced the clinical version of the measure because we believed that the measure would hold facilities accountable for monitoring and preventing infections in the ESRD population. We continue to believe it is vitally important to hold facilities accountable for their actual clinical performance on this measure.

    Since we introduced the NHSN Bloodstream Infection (BSI) Clinical Measure into the ESRD QIP, some stakeholders have expressed significant concerns about two distinct types of accidental or intentional under-reporting. First, these stakeholders believe that many facilities do not consistently report monthly dialysis event data for the full 12-month performance period. Second, these stakeholders believe that even with respect to the facilities that report monthly dialysis event data, many of those facilities do not consistently report all of the dialysis events that they should be reporting (80 FR 69048). These public comments, as well as our thorough review of data reported for the PY 2015 NHSN Dialysis Event Reporting Measure and results from the PY 2014 NHSN data validation feasibility study, suggest that as many as 60 to 80 percent of dialysis events are under-reported.3 4

    3 Duc B. Nguyen et al. Completeness of Methicillin-Resistant Staphylococcus aureus Bloodstream Infection Reporting From Outpatient Hemodialysis Facilities to the National Healthcare Safety Network, 2013. Infection Control & Hospital Epidemiology, http://journals.cambridge.org/abstract_S0899823X15002652.

    4 Nicola D. Thompson, Matthew Wise, Ruth Belflower, Meredith Kanago, Marion A Kainer, Chris Lovell and Priti R. Patel. Evaluation of Manual and Automated Bloodstream Infection Surveillance in Outpatient Dialysis Centers. Infection Control & Hospital Epidemiology, Available on CJO 2016 doi: 10.1017/ice.2015.336.

    We believe that there are delicate tradeoffs associated with incentivizing facilities to both report monthly dialysis event data and to accurately report such data. On the one hand, if we incentivize facilities to report monthly dialysis event data but do not hold them accountable for their performance, we believe that facilities will be more likely to accurately report all dialysis events. Complete and accurate reporting is critical to maintaining the integrity of the NHSN surveillance system, enables facilities to implement their own quality improvement initiatives, and enables the CDC to design and disseminate prevention strategies. Nevertheless, incentivizing full and accurate reporting without financial consequences for poor performance will not necessarily improve patient safety. On the other hand, if we incentivize facilities to achieve high clinical performance scores without also incentivizing them to accurately report monthly dialysis event data, we believe that facilities will be less likely to report complete and accurate monthly data, which could diminish the integrity of the NHSN surveillance system and the quality improvement efforts that it supports. Maintaining an incentive structure along these lines increases the financial consequences for not achieving high clinical scores, but jeopardizes the accuracy and completeness of the dialysis event data upon which those scores are based.

    In light of these considerations, we believe that the best way to strike the proper balance between these competing interests is to propose to reintroduce the expanded NHSN Dialysis Event Reporting Measure, beginning with PY 2019, and to include both this measure and the NHSN BSI Clinical Measure in the ESRD QIP measure set.

    In combination with other programmatic features described in the proposed rule (see sections IV.C.2. and IV.C.8. of the proposed rule (81 FR 42824)), we believe this reporting measure will bolster incentives for facilities to report complete and accurate data to NHSN, while the clinical measure will preserve incentives to reduce the number of dialysis events. We believe that including both of these measures in the ESRD QIP measure set will ensure that we hold facilities accountable for the frequency with which they report data to the NHSN and will address validation concerns related to the two distinct types of under-reporting of data, described above.

    Beginning with PY 2019, we proposed that facilities must enroll in NHSN and complete any training required by the CDC related to reporting dialysis events via NHSN, and that they must report monthly dialysis event data on a quarterly basis to the NHSN. We also proposed that each quarter's data would be due 3 months after the end of the quarter. For example, data from January 1 through March 31, 2017 would need to be submitted to NHSN by June 30, 2017; data from April 1 through June 30, 2017 would need to be submitted by September 30, 2017; data from July 1 through September 30, 2017 would need to be submitted by December 31, 2017; and data from October 1 through December 31, 2017 would need to be submitted by March 31, 2018. For further information regarding NHSN's dialysis event reporting protocols, please see http://www.cdc.gov/nhsn/pdfs/pscmanual/8pscdialysiseventcurrent.pdf. These requirements are the same ones that previously applied to the expanded NHSN Dialysis Event Reporting Measure when that measure was included in the ESRD QIP (77 FR 67481 through 84).
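
    As an illustration of the quarterly deadline rule described above (each calendar quarter's data are due 3 months after the quarter ends), the submission deadline could be computed roughly as follows. This is a sketch only; the function name and date handling are our own and are not part of the NHSN protocol.

```python
import calendar
from datetime import date

def dialysis_event_deadline(year: int, quarter: int) -> date:
    """Return the illustrative submission deadline for a calendar quarter's
    dialysis event data: the last day of the third month after the quarter
    ends (e.g., Q1 2017 data are due June 30, 2017)."""
    deadline_month = quarter * 3 + 3          # 3 months after the quarter-end month
    deadline_year = year + (deadline_month - 1) // 12
    deadline_month = (deadline_month - 1) % 12 + 1
    last_day = calendar.monthrange(deadline_year, deadline_month)[1]
    return date(deadline_year, deadline_month, last_day)

# Example: the 2017 quarters map to the deadlines given in the rule text.
assert dialysis_event_deadline(2017, 1) == date(2017, 6, 30)
assert dialysis_event_deadline(2017, 4) == date(2018, 3, 31)
```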

    Section 1881(h)(2)(B)(i) of the Act requires that, unless the exception set forth in section 1881(h)(2)(B)(ii) of the Act applies, the measures specified for the ESRD QIP under section 1881(h)(2)(A)(iii) of the Act must have been endorsed by the entity with a contract under section 1890(a) of the Act (which is currently the NQF). Under the exception set forth in section 1881(h)(2)(B)(ii) of the Act, in the case of a specified area or medical topic determined appropriate by the Secretary for which a feasible and practical measure has not been endorsed by the entity with a contract under section 1890(a) of the Act, the Secretary may specify a measure that is not so endorsed, so long as due consideration is given to measures that have been endorsed or adopted by a consensus organization identified by the Secretary. The proposed NHSN Dialysis Event Reporting Measure is not endorsed by the NQF, but for the reasons explained above, we believe that it is appropriate to assess facilities solely based on whether they actually report full and accurate monthly dialysis event data to the NHSN. Although we recognize that the NHSN BSI Clinical Measure is currently included in the ESRD QIP measure set and that this measure and the proposed NHSN Dialysis Event Reporting Measure would be calculated using the same set of data, the two measures assess different outcomes. We believe that including both of these measures in the ESRD QIP measure set will collectively support our efforts to ensure that facilities report, and are scored based on, complete and accurate dialysis event data.

    We sought comments on this proposal. The comments and our responses are set forth below.

    Comment: Several commenters did not support the proposal to reintroduce the Expanded NHSN Dialysis Event Reporting Measure, calling into question the validity and reliability of the clinical measure. They argued that the 60 to 80 percent under-reporting of dialysis events demonstrates that the NHSN BSI Clinical Measure is not valid, and added that with that lack of validity comes uncertainty about whether the measure produces accurate findings. They argued that CMS should not finalize the measure, because giving facilities extra credit will not move the needle in ensuring that all events are reported, nor will this change the difficulties facilities have in obtaining information from hospitals. Several commenters also urged us to include the NHSN BSI Measure as a Reporting Measure for PY 2018 and PY 2019, and to discontinue the inclusion of the NHSN BSI Clinical Measure until reliability and validity testing of the Clinical Measure has been completed.

    Response: Although previous studies have suggested that 60 to 80 percent of bloodstream infections might be under-reported to NHSN, these results must be considered in the proper context. First, it is important to note that these studies have largely attributed under-reporting to poor communication of reportable positive blood cultures (PBCs) from hospitals to dialysis centers when bloodstream infections are identified in hospitals. Second, these studies are based on small sample sizes. Although we are aware that under-reporting can occur in all dialysis facilities, the degree of variation in under-reporting across facilities is unknown, and it is this variation, rather than the overall rate of under-reporting, that more directly bears on the reliability of the ESRD QIP measure. Under-reporting by itself does not make the measure unreliable.

    The NHSN BSI measure has been endorsed by the National Quality Forum (NQF). The quantitative centerpiece of the NQF-endorsed NHSN Dialysis Event Bloodstream Infection Measure is the Standardized Infection Ratio (SIR), which is the ratio of observed to predicted events. Because the SIR has withstood scrutiny from NQF, which explicitly considered the measure's reliability, we continue to believe that it is reliable enough to remain in the ESRD QIP measure set.
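
    For reference, the SIR described above reduces to a simple ratio, shown here in LaTeX as a sketch; the risk model that produces the predicted count is defined in the NQF-endorsed measure specifications and is not reproduced here.

```latex
\mathrm{SIR} = \frac{\text{observed bloodstream infection events}}{\text{predicted bloodstream infection events}}
```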

    We recognize that there are shortfalls in BSI ascertainment for purposes of reporting and that more needs to be done to improve the quality and completeness of data used in the NHSN BSI measure. Nevertheless, the measure itself remains an important tool for assessing the quality of care and closing performance gaps when and where they are identified, and there is no other measure available that would serve this purpose. We believe that further improvements in the reliability of NHSN data can be achieved through more complete communication between hospitals and dialysis facilities of relevant measure data, in particular the results of diagnostic microbiology testing by hospitals that are indicative of bloodstream infections in dialysis patients. We also believe that more robust validation of measure data, such as the validation approach we are finalizing, offers additional safeguards against incomplete case finding and shortcomings in measure data. Additionally, the CDC has encouraged dialysis providers, especially large dialysis organizations, to perform a validation of their own data. The CDC has provided a validation toolkit, available for any facility to use on its own. The goal of the validation, whether performed internally or by an external observer, is to improve the quality of the data. Taking all these considerations into account, we believe that on balance the ESRD QIP and patients' interests are best served by retaining the NHSN BSI measure in the ESRD QIP measure set.

    Comment: Several commenters supported the reintroduction of the NHSN Dialysis Event Reporting Measure, as well as the continued inclusion of the NHSN BSI Clinical Measure and the creation of the NHSN BSI Measure Topic, because BSIs are serious events in ESRD patients. They argued that the integrity of the data submitted is essential for accurate analysis and benchmarking to improve BSI prevention, and that under-reporting can be a serious hindrance to data accuracy. One commenter suggested that scoring should be modified to incentivize reporting only for 12 complete months of data, awarding no points for incomplete reporting. One commenter recommended that CMS ensure that facilities that report accurately are not singled out as having worse outcomes because they are engaged in quality improvement projects, and that CMS develop a process for providing monthly feedback to providers so they can identify inconsistencies in their own reporting. One commenter also recommended that both the CDC and CMS validate the data in a timely manner, and that NHSN data be bi-directional such that a facility could review submitted data, analyze it to determine why there are inconsistencies, and make any necessary corrections to its process.

    Response: We thank commenters for their support, and we agree that this approach will appropriately address bloodstream infections in ESRD patients. We agree that the integrity of the data submitted is essential for accurate analysis and benchmarking, and that is precisely the reason we have taken the approach proposed. We hope that by incentivizing complete reporting, we will obtain as much information as possible to accurately analyze and benchmark the data for the NHSN BSI Clinical Measure, and that by incentivizing the reduction of infections among facilities' patients, we will encourage facilities to pay close attention to these important events. Similarly, we believe that the increased data validation study we are finalizing and our updated data validation methodology will help us to determine the extent and types of under-reporting that are occurring. We disagree that the scoring methodology should be modified to incentivize reporting only for 12 complete months of data, because there is still some value in reporting 6-11 months of data. We believe our scoring methodology makes it clear that 12 complete months are ideal, but we still value the effort facilities make in reporting 6-11 months of data, and we believe it is important to recognize that through the methodology. Regarding the commenter's suggestion to institute a bi-directional data validation process, NHSN data are already bi-directional. The data are immediately available within NHSN to be viewed and edited. CDC encourages all facilities to review their data on a regular basis to identify and correct errors. A dialysis data review tool is available here: http://www.cdc.gov/nhsn/pdfs/dialysis/3-steps-to-review-de-data-2014.pdf. It can be found on the following page under “Analysis Resources to Create Reports”: http://www.cdc.gov/nhsn/dialysis/event/index.html.

    Final Rule Action: For the reasons stated above, we are finalizing our proposal to reintroduce the NHSN Dialysis Event Reporting Measure to the ESRD QIP beginning with PY 2019 as proposed.

    b. Scoring the NHSN Dialysis Event Reporting Measure

    With respect to the NHSN Dialysis Event Reporting measure, we proposed to score facilities with a CCN Open Date on or before January 1, 2017. Using the methodology described below, we proposed to assign the following scores for reporting different quantities of data:

    Scoring Distribution for the Proposed NHSN Dialysis Event Reporting Measure

    Number of reporting months    Points
    12 months                     10
    6-11 months                   2
    0-5 months                    0

    We selected these scores for the following reasons: First, due to the seasonal variability of bloodstream infection rates, we want to incentivize facilities to report the full 12 months of data and reward reporting consistency over the course of the entire performance period. We therefore proposed that facilities will receive 10 points for submitting 12 months of data. Second, we recognized, however, that from the perspective of national prevention strategies and internal quality improvement initiatives, there is still some value in collecting fewer than 12 months of data from facilities. We also stated that we would need at least 6 months of data in order to calculate reliable scores on the NHSN BSI Clinical Measure. For these reasons, we proposed that facilities will receive 2 points for reporting between 6 and 11 months of dialysis event data. Finally, in consultation with the CDC, we have determined that NHSN BSI Clinical Measure rates are not reliable when they are calculated using fewer than 6 months of data. For that reason, we proposed that a facility will receive 0 points on the NHSN Dialysis Event Reporting Measure if it reports fewer than 6 months of data.
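
    A minimal sketch of the proposed scoring tiers described above; the function name is illustrative, and months_reported is assumed to be the count of months of dialysis event data a facility submitted to NHSN during the performance period.

```python
def nhsn_reporting_measure_points(months_reported: int) -> int:
    """Proposed NHSN Dialysis Event Reporting Measure scoring for PY 2019:
    10 points for 12 months of data, 2 points for 6-11 months, 0 otherwise."""
    if months_reported >= 12:
        return 10
    if months_reported >= 6:
        return 2
    return 0
```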

    The proposed scoring methodology for the NHSN Dialysis Event Reporting Measure differs slightly from what we finalized for PY 2015. For that year of the program, facilities were awarded 0 points for reporting fewer than 6 months of data, 5 points for reporting 6 consecutive months, and 10 points for reporting all 12 months of data. We believe that it is appropriate to reduce the number of points facilities receive for reporting 6-11 months of data from 5 to 2 because by PY 2019, facilities will have had 3 more years of experience reporting data to NHSN than they had for PY 2015.

    We sought comments on this proposal. The comments and our responses are set forth below.

    Comment: One commenter supported CMS's proposed methodology for scoring the proposed NHSN BSI Measure Topic and the NHSN Reporting Measure because it rewards dialysis facilities that have made investments to support robust surveillance programs by allowing for monthly data input. The commenter added that the proposed scoring methodology strongly encourages facilities to report all 12 months of data, which serves to improve the integrity of the data.

    Response: We thank the commenter for its support, and we agree that our proposed scoring methodology will encourage facilities to report all 12 months of data and that this will in turn improve the integrity of the data.

    Comment: Several commenters did not support the proposal for scoring the proposed NHSN Dialysis Event Reporting Measure because it inappropriately penalizes facilities and, combined with the proposed weight of the measure for PY 2019, does not accurately distinguish among facilities that fail to report varying amounts of data. Commenters noted that missing one month of reporting is not the same as missing 5 months, yet the proposed scoring methodology treats these situations the same. One commenter expressed concern about facilities that may miss something very insignificant for 1 month and then lose 8 points, and recommended that the measure be scored in the same way that the Mineral Metabolism reporting measure is currently scored, because that approach would still encourage a facility to report 12 months. Two commenters argued that a sliding scale would be more appropriate. One commenter specifically recommended that CMS consider 0 points for 0 months of data, 1 point for 1-2 months of data, and so on. Another commenter recommended that CMS change the weight of the NHSN BSI Clinical Measure to make it one quarter the weight of the other clinical measures.

    Response: We thank commenters for their suggestions; however, we disagree that the proposed scoring methodology for the NHSN Dialysis Event Reporting Measure inappropriately penalizes facilities. In fact, we believe the scoring methodology appropriately rewards facilities for complete reporting and for their efforts at preventing infections, and that this scoring approach is consistent with the ESRD QIP's goal of incentivizing complete and accurate reporting as well as successful efforts to prevent bloodstream infections. Unlike the Mineral Metabolism Reporting measure, facilities need to report all twelve months of data to NHSN in order to appropriately score and baseline the NHSN BSI Clinical Measure, because there is seasonal variability in bloodstream infection rates. A sliding scale would not appropriately incentivize facilities to report the full 12 months' worth of data, which is needed to accurately score the NHSN BSI clinical measure. Additionally, we do not believe that reporting 1-2 months' worth of data significantly contributes to national prevention campaigns and internal quality improvement initiatives, and we therefore do not believe that it is appropriate to allocate any points on the reporting measure for this level of reporting. We want to incentivize facilities to report the full 12 months of data because without these data, the surveillance program that the CDC has established to monitor bloodstream infections will not function to its fullest extent. Scoring the reporting measure on a sliding scale is therefore inconsistent with the need to provide strong incentives for facilities to report the full 12 months of data. We recognize that facilities occasionally have difficulty accessing the NHSN system, and the CDC is diligently working to ensure that facilities have the information and training that they need to report successfully, but we believe that the system functions appropriately and does not impose impediments that would prevent facilities from reporting data on a monthly basis. Although the NHSN BSI clinical measure cannot be scored accurately on the basis of fewer than 12 months of data, from the perspective of national prevention strategies and internal quality improvement initiatives there is still some value in collecting between 6 and 11 months of data. This is why we have proposed to give facilities that do so 2 points on the Reporting Measure, even though they will continue to receive a score of zero on the NHSN BSI clinical measure.

    Final Rule Action: After consideration of the comments above, we are finalizing the proposal for scoring the NHSN Dialysis Event Reporting Measure, described above, beginning in PY 2019. We believe this is the best way to incentivize complete and accurate reporting of NHSN data.

    2. New Measure Topic Beginning With the PY 2019 ESRD QIP—NHSN BSI Measure Topic

    Beginning with PY 2019, we proposed to create a new NHSN BSI Measure Topic. We proposed that this measure topic would consist of the following two measures:

    (i) NHSN Bloodstream Infection (BSI) in Hemodialysis Patients, a clinical measure;

    (ii) NHSN Dialysis Event Reporting Measure.

    We stated our belief that it is appropriate to combine these two measures into one measure topic because data from the reporting measure will be used to score both that measure and the clinical measure, and combining both measures under the same measure topic will better enable us to precisely calibrate incentives for complete and accurate reporting and high clinical performance. The NHSN BSI Clinical Measure and the NHSN Dialysis Event Reporting Measure are mutually reinforcing because one measure encourages accurate reporting while the other uses the reported data to assess facility performance on preventing BSIs in their patients. Therefore, combining the reporting and clinical measures under the same measure topic will simplify the process of weighting each of the two measures, such that incentives from one measure can be simply reallocated to the other if new evidence suggests that the incentives are not properly balanced to optimize both reporting and prevention.

    We sought comments on this proposal. The comments and our responses are set forth below.

    Comment: Two commenters supported the proposed creation of the NHSN BSI Measure Topic because it encourages accurate reporting as well as the prevention of bloodstream infections, but one commenter recommended that in an effort to avoid confusion, the two measures that comprise the Measure Topic should be renamed to avoid referring to them as either “Clinical” or “Reporting” measures. They suggested instead that CMS change the “NHSN Bloodstream Infection in Hemodialysis Patients Clinical Measure” name to “NHSN Bloodstream Infection in Hemodialysis Patients” without referring to it as a Clinical Measure and suggested changing the name of the “NHSN Dialysis Event Reporting Measure” to “NHSN Dialysis Event Surveillance” or “NHSN Dialysis Event Participation” or even “NHSN Dialysis Event Data Entry”.

    Response: We thank commenters for their support of the proposed NHSN BSI Measure Topic. However, we disagree that the names of the measures should be changed as the commenter recommended. The NHSN BSI Clinical Measure is correctly referred to as a Clinical Measure because it measures the Standardized Infection Ratio (SIR) of BSIs among patients receiving hemodialysis at outpatient hemodialysis centers and is therefore a measure of the care being provided to beneficiaries. Similarly, the NHSN Dialysis Event Reporting Measure is correctly referred to as a Reporting Measure because it measures the number of months for which facilities report NHSN Dialysis Event data to the CDC's NHSN system and is therefore a measure of the completeness of a facility's data reporting. We agree with the commenter that the proposed Measure Topic is neither purely clinical nor purely reporting, which is why we have proposed to place it within its own Safety Domain. However, the two measures that make up the Measure Topic are still fundamentally different in that one is a Clinical Measure and one is a Reporting Measure.

    Comment: In light of the reliability issues discussed above, commenters encouraged CMS to retain the NHSN BSI Measure as a Reporting Measure, and not to finalize the NHSN BSI Measure Topic or the proposed addition of the Safety Measure Domain in the QIP until CMS resolves the issues surrounding the reliability and validity of the Clinical Measure.

    Response: We thank commenters for their suggestion; however, we have decided to finalize the NHSN BSI Measure Topic and the Safety Measure Domain. As discussed above, the under-reporting identified in studies of bloodstream infection reporting to NHSN was largely attributed to poor communication of reportable positive blood cultures from hospitals to dialysis centers, and those studies were based on small sample sizes. We do not believe they are generally indicative of any issues of reliability or validity with the NHSN BSI measures. And we continue to believe that it is essential to retain the NHSN BSI clinical measure because it is absolutely critical to evaluate facilities' efforts to prevent bloodstream infections. In light of this need to retain the NHSN BSI clinical measure, we continue to believe that the introduction of the NHSN BSI Measure Topic and the addition of the Safety domain is the best way to ensure complete and accurate reporting of data while also holding facilities accountable for preventing bloodstream infections.

    Comment: One commenter offered to work with CMS to address the validity issues in the NHSN BSI measure and stated that ensuring the appropriate sharing of patient information between hospitals and dialysis facilities is a priority, but that until that problem is solved and the validity of the NHSN Bloodstream Infection measure has been affirmed, they cannot support the proposed approach to NHSN.

    Response: We agree that it is vitally important to ensure the appropriate sharing of patient information between hospitals and dialysis facilities. We have addressed the commenter's concerns about the validity of the NHSN BSI measure above, in section IV.D.1.a. Regarding the commenter's suggestions surrounding communication between dialysis facilities and hospitals, we encourage facilities to implement processes and procedures to ensure that they are best able to receive information from local hospitals and that they are coordinating the care of their patients in the most effective ways possible.

    Comment: One commenter expressed concerns that the data specifications for the NHSN BSI Clinical Measure require collection of events from dialysis center and non-dialysis outpatient laboratories. They added that this measure originated in the hospital setting where all cultures are sent to a single lab, but extra data collection efforts are needed in the dialysis setting because cultures are performed at a variety of sites of care. They requested additional data testing to show that this is actually occurring. They added that the providers who are complying with the data specifications will likely appear to have a higher infection rate as more infections will be captured, whereas those who are not collecting data from other providers may not be accurately reporting all infections.

    Response: We are aware that under-reporting can occur and, in some studies, has been largely attributed to poor communication of reportable positive blood cultures (PBCs) from hospitals to dialysis centers. The measure did not originate in the hospital setting; it has always been an outpatient dialysis center measure. The reporting of PBCs collected within one calendar day of a hospital admission is a necessary element of the BSI measure. Without it, facilities could refer most or all patients to an ED or hospital for suspected BSI and the measure would be compromised. We recognize that obtaining this information from hospitals can be challenging and requires knowledge and implementation of the NHSN protocol. However, CDC, CMS, and other stakeholders in the dialysis community agree that good communication across care transitions is important not just for surveillance, but for optimal clinical care of patients. ESA dose, hepatitis B status, and communication of antibiotics prescribed and planned duration of treatment are just a few examples of information that should routinely be shared across healthcare facilities. A positive blood culture and organism identification and susceptibility results are equally important to communicate. CDC hosts protocol trainings that users should attend yearly to ensure NHSN participants are aware of the protocol requirements. CDC has also made available data validation tools that facilities can use to assess their knowledge of and adherence to the reporting protocol. Facilities are given 90 days from the end of a quarter (before the reporting deadline) to facilitate obtaining records from hospitals and EDs. CDC is working with ESRD Networks and others to improve hospital-to-dialysis center communication. Networks will target facilities that have challenges obtaining these data from hospitals to assist them in developing more effective communication strategies. Together, we are actively seeking best practice strategies that can be shared with other facilities.

    Comment: One commenter requested that the CDC and CMS address potential data quality issues before the NHSN BSI Clinical Measure is used in the QIP and specifically requested that the CDC produce a histogram of infection events to determine if a bimodal distribution exists, which would suggest data reporting issues. They also recommended that CMS update the data submission process for CROWNWeb to improve data accuracy and reduce costs. They suggested that one solution may be to enable dialysis providers to “copy and paste” their entire database to CMS and that CMS and CDC should release histograms to determine if the NHSN BSI metric is truly valid and should be used in the QIP as currently structured.

    Response: We thank the commenter for their suggestions, and we will consider developing histograms of this nature for future analysis. We are constantly seeking ways to improve data accuracy and to reduce costs for facilities. We will take the commenter's ideas about improving the data submission process for CROWNWeb into consideration for future updates of the CROWNWeb system.

    Comment: A commenter requested that CMS establish a minimum threshold for data submission completeness before using CROWNWeb data for the ESRD QIP or for other purposes, and suggested that this could be accomplished by comparing the number of Medicare beneficiaries at a given facility who have claims with the number of patients with accepted data in CROWNWeb. One commenter also recommended that CMS validate patient counts against provider Electronic Medical Records to determine when the minimum threshold for the use of both Medicare and non-Medicare CROWNWeb data is met.

    Response: We thank the commenter for their suggestions. At this time, we are not proposing to establish a minimum threshold for data submission completeness; however, as we stated in the CY 2014 ESRD PPS Final Rule (78 FR 72210), we encourage facilities to ensure that their patient censuses are accurately reflected in CROWNWeb. In this way, facilities can compare for themselves the number of Medicare beneficiaries they have seen and who have claims with the number of patients with accepted data in CROWNWeb attributed to their facility. With regard to validation, we agree that updates should be made to CROWNWeb to ensure that accurate data passes validation testing while also ensuring that inaccurate data is not used to calculate scores on ESRD QIP clinical performance measures, and we are in the process of enhancing CROWNWeb to accomplish this task. Nevertheless, facilities are ultimately responsible for ensuring that patient data is accurately reflected in CROWNWeb.

    Comment: One commenter urged CMS to change the definition of “positive blood culture” for the NHSN BSI Clinical Measure to ensure that positive blood cultures are only counted toward the measure calculation if the suspected source of blood culture was “vascular access,” not any of the other three options.

    Response: As we stated in the CY 2014 ESRD PPS Final Rule (78 FR 72205), NQF endorsed a bloodstream infection measure (NQF #1460, the measure upon which the proposed NHSN BSI Clinical Measure is based) because BSIs can be objectively identified. NQF raised concerns about an access-related bloodstream infection measure because determining the source of infections (for example, determining whether an infection was related to vascular access) requires subjective assessments. The NHSN BSI Clinical Measure avoids this subjectivity by including all positive blood cultures. This makes it simpler and more reliable than an access-related bloodstream infection measure. While we recognize that the NHSN BSI Clinical Measure may occasionally misattribute BSIs to dialysis facilities, we believe that the measure's objectivity, simplicity, and reliability make it the most appropriate measure for assessing facility performance. NHSN relies upon the use of standard definitions to ensure that infection events are reported in the same manner across facilities. The vast majority of reported bloodstream infection events represent true healthcare-associated infections (HAIs) that are not the result of misclassification or misattribution. Therefore, considering the benefits to patients associated with strong incentives to reduce BSIs, we believe that these technical issues are not significant enough to warrant changing the definition of “positive blood culture” for purposes of this measure. CDC will continue to assess the possibility that certain facility-related factors could systematically overestimate infection rates, and it will consider risk-adjusting the measure to take these factors into account.

    Comment: One commenter argued that when entering data for NHSN, it would be more logical for facilities to report the number of patients who were treated on the last two working days of the month, not the first two. A growing clinic's census can increase dramatically over the course of a month, and entering a small number for the first two days, as opposed to a larger number for the last two days, will cause the estimated number of blood cultures to be lower. This in turn affects facility scoring, because a denominator derived from the first two working days of the month is not representative of the patient population treated at the facility during that full month.

    Response: To reduce the burden of manual denominator data collection, the National Healthcare Safety Network (NHSN) uses the number of patients dialyzed at a clinic during the first two working days of a reporting month as a proxy measure for the total number of patient-days-at-risk during that month.
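
    A minimal sketch of that proxy, assuming a hypothetical representation in which each working day's roster of treated patients is a set of patient identifiers; the data structure and function name are illustrative and are not NHSN's.

```python
def nhsn_proxy_denominator(working_day_rosters):
    """Illustrative proxy denominator: the number of unique patients dialyzed
    at the clinic during the first two working days of the reporting month.

    working_day_rosters: ordered list of sets of patient IDs, one set per
    working day of the month (earliest first).
    """
    patients = set()
    for roster in working_day_rosters[:2]:
        patients |= roster
    return len(patients)
```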

    In a small study, CDC compared the NHSN denominator to various denominator measures, including the last 2 days of the month and the entire month, using electronically captured data, and found that the first two working days provided a generally good estimate of the entire-month denominator.

    Specifically, the results revealed a strong correlation between the monthly total denominator and the NHSN denominator, and between the NHSN denominator and the other denominator methods (p < 0.0001).5

    5 Poster Abstract Session: HAI Surveillance and Public Reporting, October 10, 2014. https://idsa.confex.com/idsa/2014/webprogram/Paper46611.html.

    We note that although a “growing clinic” might have an NHSN denominator that is low in one month (if there is a drastic increase during that month), the denominator should be a good estimator of the number of patients at the facility for all subsequent months. If the growth is more gradual, then the NHSN denominator is still a relatively good estimator of the monthly census. The only way this would not be the case is if the census fluctuated drastically within each month so that the first 2 days were always somehow different from the rest of the month (for example, if patients were always added in the middle of the month and then removed before the start of the next month). We have not encountered a systematically occurring example of this type of phenomenon.

    Comment: One commenter recommended that CMS add some patient-level exclusions to the NHSN BSI Clinical Measure, and specifically urged CMS to exclude positive blood cultures for transient patients. They also urged CMS to consider implementing a threshold for number of patient months for a facility to qualify for the NHSN BSI Clinical Measure.

    Response: NHSN is designed to capture dialysis events for all dialysis patients, including transient patients. BSIs are important in all patients, including transient patients, and meeting the “transient” definition does not preclude a patient from having an infection that could have been acquired in the dialysis center. Measure inclusions and exclusions were considered by the NQF when it reviewed and endorsed the BSI measure. NHSN has a field facilities can use to identify dialysis events that occurred in transient patients. This information can be used to inform internal QI efforts. See the dialysis event protocol here: https://www.cdc.gov/nhsn/pdfs/pscmanual/8pscdialysiseventcurrent.pdf. We use claims to determine whether facilities meet the 11-patient minimum to be eligible for the NHSN BSI Clinical Measure.

    Final Rule Action: After considering the comments received, we are finalizing our proposal to include the NHSN BSI Measure Topic in the ESRD QIP. This new Measure Topic will consist of the NHSN Dialysis Event Reporting Measure and the NHSN BSI Clinical Measure, as described above. We believe these two measures are mutually reinforcing in that one measure rewards reporting and the other uses reported data to assess facilities' efforts to prevent dialysis events.

    3. New Safety Measure Domain

    We currently use two domains in the ESRD QIP for purposes of scoring. The first domain, termed the Clinical Measure Domain, is defined as an aggregated metric of facility performance on the clinical measures and measure topics in the ESRD QIP, and we use subdomains within the Clinical Measure Domain for the purposes of calculating the Clinical Measure Domain score (79 FR 66213). Second is a Reporting Measure Domain, in which scores on reporting measures are weighted equally (79 FR 66218 through 66219).

    In section IV.C.2 of the proposed rule (81 FR 42825), we described the NHSN BSI Measure Topic. We believe that this measure topic, consisting of both the NHSN Dialysis Event Reporting Measure and the NHSN BSI Clinical Measure, is fundamentally different from the other measures and measure topics included in the ESRD QIP's measure set. The two measures included in this measure topic are inextricably linked because data from the reporting measure is used to calculate the clinical measure. No other reporting measures currently included in the ESRD QIP's measure set are used for this purpose. Placing these two measures together in a single measure topic, to which we can assign a single measure topic score, creates the important linkage between the two measures and balances the competing incentives involved: incentivizing complete and accurate reporting of data to NHSN while also incentivizing facilities to achieve high scores on the clinical measure. Therefore, the measure topic does not appropriately belong in either the Reporting Measure Domain or the Clinical Measure Domain.

    Because of these fundamental differences, we proposed to remove the Safety Subdomain from the Clinical Measure Domain for PY 2019 and future payment years. We proposed that the Safety Subdomain will instead be a new, third Domain, separate from and in addition to the existing Clinical and Reporting Measure Domains. Additionally, we proposed that facilities will receive a Safety Measure Domain score in addition to their Reporting Measure Domain and Clinical Measure Domain scores. We describe our proposed scoring methodology more fully in section IV.C.6 of our proposed rule (81 FR 42826), and note that these three Domain scores will be combined and weighted to produce a Total Performance Score (TPS) for each facility.

    We sought comments on these proposals. The comments and our responses are set forth below.

    Comment: Several commenters supported CMS' goals of reducing BSIs and specifically supported the proposed creation of the new Safety Domain separate from the Clinical and Reporting Domains because the NHSN BSI Measure Topic does not belong solely in either the Reporting or the Clinical Domain. They added that inclusion of both the Clinical and the Reporting measures for NHSN will encourage improvement and provide additional incentives for complete reporting.

    Response: We thank the commenters for their support, and we agree that inclusion of both the Clinical and the Reporting Measures for NHSN will encourage improvement and provide additional incentives for complete reporting.

    Comment: One commenter did not support CMS's proposal to establish a safety measure domain due to the reliability and validity issues of the NHSN BSI measure. The commenter further stated they do not believe the reintroduction of the NHSN Dialysis Event Reporting Measure is appropriate or necessary, nor do they believe the Measure Topic is necessary and they therefore believe the creation of the Safety Measure Domain is also unnecessary.

    Response: We have addressed the concerns raised by the commenter about the reliability and validity of the NHSN BSI Clinical Measure above (see section IV.D.1.a.). We believe that combining the NHSN Dialysis Event Reporting Measure with the NHSN BSI Clinical Measure in a single NHSN BSI Measure Topic, as proposed, within the proposed Safety Measure Domain is the best way to ensure that the incentives for complete and accurate reporting and for the prevention of BSIs are appropriately calibrated. Combining the clinical and reporting measures into a hybrid measure topic accomplishes this objective because it reflects both aggregated performance and reporting requirements.

    Final Rule Action: After careful consideration of the comments received, we are finalizing our proposal to remove the Safety Subdomain from the Clinical Measure Domain for PY 2019 and future payment years, and to add a new third domain, the Safety Measure Domain, to the ESRD QIP's scoring methodology. We believe that this approach is the best way to ensure complete and accurate reporting, while also incentivizing facilities to lower the incidence of BSIs among their patients.

    4. Scoring for the NHSN BSI Measure Topic

    We proposed to assign significant weight to the NHSN Dialysis Event Reporting Measure in the overall NHSN BSI Measure Topic score. However, our proposed weighting scheme also reflects our goal to incentivize strong performance on the clinical measure. For these reasons, we proposed that the NHSN Dialysis Event Reporting Measure be weighted at 40 percent of the measure topic score and the NHSN BSI Clinical Measure be weighted at 60 percent of the measure topic score. The formula below depicts how the NHSN BSI Measure Topic would be scored.

    Proposed Formula to Derive the NHSN BSI Measure Topic Score: [NHSN Dialysis Event Reporting Measure Score * 0.4] + [NHSN BSI Clinical Measure Score * 0.6] = Measure Topic Score
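    For illustration only (this is not part of the regulatory text), the weighted combination above can be written as a short calculation; the scores used in the example are hypothetical.

```python
def nhsn_bsi_measure_topic_score(reporting_score: float, clinical_score: float) -> float:
    """Combine the two NHSN measure scores using the finalized 40/60 weights."""
    return 0.4 * reporting_score + 0.6 * clinical_score

# Hypothetical example: a facility scoring 10 on the reporting measure and
# 8 on the clinical measure would receive 0.4 * 10 + 0.6 * 8 = 8.8.
print(nhsn_bsi_measure_topic_score(10, 8))  # 8.8
```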

    We sought comment on this proposal. The comments and our responses are set forth below.

    Comment: One commenter supported CMS's proposal for scoring the NHSN BSI Measure Topic and believes that the 40/60 split between the Reporting and Clinical Measures will encourage both accurate reporting and strong clinical performance.

    Response: We thank the commenter for their support, and we agree that assigning 40 percent of the Measure Topic Score to the NHSN Dialysis Event Reporting Measure and 60 percent of the Measure Topic Score to the NHSN BSI Clinical Measure is the best way to incentivize both strong performance on the clinical measure and thorough and accurate reporting.

    Final Rule Action: Based upon the comments received, we will finalize the scoring for the NHSN BSI Measure Topic as proposed. We will assign 40 percent of the measure topic score to the NHSN Dialysis Event Reporting Measure and 60 percent of the measure topic score to the NHSN BSI Clinical Measure.

    5. Performance Standards, Achievement Thresholds, and Benchmarks for the Clinical Measures Finalized for the PY 2019 ESRD QIP

    In the calendar year (CY) 2016 ESRD PPS final rule, we finalized that for PY 2019, the performance standards, achievement thresholds, and benchmarks for the clinical measures would be set at the 50th, 15th and 90th percentile, respectively, of national performance in CY 2015, because this will give us enough time to calculate and assign numerical values to the proposed performance standards for the PY 2019 program prior to the beginning of the performance period. (80 FR 69060). At the time the proposed rule was published, we did not have the necessary data to assign numerical values to the proposed performance standards, achievement thresholds, and benchmarks because we did not yet have complete data from CY 2015. Nevertheless, we were able to estimate these numerical values based on the most recent data available at the time. For the Vascular Access Type, Hypercalcemia, NHSN BSI and ICH CAHPS clinical measures, this data came from the period of January through December 2015. For the SRR and STrR clinical measures, this data came from the period of January through December 2014. In Table 5, we provided the estimated numerical values for all of the finalized PY 2019 ESRD QIP clinical measures.

    Table 5—Estimated Numerical Values for the Performance Standards for the PY 2019 ESRD QIP Clinical Measures Using the Most Recently Available Data

    Measure | Achievement threshold | Benchmark | Performance standard
    Vascular Access Type: %Fistula | 53.72% | 79.62% | 66.04%
    Vascular Access Type: %Catheter | 17.06% | 2.89% | 9.15%
    Hypercalcemia | 4.21% | 0.32% | 1.85%
    NHSN Bloodstream Infection SIR | 1.812 | 0 | 0.861
    Standardized Readmission Ratio | 1.276 | 0.629 | 0.998
    Standardized Transfusion Ratio | 1.470 | 0.431 | 0.923
    Comprehensive Dialysis Adequacy Measure Set | 86.85% | 97.19% | 92.53%
    ICH CAHPS: Nephrologists' Communication and Caring | 56.41% | 77.06% | 65.89%
    ICH CAHPS: Quality of Dialysis Center Care and Operations | 52.88% | 71.21% | 60.75%
    ICH CAHPS: Providing Information to Patients | 72.09% | 85.55% | 78.59%
    ICH CAHPS: Overall Rating of Nephrologists | 49.33% | 76.57% | 62.22%
    ICH CAHPS: Overall Rating of Dialysis Center Staff | 48.84% | 77.42% | 62.26%
    ICH CAHPS: Overall Rating of the Dialysis Facility | 51.18% | 80.58% | 65.13%
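    As a non-regulatory illustration of the 50th/15th/90th percentile rule described above, the sketch below derives an achievement threshold, performance standard, and benchmark from a set of national facility rates. The sample rates are invented, and the handling of lower-is-better measures (such as %Catheter or the SIR) reflects an assumption that their percentiles are taken from the opposite end of the rate distribution; the authoritative values remain those published in Tables 5 and 6.

```python
import numpy as np

def percentile_standards(rates, lower_is_better=False):
    """Return (achievement threshold, performance standard, benchmark) using
    the 15th, 50th, and 90th percentiles of national facility performance."""
    # For lower-is-better measures, better performance means a lower rate, so
    # the 15th/90th performance percentiles map to the 85th/10th rate percentiles.
    pct = [85, 50, 10] if lower_is_better else [15, 50, 90]
    threshold, standard, benchmark = np.percentile(rates, pct)
    return threshold, standard, benchmark

# Hypothetical fistula-use rates (percent) for a handful of facilities.
rates = [55.0, 62.5, 66.0, 71.3, 74.8, 80.1]
print(percentile_standards(rates))
```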

    In previous rulemaking, we have finalized policies to the effect that if final numerical values for the performance standard, achievement threshold, and/or benchmark were worse than they were for that measure in the previous year of the ESRD QIP, then we would substitute the previous year's performance standard, achievement threshold, and/or benchmark for that measure. We finalized this policy because we believe that the ESRD QIP should not have lower performance standards than in previous years. In light of recent discussions with CDC, we have determined that in certain cases it may be appropriate to re-baseline the NHSN BSI Clinical Measure, such that expected infection rates are calculated on the basis of a more recent year's data. In such cases, numerical values assigned to performance standards may appear to decline, even though they represent higher standards for infection prevention. For this reason, with the exception of the NHSN BSI Clinical Measure, we proposed to substitute the PY 2018 performance standard, achievement threshold, and/or benchmark for any measure that has a final numerical value for a performance standard, achievement threshold, and/or benchmark that is worse than it was for that measure in the PY 2018 ESRD QIP. We also proposed that the performance standards for the NHSN BSI Clinical Measure for PY 2019 will be used irrespective of what values were assigned to the performance standards for PY 2018.

    We sought comments on this proposal. The comments and our responses are set forth below.

    Comment: Several commenters supported our continued reliance on the methodology used to set the Performance Standard, Achievement Threshold, and Benchmark at the 50th, 15th and 90th percentiles respectively of national facility performance for PY 2019. One commenter requested that CMS clarify in Table 2 of the proposed rule (81 FR 42826) whether the Benchmarks, Achievement Thresholds and Performance Standards listed for the ICH CAHPS measures are the percent of responses or the percent of top box responses. Another commenter asserted that if the national average for the NHSN BSI Clinical Measure is 5.15, then the benchmark of an SIR of 0.0 cannot be correct.

    Response: We thank the commenters for their support. In Table 2 of the proposed rule (81 FR 42826), the Benchmarks, Achievement Thresholds and Performance Standards listed for the ICH CAHPS measures represent the percent of top box responses. Table 2 in the proposed rule (81 FR 42826) indicates that the Achievement Threshold for the NHSN BSI SIR is 1.812, the Benchmark is 0, and the Performance Standard (that is, the average national performance) is 0.861. These values were estimated using the most recently available data at the time the proposed rule was published, and we have verified that they were calculated correctly.

    Final Rule Action: Since the time that the Proposed Rule was published, we have collected the data needed to calculate finalized performance standards for the PY 2019 ESRD QIP. After consideration of the comments, we will finalize the performance standards, achievement thresholds, and benchmarks for the clinical measures included in the PY 2019 ESRD QIP as updated below, using the most recently available data. Table 6 below lists the finalized numerical values for all of the finalized PY 2019 ESRD QIP clinical measures.

    Table 6—Finalized Numerical Values for the Performance Standards for the PY 2019 ESRD QIP Clinical Measures Using the Most Recently Available Data

    Measure | Achievement threshold | Benchmark | Performance standard
    Vascular Access Type: %Fistula | 53.66% | 79.62% | 65.93%
    Vascular Access Type: %Catheter | 17.20% | 2.95% | 9.19%
    Kt/V Composite | 87.22% | 97.74% | 93.16%
    Hypercalcemia | 4.15% | 0.32% | 1.83%
    Standardized Transfusion Ratio | 1.564 | 0.336 | 0.894
    Standardized Readmission Ratio | 1.289 | 0.624 | 0.998
    NHSN Bloodstream Infection | 1.738 | 0 | 0.797
    SHR measure | 1.244 | 0.665 | 0.967
    ICH CAHPS: Nephrologists' Communication and Caring | 56.41% | 76.93% | 65.87%
    ICH CAHPS: Quality of Dialysis Center Care and Operations | 52.88% | 71.15% | 60.74%
    ICH CAHPS: Providing Information to Patients | 72.10% | 85.54% | 78.54%
    ICH CAHPS: Overall Rating of Nephrologists | 49.37% | 76.54% | 62.17%
    ICH CAHPS: Overall Rating of Dialysis Center Staff | 48.63% | 77.41% | 62.24%
    ICH CAHPS: Overall Rating of the Dialysis Facility | 51.10% | 80.45% | 65.02%
    Data sources: VAT measures: 2015 Medicare claims; SRR, STrR: 2015 Medicare claims; Kt/V: 2015 Medicare claims and 2015 CROWNWeb; Hypercalcemia: 2015 CROWNWeb; NHSN: CDC; SHR: 2014 Medicare claims; CAHPS: 2015 ICH CAHPS surveys.

    6. Weighting for the Safety Measure Domain and Clinical Measure Domain for PY 2019

    As discussed in section IV.C.3 of the proposed rule (81 FR 42825), we proposed to remove the Safety Subdomain from the Clinical Measure Domain and establish it as a third domain alongside the Clinical Measure and Reporting Measure Domains for the purposes of scoring facilities and determining Total Performance Scores (TPSs).

    In light of stakeholder comments we have received about the prevalence of under-reporting for the NHSN BSI Clinical Measure, as well as the tradeoffs (discussed more fully in section IV.C.1.a. of the proposed rule (81 FR 42823)) between our desire to maintain strong incentives for facilities to report bloodstream infections and our desire to prevent those infections, and because the Safety Domain comprises a single measure topic, we believe it is necessary to reduce the weight of the Safety Measure Domain as a percentage of the TPS. However, we believe it is important to maintain as much consistency as possible in the ESRD QIP scoring methodology. Therefore, we proposed to reduce the weight of the Safety Measure Domain gradually: to 15 percent of the TPS in PY 2019, and then further in PY 2020, as proposed below. We further proposed that the Clinical Measure Domain will be weighted at 75 percent of the TPS, and the Reporting Measure Domain will continue to be weighted at 10 percent of the TPS because we do not want to diminish the incentives to report data on the reporting measures.
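    As a non-regulatory sketch of the domain weighting described above, the code below combines the three domain scores into a TPS using the 75/15/10 split, with the half-up rounding policy noted later in this section. Treating each domain score as a number on a 0 to 100 scale is an assumption made for illustration.

```python
import math

DOMAIN_WEIGHTS = {
    "clinical": 0.75,   # Clinical Measure Domain
    "safety": 0.15,     # Safety Measure Domain (new for PY 2019)
    "reporting": 0.10,  # Reporting Measure Domain
}

def total_performance_score(domain_scores):
    """Weight the three domain scores into a TPS, rounding halves up."""
    tps = sum(DOMAIN_WEIGHTS[d] * s for d, s in domain_scores.items())
    return math.floor(tps + 0.5)

# Hypothetical domain scores on a 0-100 scale.
print(total_performance_score({"clinical": 80, "safety": 70, "reporting": 90}))  # 80
```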

    In the CY 2015 ESRD PPS final rule, we finalized the criteria we will use to assign weights to measures in a facility's Clinical Measure Domain score (79 FR 66214 through 66216). Under these criteria, we take into consideration: (1) The number of measures and measure topics in a subdomain; (2) how much experience facilities have had with the measures; and (3) how well the measures align with CMS' highest priorities for quality improvement for patients with ESRD.

    With respect to criterion 3, one of our top priorities for improving the quality of care furnished to ESRD patients is to increase the number and significance of both outcome and patient experience of care measures, because these measures track important patient outcomes instead of focusing on the implementation and achievement of clinical processes that may not result in improved health for patients.6 We believe that a shift toward outcome measures will establish a sounder connection between payment and clinical results that matter to patients. We similarly believe that it is important to prioritize measures of patient experience because high performance on these measures improves clinical outcomes and patient retention. Accordingly, we believe that increasing the weight of outcome and patient experience of care measures in the ESRD QIP measure set will make facilities that fail to perform well on these measures much more likely to receive a payment reduction.

    6 CMS Quality Strategy, page 10, 2016. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/QualityInitiativesGenInfo/Downloads/CMS-Quality-Strategy.pdf.

    In light of the proposed addition of the Safety Measure Domain as well as the policy priorities discussed above, we proposed to change the Clinical Measure Domain weighting for the PY 2019 ESRD QIP. Specifically, we proposed to increase the weight of the Vascular Access Type, Dialysis Adequacy, and Hypercalcemia measures by 1 percentage point each in the Clinical Measure Domain. This will result in a minor reduction of the weight that each of these measures receives as a percentage of the TPS, which is consistent with our policy to assign greater weight to outcome and experience of care measures. We also proposed to apportion an additional 6 percentage points of the Clinical Measure Domain to each of the standardized readmission ratio (SRR) and In-Center Hemodialysis Consumer Assessment of Healthcare Providers and Systems (ICH CAHPS) measures, and to apportion the remaining 5 percentage points to the standardized transfusion ratio (STrR) measure. We believe this is appropriate because it distributes points as equally as possible among the outcome and experience of care measures, with a slight preference for SRR and ICH CAHPS because facilities will have had more experience with these measures than they will have had with STrR.

    For the reasons discussed above, we proposed to use the following weighting system in Table 7 below, for calculating a facility's Clinical Measure Domain score for PY 2019. For comparison, in Table 8, we have also provided the Measure Weights we originally finalized for PY 2019 in the CY 2016 ESRD PPS Final Rule (80 FR 69063).

    Table 7—Proposed Clinical Measure Domain Weighting for the PY 2019 ESRD QIP

    Measures/Measure topics by subdomain | Measure weight in the Clinical Measure Domain score (proposed for PY 2019) (%) | Measure weight as percent of TPS (proposed for PY 2019) (%)
    Patient and Family Engagement/Care Coordination Subdomain | 42 |
    ICH CAHPS measure | 26 | 19.5
    SRR measure | 16 | 12
    Clinical Care Subdomain | 58 |
    STrR measure | 12 | 9
    Dialysis Adequacy measure | 19 | 14.25
    Vascular Access Type measure topic | 19 | 14.25
    Hypercalcemia measure | 8 | 6
    Note: For PY 2019, we proposed that the Clinical Domain will make up 75 percent of a facility's TPS. The percentages listed in this Table represent the measure weight as a percent of the Clinical Domain Score.
    Table 8—Finalized Clinical Measure Domain Weighting for the PY 2019 ESRD QIP [Finalized in the CY 2016 ESRD PPS Final Rule]

    Measures/Measure topics by subdomain | Measure weight in the Clinical Measure Domain score (finalized for PY 2019) (%) | Measure weight as percent of TPS (finalized for PY 2019) (%)
    Safety Subdomain | 20 |
    NHSN BSI Clinical Measure | 20 | 18
    Patient and Family Engagement/Care Coordination Subdomain | 30 |
    ICH CAHPS measure | 20 | 18
    SRR measure | 10 | 9
    Clinical Care Subdomain | 50 |
    STrR measure | 7 | 6.3
    Dialysis Adequacy measure | 18 | 16.2
    Vascular Access Type measure topic | 18 | 16.2
    Hypercalcemia measure | 7 | 6.3

    In the CY 2016 ESRD PPS Final Rule, we finalized a requirement that, to be eligible to receive a TPS, a facility had to be eligible for at least one reporting measure and at least one clinical measure (80 FR 69064). With the proposed addition of the Safety Measure Domain for PY 2019, we proposed a change to this policy. Specifically, for PY 2019, we proposed that to be eligible to receive a TPS, a facility must be eligible for at least one measure in the Clinical Measure Domain and at least one measure in the Reporting Measure Domain. As such, facilities do not need to receive a score on a measure in the Safety Measure Domain in order to be eligible to receive a TPS. The NHSN BSI Clinical Measure and the NHSN Dialysis Event Reporting Measure have the same eligibility requirements (specifically, both require that a facility have treated at least 11 eligible patients during the performance period). We proposed this change in policy to avoid a situation in which a facility would be eligible to receive a TPS when it receives a score for only a single measure topic. We did not propose any changes to the policy that a facility's TPS will be rounded to the nearest integer, with half of an integer being rounded up.
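    The eligibility rule described above reduces to a simple check, illustrated in the non-regulatory sketch below; the argument names are hypothetical and the counting of eligible measures is simplified.

```python
def eligible_for_tps(clinical_measures_eligible: int, reporting_measures_eligible: int) -> bool:
    """TPS eligibility for PY 2019: at least one Clinical Measure Domain measure
    and at least one Reporting Measure Domain measure; Safety Measure Domain
    eligibility is not required."""
    return clinical_measures_eligible >= 1 and reporting_measures_eligible >= 1

# A facility eligible only for the NHSN BSI Measure Topic (Safety Measure Domain)
# would not receive a TPS under this policy.
print(eligible_for_tps(clinical_measures_eligible=0, reporting_measures_eligible=0))  # False
```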

    We sought comments on these proposals. The comments and our responses for these proposals are set forth below.

    Comment: Two commenters did not support our proposal for weighting the proposed Safety Domain within the TPS or our proposal to change the weighting of the Clinical Measure Domain for PY 2019. They suggested that CMS consider re-weighting the Subdomains in the Clinical Measure Domain and reduce the weight of the Patient and Family Engagement/Care Coordination Subdomain because the measures within this subdomain—Readmissions and ICH CAHPS—may have no relation to clinical performance. Specifically, one commenter argued that the SRR measure counts readmissions due to foot ulcers or cancer treatment that may have nothing to do with facility performance. Likewise, the commenter argued, Patient Satisfaction survey scores may be skewed by end-of-life grief, loss, chronic illness, anger with a diagnosis, organic brain diagnoses, or other cognitive disabilities. For these reasons, the commenter urged CMS to reduce the weight of the Patient and Family Engagement/Care Coordination Subdomain to 20 percent or less of the Clinical Measure Domain score and give more weight to the clinical measures themselves. One commenter also argued that the current weighting proposal is not balanced and recommended that CMS either reduce the weight of the Patient and Family Engagement Subdomain back to 30 percent, consider adding another measure to the subdomain, or reduce the number of completed ICH CAHPS surveys needed to be eligible for that measure.

    Response: We thank commenters for their suggestions. We proposed the weighting structure for several reasons, outlined in more detail in the proposed rule. We carefully considered the criteria finalized in the CY 2015 ESRD PPS Final Rule (79 FR 66213 through 66216) to construct the proposed scoring methodology. Specifically, we considered the number of measures and measure topics within a subdomain, the experience facilities have had with the measures, and how well the measures align with CMS' highest priorities for quality improvement for patients with ESRD. We have weighted the SRR and ICH CAHPS measures as proposed because facilities will have had more experience with these measures than they will have had with the STrR measure, and because the focus on patient satisfaction and care coordination constitutes an important policy priority for CMS. Furthermore, we disagree with the commenters that the SRR measure has no relation to clinical performance. The SRR measure is carefully risk adjusted to account for comorbidities and patient characteristics relevant to the ESRD population. Additionally, while the causes of readmissions are multifactorial, our analyses demonstrate that facilities are able to exert an influence on readmissions that is roughly equivalent to that exerted by the discharging acute care hospital. We believe that coordination of care requires interaction between multiple providers, including those discharging the patient and those continuing patient care following discharge. While cultural factors and patient noncompliance can lead to hospital admissions, this is no less true for acute care hospitals, long-term care hospitals, inpatient rehabilitation facilities, nursing homes, and home health agencies, and it does not negate the deleterious consequences readmissions can have for those patients. At this time there are no additional measures that can appropriately be added to the Patient and Family Engagement Subdomain. However, we are constantly working with the kidney care community to identify measures that are appropriate for the ESRD QIP. Finally, the ICH CAHPS measure cannot be reliably scored on the basis of fewer than 30 completed surveys, so we do not believe it is appropriate to reduce this aspect of the minimum data requirements for the measure. It is important to note that, in the event that a facility does not obtain the 30 completed surveys needed to score the ICH CAHPS measure, the weight allocated to ICH CAHPS in the TPS will be distributed evenly among the measures on which the facility received a score.
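    The sketch below is one simplified, non-regulatory reading of the redistribution described in the last sentence of the response above: the TPS weight of an unscored measure is spread evenly across the measures the facility did score. The weights shown are the proposed PY 2019 measure weights as a percent of the TPS from Table 7; the actual ESRD QIP re-weighting rules are defined in the program's scoring specifications.

```python
PY2019_TPS_WEIGHTS = {              # measure weight as a percent of the TPS (Table 7)
    "ICH CAHPS": 19.5,
    "SRR": 12.0,
    "STrR": 9.0,
    "Dialysis Adequacy": 14.25,
    "Vascular Access Type": 14.25,
    "Hypercalcemia": 6.0,
}

def redistribute(weights, unscored):
    """Spread the weight of unscored measures evenly across the scored measures."""
    scored = {m: w for m, w in weights.items() if m not in unscored}
    freed = sum(weights[m] for m in unscored)
    bump = freed / len(scored)
    return {m: w + bump for m, w in scored.items()}

# Example: a facility with fewer than 30 completed ICH CAHPS surveys.
print(redistribute(PY2019_TPS_WEIGHTS, {"ICH CAHPS"}))
```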

    Comment: Two commenters supported CMS's criteria for weighting measures but recommended adding three additional criteria: (1) Strength of Evidence; (2) Opportunity for Improvement; and (3) Clinical Significance.

    Response: We thank the commenters for their support. We agree with the commenters that these criteria encompass important considerations for evaluating measures. As stated in the CY 2015 ESRD PPS Final Rule with comment period (79 FR 66216) and the CY 2016 ESRD PPS Final Rule with comment period (80 FR 69063), we take these criteria into account when making decisions about whether to adopt a measure in the ESRD QIP, because it would be inappropriate to adopt a measure that did not meet these criteria. Based on this understanding, we developed the three criteria discussed above for determining subdomain weighting within the Clinical Measure Domain (80 FR 37849). We believe these criteria account for the programmatic and operational concerns associated with scoring facilities on the ESRD QIP while also reflecting our focus on improving the quality of care provided to ESRD patients. This analysis also implicitly includes a review of the strength of the clinical evidence supporting the measure, the opportunity for improvement among facilities, and the clinical significance of the measure because these issues are inextricably linked with an assessment of the measure's appropriateness and importance of measurement within the ESRD QIP. Because the additional criteria recommended by the commenter are used as a threshold for adopting ESRD QIP measures and are sub-components of the three previously finalized measure weighting criteria, we do not believe it would be appropriate to also factor these criteria into decisions about how much weight to give measures in a facility's Clinical Domain Score.

    Final Rule Action: After consideration of the comments, we will finalize the weighting structure for PY 2019 as proposed. We are also finalizing the new policy described above that to be eligible to receive a TPS, a facility must be eligible for at least one measure in the Clinical Measure Domain and at least one measure in the Reporting Measure Domain. This policy will ensure that facilities will not be eligible to receive a TPS if they only receive a score for a single measure topic.

    The weights we are finalizing appear in Table 9, below:

    Table 9—Final Clinical Measure Domain Weighting for the PY 2019 ESRD QIP

    Measures/Measure topics by subdomain | Measure weight in the Clinical Measure Domain score (final for PY 2019) (%) | Measure weight as percent of TPS (final for PY 2019) (%)
    Patient and Family Engagement/Care Coordination Subdomain | 42 |
    ICH CAHPS measure | 26 | 19.5
    SRR measure | 16 | 12
    Clinical Care Subdomain | 58 |
    STrR measure | 12 | 9
    Dialysis Adequacy measure | 19 | 14.25
    Vascular Access Type measure topic | 19 | 14.25
    Hypercalcemia measure | 8 | 6
    Note: For PY 2019, the Clinical Domain will make up 75 percent of a facility's TPS. The percentages listed in this Table represent the measure weight as a percent of the Clinical Domain Score.
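    As a non-regulatory worked illustration of how the Table 9 weights combine, the sketch below rolls a set of hypothetical measure scores into a Clinical Measure Domain score. Treating each measure as scored out of 10 points is an assumption made for illustration; the official methodology is the one depicted in the figures in the next section.

```python
CLINICAL_DOMAIN_WEIGHTS = {         # percent of the Clinical Measure Domain score (Table 9)
    "ICH CAHPS": 26,
    "SRR": 16,
    "STrR": 12,
    "Dialysis Adequacy": 19,
    "Vascular Access Type": 19,
    "Hypercalcemia": 8,
}

def clinical_domain_score(measure_scores, max_points=10):
    """Weighted combination of measure scores, expressed on a 0-100 scale.
    Assumes each measure is scored out of `max_points` (illustrative only)."""
    total = sum(CLINICAL_DOMAIN_WEIGHTS[m] * s for m, s in measure_scores.items())
    return total / max_points   # the weights sum to 100, so this lands on 0-100

# Hypothetical measure scores for a well-performing facility.
scores = {"ICH CAHPS": 9, "SRR": 8, "STrR": 7, "Dialysis Adequacy": 10,
          "Vascular Access Type": 9, "Hypercalcemia": 10}
print(clinical_domain_score(scores))  # 88.7
```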
    7. Example of the Final PY 2019 ESRD QIP Scoring Methodology

    In this section, we provide examples to illustrate the final scoring methodology for PY 2019. Figures 1 through 4 illustrate how to calculate the Clinical Measure Domain score, the Reporting Measure Domain score, the Safety Measure Domain score, and the TPS. Figure 5 illustrates the full scoring methodology for PY 2019. Note that for this example, Facility A, a hypothetical facility, has performed very well.

    Figure 1 illustrates the methodology used to calculate the Clinical Measure Domain score for Facility A.

    ER04NO16.304

    Figure 2 illustrates the general methodology for calculating the Reporting Measure Domain score for Facility A.

    ER04NO16.305

    Figure 3 illustrates the methodology used for calculating the Safety Measure Domain score for Facility A.

    ER04NO16.306

    Figure 4 illustrates the methodology used to calculate the TPS for Facility A.

    ER04NO16.307

    Figure 5 illustrates the full scoring methodology for PY 2019.

    ER04NO16.308

    8. Payment Reductions for the PY 2019 ESRD QIP

    Section 1881(h)(3)(A)(ii) of the Act requires the Secretary to ensure that the application of the ESRD QIP scoring methodology results in an appropriate distribution of payment reductions across facilities, such that facilities achieving the lowest TPSs receive the largest payment reductions. In the CY 2016 ESRD PPS final rule, we finalized our proposal for calculating the minimum TPS for PY 2019 and future payment years (80 FR 69067). Under our current policy, a facility will not receive a payment reduction if it achieves a minimum TPS that is equal to or greater than the total of the points it would have received if: (i) It performs at the performance standard for each clinical measure; and (ii) it receives the number of points for each reporting measure that corresponds to the 50th percentile of facility performance on each of the PY 2017 reporting measures (80 FR 69067).

    We were unable to calculate a minimum TPS for PY 2019 in the CY 2016 ESRD PPS final rule because we were not yet able to calculate the performance standards for each of the clinical measures. We therefore stated that we would publish the minimum TPS for the PY 2019 ESRD QIP in the CY 2017 ESRD PPS final rule (80 FR 69068).

    Based on the estimated performance standards listed above, we estimated that a facility must meet or exceed a minimum TPS of 59 for PY 2019 in order to avoid a payment reduction. For all of the clinical measures except the SRR and STrR, these data come from CY 2015. The data for the SRR and STrR clinical measures come from CY 2014 Medicare claims. For the ICH CAHPS clinical measure, we set the performance standard to zero for the purposes of determining this minimum TPS, because we are not able to establish a numerical value for the performance standard through the rulemaking process before the beginning of the PY 2019 performance period. We proposed that a facility failing to meet the minimum TPS, as established in the CY 2017 ESRD PPS final rule, will receive a payment reduction based on the estimated TPS ranges indicated in Table 10.

    Table 10—Estimated Payment Reduction Scale for PY 2019 Based on the Most Recently Available Data

    Total performance score | Reduction (%)
    100-59 | 0.0
    58-49 | 0.5
    48-39 | 1.0
    38-29 | 1.5
    28-0 | 2.0

    We sought comments on these proposals. The comments and our responses are set forth below.

    Comment: Two commenters did not support our proposed payment reductions for the PY 2019 ESRD QIP. One commenter expressed the following concerns with the proposed Scoring Methodology. First, they are concerned about the unresolved methodological issues surrounding the validity and reliability of the NHSN BSI Measure. Second, CROWNWeb data transmission issues remain a concern. Third, CMS seems to be pursuing a strategy of including ESRD QIP measures that are outside the dialysis facility's direct sphere of influence. One commenter argued that all three of these issues could result in an artificial deterioration in dialysis facility performance with respect to the ESRD QIP performance scoring, in the absence of a demonstrable change in the quality of care delivered. One commenter urged CMS to delay increasing the stringency of ESRD QIP scoring until these issues have been addressed. Another commenter argued that the current scoring methodology unfairly penalizes small facilities, particularly those that are affiliated with academic medical centers, and they were troubled by CMS's assertion that the care they provide to their patients is anything less than high quality. One commenter suggested that TPSs should not be calculated for low-volume dialysis programs because doing so may cause an inappropriate distribution of payments across facilities, which is contrary to Section 1881(h)(3)(A)(ii) of the Act.

    Response: We thank commenters for sharing their concerns. We have several policies in place designed to address them. Specifically, the SFA is designed to ensure that small facilities, many of which are affiliated with academic medical centers, are not adversely affected by a small number of outlier patients. We have addressed concerns about the reliability and validity of the NHSN BSI Clinical Measure in section IV.D.1.a of this rule. We believe it is important to include even the low-volume dialysis facilities in the ESRD QIP and to calculate a TPS for them so that these facilities receive appropriate incentives to deliver high quality care to their patients. We are continually striving to improve the data submission process in CROWNWeb to make the process easier for facilities, and we note that the low rejection rates achieved by certain batch-submitting organizations demonstrate that CROWNWeb is equipped to accept this mode of data submission. Additionally, we believe that all of the measures in the ESRD QIP measure set evaluate quality of care that is within the dialysis facility's sphere of influence, including the SRR measure, because our analyses demonstrate that the facility exerts an influence on readmissions roughly equivalent to that exerted by the discharging acute care hospital. Finally, we are constantly examining our policies and methodologies to ensure that they fairly and accurately assess the quality of care provided by dialysis facilities, and we do not believe that the proposed payment reduction policies constitute increased stringency because this policy has remained constant since the PY 2014 program (76 FR 70282).

    Comment: Several commenters supported our continuation of the current policy for determining payment reductions, including the process for setting the minimum TPS. One commenter argued that it is critical to ensure that the ESRD QIP performance scoring is well thought out and fair to all facilities, including low-volume facilities that serve sicker-than-average populations.

    Response: We thank the commenters for their support and we believe that the ESRD QIP's scoring methodology is fair to all facilities. We also note that we finalized the SFA specifically to ensure that low-volume facilities are not unfairly penalized for a few outlier patients who could significantly impact their measure scores.

    Final Rule Action: After careful consideration of the comments received and an analysis of the most recently available data, we are finalizing that the minimum TPS for PY 2019 will be 60. We are also finalizing the payment reduction scale shown in Table 11.

    Table 11—Payment Reduction Scale for PY 2019 Based on the Most Recently Available Data

    Total performance score | Reduction (%)
    100-60 | 0.0
    59-50 | 0.5
    49-40 | 1.0
    39-30 | 1.5
    29-0 | 2.0
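    The finalized scale in Table 11 maps directly to a lookup, shown in the non-regulatory sketch below; the TPS is assumed to already be an integer between 0 and 100.

```python
def payment_reduction(tps: int) -> float:
    """Return the PY 2019 payment reduction (percent) for an integer TPS,
    per the finalized scale in Table 11 (minimum TPS of 60)."""
    if tps >= 60:
        return 0.0
    if tps >= 50:
        return 0.5
    if tps >= 40:
        return 1.0
    if tps >= 30:
        return 1.5
    return 2.0

print(payment_reduction(60))  # 0.0
print(payment_reduction(59))  # 0.5
```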
    9. Data Validation

    One of the critical elements of the ESRD QIP's success is ensuring that the data submitted to calculate measure scores and TPSs are accurate. We began a pilot data validation program in CY 2013 for the ESRD QIP, and procured the services of a data validation contractor that was tasked with validating a national sample of facilities' records as reported to the Consolidated Renal Operations in a Web-Enabled Network (CROWNWeb). For validation of CY 2014 data, our first priority was to develop a methodology for validating data submitted to CROWNWeb under the pilot data validation program. That methodology was fully developed and adopted through the rulemaking process. For the PY 2016 ESRD QIP (78 FR 72223 through 72224), we finalized a requirement to sample approximately 10 records from 300 randomly selected facilities; these facilities had 60 days to comply once they received requests for records. We continued this pilot for the PY 2017 and PY 2018 ESRD QIP, and proposed to continue doing so for the PY 2019 ESRD QIP. Under this continued validation study, we will sample the same number of records (approximately 10 per facility) from the same number of facilities (that is, 300) during CY 2017. If a facility is randomly selected to participate in the pilot validation study but does not provide us with the requisite medical records within 60 calendar days of receiving a request, then we proposed to deduct 10 points from the facility's TPS. Once we have developed and adopted a methodology for validating the CROWNWeb data, we intend to consider whether payment reductions under the ESRD QIP should be based, in part, on whether a facility has met our standards for data validation.

    In the CY 2015 ESRD PPS final rule, we also finalized that there will be a feasibility study for validating data reported to the Centers for Disease Control and Prevention's (CDC's) National Healthcare Safety Network (NHSN) Dialysis Event Module for the NHSN BSI Clinical Measure. Healthcare-acquired infections (HAIs) are relatively rare, and we finalized that the feasibility study would target records with a higher probability of including a dialysis event, because this would enrich the validation sample while reducing the burden on facilities. This methodology resembles the methodology we use in the Hospital Inpatient Quality Reporting Program to validate the central line-associated BSI measure, the catheter-associated urinary tract infection measure, and the surgical site infection measure (77 FR 53539 through 53553).

    For the PY 2019 ESRD QIP, we proposed to randomly select 35 facilities to participate in an NHSN dialysis event validation study by submitting 10 patient records covering two quarters of data reported in CY 2017. A CMS contractor will send these facilities requests for medical records for all patients with “candidate events” during the evaluation period; that is, patients who had any positive blood cultures; received any intravenous antimicrobials; had any pus, redness, or increased swelling at a vascular access site; and/or were admitted to a hospital during the evaluation period. Facilities will have 30 calendar days to respond to the request for medical records based on candidate events, either electronically or on paper. If the contractor determines that additional medical records are needed to reach the 10-record threshold from a facility to validate whether the facility accurately reported the dialysis events, then the contractor will send a request for additional, randomly selected patient records from the facility. The facility will have 30 calendar days from the date of the letter to respond to the request. With input from CDC, the CMS contractor will use a methodology for reviewing and validating records from candidate events and randomly selected patients, in order to determine whether the facility reported dialysis events for those patients in accordance with the NHSN Dialysis Event Protocol. If a facility is selected to participate in the validation study but does not provide CMS with the requisite lists of positive blood cultures within 30 calendar days of receiving a request, then we proposed to deduct 10 points from the facility's TPS. Information from the validation study may be used in future years of the program to inform our consideration of future policies that would incorporate NHSN data accuracy into the scoring process.
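    The candidate-event criteria listed above translate to a simple screen, shown in the non-regulatory sketch below; the field names are hypothetical.

```python
def is_candidate_event(positive_blood_culture: bool,
                       iv_antimicrobials: bool,
                       pus_redness_or_swelling_at_access_site: bool,
                       hospital_admission: bool) -> bool:
    """A patient is a candidate for record review if any of the four conditions
    listed in the validation study description occurred during the evaluation period."""
    return (positive_blood_culture or iv_antimicrobials
            or pus_redness_or_swelling_at_access_site or hospital_admission)

# Example: a patient with only a hospital admission during the evaluation period.
print(is_candidate_event(False, False, False, True))  # True
```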

    We recognize that facilities have previously had 60 days to respond to these requests. However, in the process of implementing the pilot validation study for CY 2015 data, we recognized that the validation contractor did not have enough time to initiate requests, receive responses, validate data reported to NHSN, and generate a comprehensive validation report before the end of the contract cycle. Although facilities will have less time, the 30-day response requirement is consistent with validation studies conducted in the Hospital IQR Program, and we believe that 30 days is a reasonable amount of time for facilities to obtain and transmit the requisite medical records.

    We sought comments on this proposal. The comments and our responses for these proposals are set forth below.

    Comment: Several commenters supported our proposed changes to Data Validation in the ESRD QIP. One commenter specifically supported our proposed extension of the data validation pilot study as well as the proposal to validate NHSN data. They also supported our proposal to implement a penalty for failure to comply with the 30-day response window. One commenter specifically supported our proposed NHSN Data Validation methodology because providers do not always report dialysis events or do not report them in accordance with the CDC's NHSN Dialysis Event Protocol and they argued that this validation study, if done correctly, will better hold facilities accountable for the quality of care they provide to patients. One commenter added that validation, when coupled with meaningful accountability, is the best way to guarantee that the dialysis events of ESRD patients are reported accurately and appropriately.

    Response: We thank commenters for their support.

    Comment: Two commenters raised concerns that the two data validation studies are masked attempts at auditing quality data submissions, and that CMS is actually conducting the studies because the CROWNWeb validation study showed that CROWNWeb is not reliable or valid as a collection tool and because the NHSN BSI Measure has not been appropriately validated. They argued that if the actual goal of the validation studies is to audit facilities, then CMS should provide a mechanism to appeal adverse decisions before points are deducted from facilities' total performance scores. The commenters offered to work with CMS to ensure the validity and reliability of the data submitted to NHSN, but argued that the validation studies are not the appropriate way to address CMS's concerns, and asked that CMS state clearly in the final rule why such studies are necessary and whether or not their purpose is to audit facilities.

    Response: As stated previously in the CY 2015 final rule with comment period (79 FR 66188) and the CY 2016 final rule with comment period (80 FR 69049), we agree that one of the purposes of the validation studies is to identify instances in which facilities are reporting invalid data either to CROWNWeb or to NHSN. However, we continue to believe it is inappropriate to designate the validation studies as “audits” of facility data, because the ultimate objective of the studies is to improve the validity of data reported to CROWNWeb and to NHSN, rather than to penalize facilities for reporting invalid data. We further note that we did not propose to penalize facilities for reporting invalid data for either of the validation studies. If we propose to do so in future rulemaking, we will consider implementing an appeal process that facilities can use to contest CMS determinations that invalid data was reported to either CROWNWeb or to NHSN. The purpose of these studies is not to audit facilities but to improve the validity of the data by identifying instances of intentional or unintentional under-reporting.

    Comment: One commenter suggested that CMS consider providing resources to state health departments so that they can conduct on-site data validation, as this would also help educate facility staff on surveillance, reporting, and infection prevention; identify areas of misunderstanding and improve communication; and provide technical assistance to facilities in reporting and data validation efforts. Another commenter requested that CMS release the results of the CROWNWeb validation study and that CMS stop using CROWNWeb as part of the ESRD QIP until it has been appropriately validated. Two commenters offered suggestions for expanding the data validation studies. If financial barriers are a concern, one commenter suggested, an alternative approach would be to require facilities to complete a self-validation exercise module, which would still place some burden on the facility but would provide useful information to both CMS and the facility. They offered examples of such self-validation modules, available through the California Department of Public Health. One commenter recommended that CMS increase the size of the validation study to include at least 5 percent of facilities, arguing that a larger, more representative sample is needed for validation, especially considering that these data will soon be publicly available for the first time via Dialysis Facility Compare. Another commenter specifically recommended that CMS perform validation on at least one percent of facilities (or at least 70 facilities). They also recommended increasing the number of records reviewed at each facility from the 10 proposed in the rule. They also encouraged CMS to conduct validations of facilities that do not report dialysis events or that report zero events, because these non-compliant facilities could be skewing national averages, negatively impacting those facilities that do comply with the measure requirements.

    Response: We thank commenters for their recommendations about ways to improve the NHSN BSI validation study and increase the size of the study. We appreciate the commenter's recommendation to require facilities to conduct a self-validation module as a means to overcome these resource limitations, and we will consider the feasibility of such an approach in the future. We also appreciate the recommendation to provide funding to state health departments to conduct validation studies; we agree that these agencies have conducted very successful studies of this nature, and we will consider the feasibility of this approach. We also appreciate the suggestion to selectively sample facilities that report zero dialysis events for validation, and we will investigate the utility of using a non-random sample in the future. Unfortunately, at this time, resource limitations prevent us from increasing the size of the NHSN BSI validation study, both with respect to the number of facilities sampled and the number of records validated from each facility. We believe the proposed study methodology will provide CDC and CMS with greater insights than previous studies because this study will yield information about the types of under-reporting, the extent of under-reporting, and the reasons for under-reporting to the NHSN system. We look forward to continuing to refine this study to ensure that we are collecting as much reliable and useful data about bloodstream infections as possible.

    CDC agrees that there are substantial benefits when health departments conduct on-site assessments of facility data and directly educate staff to improve surveillance practices. CDC supports the suggestion of providing state health departments with funds to conduct data validation activities. Few states are currently funded via the CDC cooperative agreement (the Epidemiology and Laboratory Capacity grant) to conduct external HAI data validation. The states that have conducted data validation of patient safety modules have improved their understanding of gaps in HAI reporting and commonly made errors, and have strengthened partnerships and communication between state health departments and healthcare facilities.

    Comment: Several commenters did not support our proposal to decrease the facility response time for the NHSN data validation study from 60 to 30 days, and argued that the reduced response time, coupled with the penalty for non-response, is disproportionate to the problem the studies are intended to identify, particularly in light of a lack of due process for facilities that are found to be non-compliant.

    With respect to the proposed reduced response time, one commenter argued that facilities often do not receive the faxed or written requests for records, or the requests are lost, leaving facilities with less time to respond, and recommended that CMS instead email the requests to all of the NHSN users within each facility to ensure that the request is received. Another argued that 30 days is simply too short a period of time to ensure the request is received and can be completed. One commenter added that providers often must obtain documentation from other healthcare providers in order to respond to the request and that 60 days is simply not enough time to receive the request, coordinate with other providers, and send in the required documentation. One commenter suggested that while the data validation study is ongoing, CMS should not reduce a facility's TPS, since the purpose of the study, as the commenter sees it, is to assess future policies to ensure the accuracy of the data submitted to NHSN.

    With regards to the penalty for non-response, commenters urged CMS to eliminate the proposed 10-point reduction in a facility's TPS due to non-compliance with the NHSN Data Validation Study for two reasons. First, they argued that compliance with a data validation study is unrelated to the quality of care provided at a facility and therefore is inappropriate for inclusion in a facility's TPS. Second, they suggested that reducing a facility's TPS score confuses and misinforms patients, caregivers and families about the quality of care provided at a given facility.

    Response: Based upon the comments received, we are not going to finalize the 30-day response time. Instead, we will give facilities 60 days to respond to record requests. However, facilities should not need to collect records from other healthcare facilities solely for the purposes of the data validation record request.

    We disagree with the comment about deducting points from a facility's TPS for noncompliance with the CROWNWeb and NHSN validation studies. As stated previously at (79 FR 66189), our policy to deduct points from a facility's TPS is consistent with section 1881(h)(3)(A)(i) of the Act, because it is part of our methodology for assessing the total performance of each provider of services and renal dialysis facility based on performance standards with respect to the measures selected. The main purpose of these studies is to assess whether facilities are reporting accurate data, and we have determined that review of medical records is integral to that determination.

    Comment: One commenter pointed out that being admitted to a hospital should not qualify as a reportable Dialysis Event for purposes of the Data Validation Study.

    Response: The validation study includes positive blood cultures collected or identified in patients during the first day of a hospitalization because these events are included in the calculations for the NHSN BSI clinical measure. In order to report these events, facilities will need to obtain medical records from hospitals that capture these results.

    Final Rule Action: After careful consideration of the comments received, we are finalizing the methodologies for Data Validation with one change. Specifically, we are increasing the amount of time facilities will have to respond to record requests for the NHSN Data Validation Study from 30 days to 60 days. We believe this should give facilities ample time to collect and submit the required records.

    E. Requirements for the PY 2020 ESRD QIP

    1. Replacement of the Mineral Metabolism Reporting Measure Beginning With the PY 2020 Program Year

    We consider a quality measure for removal or replacement if: (1) Measure performance among the majority of ESRD facilities is so high and unvarying that meaningful distinctions in improvements or performance can no longer be made (in other words, the measure is topped-out); (2) performance or improvement on a measure does not result in better or the intended patient outcomes; (3) a measure no longer aligns with current clinical guidelines or practice; (4) a more broadly applicable (across settings, populations, or conditions) measure for the topic becomes available; (5) a measure that is more proximal in time to desired patient outcomes for the particular topic becomes available; (6) a measure that is more strongly associated with desired patient outcomes for the particular topic becomes available; or (7) collection or public reporting of a measure leads to negative or unintended consequences (77 FR 67475). In the CY 2015 ESRD PPS final rule, we adopted statistical criteria for determining whether a clinical measure is topped out, and also adopted a policy under which we could retain an otherwise topped-out measure if we determined that its continued inclusion in the ESRD QIP measure would address the unique needs of a specific subset of the ESRD population (79 FR 66174).

    Subsequent to the publication of the CY 2016 ESRD PPS final rule, we evaluated the finalized PY 2019 ESRD QIP measures that would be continued in PY 2020 against all of these criteria. We determined that none of these measures met criteria (1), (2), (3), (4), (5), or (6). As part of this evaluation for criterion (1), we performed a statistical analysis of the PY 2019 measures to determine whether any measures were “topped out.” The full results of this analysis can be found at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/ESRDQIP/061_TechnicalSpecifications.html, and a summary of our topped-out analysis results appears in Table 12.

    Table 12—PY 2020 Clinical Measures Including Facilities With at Least 11 Eligible Patients per Measure

    Measure | N | 75th/25th Percentile | 90th/10th Percentile | Std error | Statistically indistinguishable | Truncated mean | Truncated SD | TCV | TCV ≤ 0.10
    Kt/V Delivered Dose above minimum | 6210 | 96.0 | 98.0 | 0.093 | No | 92.5 | 4.20 | 0.05 | Yes
    Fistula Use | 5906 | 73.2 | 79.6 | 0.148 | No | 65.7 | 8.88 | 0.14 | No
    Catheter Use | 5921 | 5.43 | 2.89 | 0.093 | No | 190.1 | 5.16 | <0.01 | Yes
    Serum Calcium >10.2 | 6257 | 0.91 | 0.32 | 0.049 | No | 197.8 | 1.48 | <0.01 | Yes
    NHSN—SIR | 5781 | 0.41 | 0.00 | 0.011 | No | 0.963 | 0.57 | <0.01 | Yes
    SRR | 5739 | 0.82 | 0.64 | 0.004 | No | 0.995 | 0.21 | <0.01 | Yes
    STrR | 5650 | 0.64 | 0.43 | 0.008 | No | 0.965 | 0.37 | <0.01 | Yes
    SHR | 6086 | 0.79 | 0.63 | 0.004 | No | 0.983 | 0.23 | <0.01 | Yes
    ICH CAHPS: Nephrologists' communication and caring | 3349 | 71.8 | 77.1 | 0.159 | No | 65.7 | 7.11 | 0.11 | No
    ICH CAHPS: Quality of dialysis center care and operations | 3349 | 66.2 | 71.2 | 0.134 | No | 60.9 | 6.20 | 0.10 | No
    ICH CAHPS: Providing information to patients | 3349 | 82.4 | 85.6 | 0.101 | No | 78.4 | 4.61 | 0.06 | Yes
    ICH CAHPS: Rating of Nephrologist | 3349 | 69.9 | 76.6 | 0.204 | No | 62.0 | 9.29 | 0.15 | No
    ICH CAHPS: Rating of dialysis facility staff | 3349 | 70.9 | 77.4 | 0.215 | No | 62.0 | 9.92 | 0.16 | No
    ICH CAHPS: Rating of dialysis center | 3349 | 73.8 | 80.6 | 0.221 | No | 64.8 | 10.18 | 0.16 | No
    (1) The truncated mean for percentage measures is reversed (100 percent − truncated mean) for measures where a lower score indicates better performance.
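    For readers interpreting Table 12, the non-regulatory sketch below shows one way the topped-out screen could be expressed. It assumes that the truncated coefficient of variation (TCV) is the truncated standard deviation divided by the truncated mean, and that a measure is flagged only when its 75th and 90th percentiles are statistically indistinguishable and its TCV is at or below 0.10; the authoritative statistical criteria are those finalized in the CY 2015 ESRD PPS final rule (79 FR 66174).

```python
def truncated_cv(truncated_mean: float, truncated_sd: float) -> float:
    """Truncated coefficient of variation, taken here to be SD divided by mean."""
    return truncated_sd / truncated_mean

def topped_out(percentiles_indistinguishable: bool, tcv: float, threshold: float = 0.10) -> bool:
    """A measure is treated as topped out only when both conditions hold."""
    return percentiles_indistinguishable and tcv <= threshold

# Using the Fistula Use row of Table 12 (truncated mean 65.7, truncated SD 8.88):
tcv = truncated_cv(65.7, 8.88)
print(round(tcv, 2), topped_out(False, tcv))  # 0.14 False
```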

    As the information in Table 12 indicates, none of these clinical measures are currently topped-out in the ESRD QIP. Accordingly, we did not propose to remove any of these measures from the ESRD QIP for PY 2020 because they are topped out.

    We consider the data sources we use to calculate our measures based on the reliability of the data, and we also try to use CROWNWeb data whenever possible. The Mineral Metabolism measure currently in the ESRD QIP measure set uses CROWNWeb data to determine how frequently facilities report serum phosphorus data, but it also uses Medicare claims data to exclude patients who were treated at a facility fewer than seven times in a month. There is no evidence to suggest that the Mineral Metabolism Reporting Measure is leading to negative or unintended clinical consequences. However, we do not think it is optimal to use claims data to calculate the measure, because doing so is inconsistent with our intention to increasingly use CROWNWeb as the data source for calculating measures in the ESRD QIP. There is another available measure that can be calculated using only CROWNWeb data and that we believe is as reliable as the Mineral Metabolism Reporting Measure. That measure also excludes patients using criteria consistent with those used by other ESRD QIP measures. For these reasons, we proposed to remove the Mineral Metabolism Reporting Measure from the ESRD QIP measure set beginning with the PY 2020 program and to replace it with the proposed Serum Phosphorus Reporting Measure, the specifications for which are described in section IV.D.2.c.i. of the proposed rule (81 FR 42838).

    We sought comments on this proposal. The comments and our responses for these proposals are set forth below.

    Comment: Many commenters supported the replacement of the Mineral Metabolism Reporting Measure with the Serum Phosphorus Reporting Measure. They noted that NQF #0255 is topped out because of high facility performance and minimal room for improvement, so it is not the best indicator of quality, but they understand that CMS is required to comply with PAMA. They further encouraged CMS to work with the kidney care community to identify more appropriate measures to satisfy the statutory requirement.

    Response: We appreciate the commenters' support, and we agree that it would be desirable to have more robust measures on bone mineral metabolism. We note that neither the Mineral Metabolism nor the Serum Phosphorus measures can be topped out in the same sense as other clinical measures, because reporting measures are scored on the basis of how much data are reported, and clinical measures are scored on the basis of what the data represent. In the case of clinical measures, uniformly high performance indicates that the measure may no longer be necessary because high quality care is being delivered virtually across the board. In the case of reporting measures, by contrast, high levels of reporting do not obviate the need for the measure, because the measures are largely put in place to capture data on an ongoing basis.

    Comment: Commenters asked CMS for two clarifications regarding the proposed Serum Phosphorus Reporting Measure. First, commenters noted that plasma is absent from the measure title and from the measure's Technical Specifications, although it is mentioned in the “additional information” in the Serum Phosphorus Technical Specifications and recommended that the title of the measure be modified to clearly denote plasma as an acceptable substrate and that the specifications make this abundantly clear. Second, commenters requested that CMS review the measure's specifications and standardize the exclusions between the Mineral Metabolism Measure and the Serum Phosphorus Measure.

    Response: We thank the commenters for their suggestion; however, at this time we are not proposing to change the title of the proposed Serum Phosphorus Reporting Measure. This measure is based upon an NQF-endorsed measure, #0255 Measurement of Serum Phosphorus Concentration. The measure's technical specifications clearly indicate that plasma is an acceptable substrate, and we do not believe it is necessary to indicate this in the title of the measure. The differences in the exclusions between the Mineral Metabolism Measure and the Serum Phosphorus Measure appear in the technical specifications of the measures and pertain to the determination of patient eligibility (that is, Mineral Metabolism uses the number of treatments in claims to determine eligibility, whereas Serum Phosphorus uses days at the facility as indicated in CROWNWeb). As we indicated in the proposed rule, we proposed this change because of our intention to increasingly use CROWNWeb as the data source for calculating measures in the ESRD QIP and because this reporting measure is based upon an NQF-endorsed measure.
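
    To make the difference in eligibility determination concrete, the following is a minimal sketch. The field names and the CROWNWeb days-at-facility cutoff are purely illustrative assumptions; the seven-treatment claims threshold comes from the discussion above, and neither function is the finalized measure specification.

```python
# Hypothetical patient-month record; field names are illustrative.
patient_month = {
    "claims_treatments": 6,            # treatments billed on Medicare claims
    "crownweb_days_at_facility": 30,   # days admitted to the facility per CROWNWeb
}

def eligible_mineral_metabolism(pm):
    # Claims-based approach: exclude patients treated at the facility
    # fewer than seven times in the month.
    return pm["claims_treatments"] >= 7

def eligible_serum_phosphorus(pm, min_days=1):
    # CROWNWeb-based approach: eligibility driven by days at the facility;
    # the `min_days` cutoff is an assumption for illustration only.
    return pm["crownweb_days_at_facility"] >= min_days

print(eligible_mineral_metabolism(patient_month))  # False under the claims rule
print(eligible_serum_phosphorus(patient_month))    # True under the CROWNWeb rule
```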

    Final Rule Action: After considering the comments received, we are finalizing our proposal to replace the Mineral Metabolism Reporting Measure with the Serum Phosphorus Reporting Measure beginning in PY 2020. This measure change is consistent with our intention to increasingly use CROWNWeb as the data source for calculating measures in the ESRD QIP, and it brings the measure's exclusion criteria into alignment with those of other measures used in the ESRD QIP.

    2. Measures for the PY 2020 ESRD QIP

    a. PY 2019 Measures Continuing for PY 2020 and Future Payment Years

    We previously finalized 12 measures in the CY 2016 ESRD PPS final rule for the PY 2019 ESRD QIP, and these measures are summarized in Table 13. In accordance with our policy to continue using measures unless we propose to remove or replace them (77 FR 67477), we will continue to use 11 of these measures in the PY 2020 ESRD QIP. As noted above, we proposed to replace the Mineral Metabolism Reporting Measure with the Serum Phosphorus Reporting Measure, and we proposed to reintroduce the NHSN Dialysis Event Reporting Measure into the ESRD QIP measure set beginning with PY 2019.

    Table 13—PY 2019 ESRD QIP Measures Being Continued in PY 2020

    NQF No.   Measure title and description

    0257   Vascular Access Type: AV Fistula, a clinical measure.
      • Percentage of patient-months on hemodialysis during the last hemodialysis treatment of the month using an autogenous AV fistula with two needles.
    0256   Vascular Access Type: Catheter ≥ 90 days, a clinical measure.
      • Percentage of patient-months for patients on hemodialysis during the last hemodialysis treatment of the month with a catheter continuously for 90 days or longer prior to the last hemodialysis session.
    N/A   National Healthcare Safety Network (NHSN) Bloodstream Infection in Hemodialysis Patients, a clinical measure.
      • The Standardized Infection Ratio (SIR) of Bloodstream Infections (BSI) will be calculated among patients receiving hemodialysis at outpatient hemodialysis centers.
    1454   Hypercalcemia, a clinical measure.
      • Proportion of patient-months with 3-month rolling average of total uncorrected serum calcium greater than 10.2 mg/dL.
    N/A   Standardized Readmission Ratio, a clinical measure.
      • Standardized hospital readmission ratio of the number of observed unplanned 30-day hospital readmissions to the number of expected unplanned readmissions.
    N/A   Standardized Transfusion Ratio, a clinical measure.
      • Risk-adjusted standardized transfusion ratio for all adult Medicare dialysis patients.
      • Number of observed eligible red blood cell transfusion events occurring in patients dialyzing at a facility to the number of eligible transfusions that would be expected.
    0258   In-Center Hemodialysis Consumer Assessment of Healthcare Providers and Systems (ICH CAHPS) Survey Administration, a clinical measure.
      • Facility administers, using a third-party CMS-approved vendor, the ICH CAHPS survey twice in accordance with survey specifications and submits survey results to CMS.
    N/A   Anemia Management Reporting, a reporting measure.
      • Number of months for which facility reports ESA dosage (as applicable) and hemoglobin/hematocrit for each Medicare patient.
    N/A   Pain Assessment and Follow-Up, a reporting measure.
      • Facility reports in CROWNWeb one of six conditions for each qualifying patient once before August 1 of the performance period and once before February 1 of the year following the performance period.
    N/A   Clinical Depression Screening and Follow-Up, a reporting measure.
      • Facility reports in CROWNWeb one of six conditions for each qualifying patient once before February 1 of the year following the performance period.
    N/A   NHSN Healthcare Personnel Influenza Vaccination, a reporting measure.
      • Facility submits Healthcare Personnel Influenza Vaccination Summary Report to CDC's NHSN system, according to the specifications of the Healthcare Personnel Safety Component Protocol, by May 15 of the performance period.
    N/A   Kt/V Dialysis Adequacy Comprehensive Clinical Measure.
      • Percentage of all patient-months for patients whose average delivered dose of dialysis (either hemodialysis or peritoneal dialysis) met the specified threshold during the reporting period.
    N/A   NHSN Dialysis Event Reporting Measure (proposed for PY 2019 in section IV.C.1.a. of the proposed rule (81 FR 42823)).

    We received general comments on the PY 2020 measure set. The comments and our responses are set forth below.

    Comment: One commenter argued that the measures proposed for inclusion in the ESRD QIP do not take a patient-centric approach to care because they do not take into consideration the fact that many of these patients have multiple comorbidities and that dialysis is just one treatment being offered to them. The commenter added that the patient's primary care physician should be at the center of the complex care plan model used for patients with ESRD.

    Response: We thank the commenter for sharing these concerns. The SRR, SHR, and STrR do consider patient comorbidities through standardized risk adjustment models that incorporate a variety of comorbidities that contribute to the risk of poor health outcomes. We agree that a patient's primary care physician should be involved in the complex care planning required for many ESRD dialysis patients, and coordination between the facility and the primary care physician is part of the responsibility of the interdisciplinary team. We also believe that the SRR and SHR epitomize our aim to include patient-centered measures in the ESRD QIP measure set, because these measures assess outcomes that deeply matter to patients, and because high performance on these measures requires a patient-centered orientation that emphasizes care coordination and special attention to patients in precarious situations (for example, those who are at risk for a hospitalization and/or readmission).

    Comment: Two commenters argued that the technical specifications for the Kt/V measure, the hypercalcemia measure, and the phosphorus measure may be creating barriers to accessing home dialysis because of the ways in which they address patients who switch from hemodialysis to home dialysis. They recommended that CMS modify the exclusion criteria for these measures to remove these barriers. Specifically, the commenters pointed out that under the current specifications, if a patient is on in-center HD for more than 90 days and then switches to home PD, the patient is included in the QIP calculation as soon as they have a PD-related Medicare claim. A patient who switches from in-center HD to PD and has no Kt/V during the month is viewed as not meeting the standard. However, if a new patient begins dialysis as a home PD patient, the specifications provide a 90-day grace period during which no Kt/V data are expected. The current specifications therefore encourage facilities to perform a Kt/V on PD patients during training, which is not clinically necessary. To address this concern, the commenters recommended that CMS modify the exclusion criterion from "patients on dialysis for less than 90 days" to "patients on the PD modality for less than 90 days."

    For Hypercalcemia and Phosphorus, the commenters recommended that CMS modify the exclusion criteria to state: "home dialysis patients for whom a facility does not submit a claim during the claim month or PD patients with fewer than 15 billable days or home HD patients with fewer than seven treatments during claim month." The commenters argued that, as the specifications are currently written, home patients are required to receive a lab result while in-center patients have a six-treatment grace period. Additionally, if a home patient receives a treatment on the first of the month and then goes to the hospital for the remainder of the month, the patient-month will be counted as not meeting the standard. Patients are therefore being subjected to medically unnecessary tests, and the commenters argued that this modification to the specifications would address this problem for patients who switch from in-center HD to home PD in relation to the hypercalcemia and phosphorus measures.

    Response: We thank the commenters for their concerns. The Kt/V measure does provide a longer timeline for completion of the Kt/V assessment for a new ESRD patient beginning dialysis on PD than it does for a patient who has previously been on in-center HD and subsequently switches modality. The commenters' suggestion to change the denominator exclusion to "patients on the PD modality for less than 90 days" would effectively provide similar timelines for completion of the first Kt/V assessment. However, it is not certain that this proposed approach is the most appropriate one. Patients new to dialysis whose initial modality is PD almost always have significant residual renal function, which allows initiation of less aggressive PD prescriptions during and for several weeks after initial training. Since Kt/V for PD is defined as a combination of both residual renal function and dialytic Kt/V, the contribution of residual renal function is typically substantial in this situation. For patients who were previously treated with in-center HD and subsequently change modality, the likelihood of having persisting significant residual renal function is much lower. In this scenario, the clinical team may well need to provide a more aggressive initial PD prescription to compensate for absent residual renal function in order to provide adequate PD. Whether allowing 120 days for the provider to assess delivered Kt/V is appropriate in these very different scenarios has not been carefully evaluated. Prior to revising the current specifications, more study is needed to assess the safety impact of this revision. Finally, the comment that the current specifications encourage facilities to perform a Kt/V on PD patients during training is not necessarily correct. The current specifications encourage providers to perform Kt/V as soon as possible after initiation of PD in order to evaluate the adequacy of the initial dialysis prescription in this setting where residual renal function may be reduced.
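
    Because this response turns on the fact that PD adequacy combines residual renal and dialytic clearance, a brief illustration may help. The sketch below uses the conventional clinical formulation (weekly total Kt/V equals weekly peritoneal Kt/V plus weekly residual renal Kt/V, with the renal component derived from residual urea clearance in mL/min and total body water V in liters); it is a simplified teaching example with hypothetical numbers, not the QIP measure specification.

```python
MINUTES_PER_WEEK = 7 * 24 * 60  # 10,080

def weekly_renal_ktv(residual_urea_clearance_ml_min, total_body_water_l):
    """Residual renal contribution: weekly renal urea clearance (L) divided by V (L)."""
    k_liters_per_week = residual_urea_clearance_ml_min * MINUTES_PER_WEEK / 1000.0
    return k_liters_per_week / total_body_water_l

def weekly_total_ktv(peritoneal_ktv_per_week, residual_urea_clearance_ml_min,
                     total_body_water_l):
    """Total weekly Kt/V = dialytic (peritoneal) component + residual renal component."""
    return peritoneal_ktv_per_week + weekly_renal_ktv(
        residual_urea_clearance_ml_min, total_body_water_l)

# Hypothetical incident PD patient with substantial residual function:
print(round(weekly_total_ktv(1.2, residual_urea_clearance_ml_min=3.0,
                             total_body_water_l=35.0), 2))  # ~2.06
# Hypothetical patient switching from in-center HD with little residual function:
print(round(weekly_total_ktv(1.2, residual_urea_clearance_ml_min=0.5,
                             total_body_water_l=35.0), 2))  # ~1.34
```

    In this illustration the incident PD patient comfortably clears a commonly cited weekly target on the same prescription, while the patient switching from in-center HD would not without a more aggressive prescription, which is the clinical asymmetry described in the response above.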

    With regard to hypercalcemia and phosphorus, the commenters describe a claims-based exclusion paradigm that is not used for the hypercalcemia or phosphorus measures, nor is it consistent with the DFC specification of Kt/V. Irrespective of modality, patients are included in the measures' denominators based primarily on CROWNWeb admission and discharge data and not primarily on the number of Medicare claims treatment events. In addition, assessment of calcium and phosphorus concentrations and avoidance of hypercalcemia apply equally to both in-center HD and home dialysis patients.

    Comment: One commenter expressed dismay that there is no health-related quality of life measure in the ESRD QIP and recommended that, starting in CY 2018 (for PY 2020), each facility be required to report in CROWNWeb whether each eligible patient completed the KDQOL. The commenter argued that this is the most important measure because it is a patient-reported outcome measure that predicts hospitalization and survival in dialysis patients as strongly as dialysis dose and serum albumin.

    Response: We thank the commenter for this suggestion. We agree that it is vitally important to examine the quality of life of patients with ESRD, and for that reason, we have included important measures such as the Pain Assessment and Follow-Up Reporting Measure and the Clinical Depression Screening and Follow-Up Reporting Measure. The CMS Dialysis Conditions for Coverage already require, under Condition 494.90, that facilities complete an annual psychosocial evaluation for each patient, and facilities typically use the KDQOL survey for this purpose. Therefore, adding a measure of how many patients receive the KDQOL survey to the ESRD QIP would be unnecessarily duplicative and would dilute the significance of other measures in the ESRD QIP measure set. We will continue working with the community to identify appropriate patient-reported outcome measures for use in the ESRD QIP.

    Comment: Several commenters supported our proposal to study the impact of the SRR and STrR measures on quality of care.

    Response: We thank the commenters for their support, and we look forward to sharing the results of the study with the community when they become available.

    Comment: Commenters generally supported the continued inclusion of the ICH CAHPS measure in the ESRD QIP but expressed some concerns and made several recommendations for improving the measure as implemented in the program.

    The concerns expressed by commenters include: (1) Patients need to be involved with the survey in a meaningful way; (2) The ESRD National Coordinating Center (NCC) LAN Affinity Group is in the process of trying to address the first concern; (3) Patients remain concerned about inconsistencies in the administration and understanding of the survey; (4) Patients remain concerned that, while a minority of patients may see benefits from the results of the survey, it will not improve the patient experience of care or have a meaningful impact on process change at the facility level as it currently exists; (5) In light of these concerns, the current weight assigned to this metric appears to be excessive. They recommended reconsideration of the weighting assigned to the ICH CAHPS measure until these concerns are addressed.

    The changes commenters recommended include: (1) Provide a specific list of the exclusions, and exclude homeless patients as well; (2) Expand the ICH CAHPS survey to include peritoneal dialysis and home hemodialysis patients in future rulemaking; (3) Administer the survey consistent with the AHRQ specifications, including by dividing it into three sections that were independently tested; (4) Require that the survey be administered only once each year, consistent with the findings of the American Institutes for Research/RAND et al.; (5) Coordinate with the ESRD Networks to reduce duplication in its administration; (6) Implement a mechanism for facilities to ensure that patients' contact information is as accurate and up-to-date as possible; (7) Review the language translations of the surveys to ensure that they are accurate.

    Response: We appreciate the concerns listed by the commenters. We will address each one separately. (1) A specific list of the exclusions from the ICH CAHPS survey is published in the In-Center Hemodialysis CAHPS® Survey Administration and Specifications Manual, which can be found on the survey technical Web site, https://ichcahps.org, under the Survey and Protocols tab. We explicitly chose not to exclude homeless persons based on the advice of our technical expert panel, which indicated that some homeless persons can be contacted for survey research. (2) We are considering creating an ICH CAHPS survey for home and peritoneal dialysis patients. However, we do not currently have concrete plans for this expansion. (3) The commenters suggest using the AHRQ specifications for administering the ICH CAHPS Survey. The AHRQ specifications are not designed to support public reporting of survey data. The CMS specifications are much more detailed because they are designed to ensure, to the extent possible, that the survey is conducted the same way by all vendors. This improves the quality of the data for public reporting purposes. We do not understand the comment that the survey should be divided into three sections that were independently tested; the entire ICH CAHPS survey has been tested. (4) We considered the option of administering the survey once a year, but realized that a single administration could miss patients and would cover patient experiences for only part of the year. We decided to require that the survey be conducted twice a year to increase opportunities for patients to make their experiences known. (5) We are already working with the ESRD Networks and are receptive to suggestions for reducing duplication. (6) We currently ask that survey vendors contact facilities for updated patient contact information. However, we ask that the vendor request updated information for all patients, not just those in the sample, in an effort to protect patient confidentiality. (7) We are currently reviewing the translations of the questionnaires.

    Comment: One commenter acknowledged that the current ICH CAHPS measure is not appropriate for assessing the care of home patients but urged CMS to invest in the development and adoption of a patient experience instrument validated for assessing the home dialysis population. The commenter added that it is extremely important for CMS to recognize that PD and HHD are distinct from each other and from in-center dialysis and to keep these important differences in mind when developing a survey instrument that would be more appropriate for the home dialysis population.

    Response: We thank the commenter for their comments and suggestions. We are considering the possibility of developing an additional CAHPS survey for home and peritoneal patients. However, we do not have specific plans for this survey at this time.

    Comment: One commenter opposed the continued use of the ICH CAHPS measure as a clinical measure and expressed concerns that the twice annual survey requirement does not allow sufficient time for facilities to make improvements based on the first survey responses before the second survey is due to be conducted. They added that the current required timing is contrary to the goal of improving the patient experience and urged CMS to reconsider the requirement for two annual surveys.

    Another commenter supported CMS's willingness to consider expanding the ICH CAHPS survey in future years to include peritoneal dialysis, home hemodialysis patients, and homeless patients. In the interim, they recommended that CMS consider certain modifications to the measure to make it less burdensome to facilities and patients. First, they recommended addressing concerns about the burden on patients by aligning the ICH CAHPS measure specifications with those AHRQ relied upon when testing the measure. Specifically, they recommended that CMS divide the survey into three sections, which were each independently tested, and they suggested reducing the requirement to a single administration of the survey each year. They also urged CMS to work with facilities to develop a mechanism to ensure that patients' contact information is accurate and up-to-date so that facilities are not penalized for non-response when the patient's address was incorrect and encouraged CMS to ensure that the ICH CAHPS survey is correctly translated for all foreign-language speakers, and that the translation is meaningful and accurate.

    Response: One of the goals of the ICH CAHPS survey is to encourage quality improvement. We are aware that some improvement efforts will take more than one survey period to be reflected in the data. This is particularly true for the publicly reported data, which are reported for two survey administration periods. However, this does not mean that a facility cannot or should not undertake quality improvement efforts.

    The AHRQ guidelines were not designed to support public reporting. They are, therefore, less detailed than the CMS guidelines, which are designed to improve data quality for public reporting. We conduct the survey twice a year in order to provide patients with multiple opportunities to report their experiences. We also report the data from two survey administrations to improve the possibility that the sample sizes will be large enough to provide useful information.

    Comment: Commenters generally supported the continued inclusion of the NHSN Healthcare Personnel Influenza Vaccination Reporting Measure in the ESRD QIP, as well as the elimination of the requirement for written documentation, but they made several recommendations for improving the measure. Most importantly, commenters recommended changing the Performance Period for the NHSN HPI Vaccination Reporting Measure to align with CDC guidelines and to set it as October 1 through March 31 so that facilities are not penalized for complying with established clinical guidelines and so that patients are not placed at increased risk early in the influenza season. Second, commenters recommended that exemptions should be in place for short-term visitors and that the performance period be extended to allow for early vaccination. Third, commenters expressed concerns about the third part of the denominator, requiring students/trainees and volunteers to be vaccinated. They argued that facilities often have such individuals on a very short-term basis and documenting their vaccination status is difficult, highly burdensome and diverts resources away from important clinical care. Finally, commenters recommended that CMS include a baseline reporting threshold for the measure, similar to what is required for inpatient rehab hospitals and other healthcare facilities.

    Response: The current performance period for NHSN's measure of healthcare personnel influenza vaccination is from October 1 through March 31. All personnel who physically work in a reporting facility for at least one day from October 1 through March 31 are eligible for inclusion in the measure denominator. The numerator of the measure begins “as soon as vaccine becomes available” for a given influenza season. Personnel who are working in the reporting facility during the denominator reporting period (October 1 through March 31) may be vaccinated as early as August or September and this vaccination would be included in the NHSN measure; therefore, there is no penalty for early vaccination built into the NHSN measure.
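
    A minimal sketch of the denominator and numerator logic described above may help; the record fields, dates, and counts are illustrative assumptions, and the NHSN protocol itself governs the actual reporting.

```python
from datetime import date

# Hypothetical healthcare personnel records; field names and dates are illustrative.
personnel = [
    {"days_worked_oct1_mar31": 90, "vaccination_date": date(2016, 9, 15)},  # vaccinated early
    {"days_worked_oct1_mar31": 1,  "vaccination_date": None},               # worked one day, unvaccinated
    {"days_worked_oct1_mar31": 0,  "vaccination_date": date(2016, 10, 20)}, # never worked in the window
]

VACCINE_AVAILABLE = date(2016, 8, 1)  # assumed date on which vaccine becomes available

# Denominator: personnel who worked at least one day during October 1 through March 31.
denominator = [p for p in personnel if p["days_worked_oct1_mar31"] >= 1]

# Numerator: denominator personnel vaccinated any time on or after vaccine release,
# so August/September vaccinations count and early vaccination is not penalized.
numerator = [p for p in denominator
             if p["vaccination_date"] is not None
             and p["vaccination_date"] >= VACCINE_AVAILABLE]

print(f"{len(numerator)} of {len(denominator)} eligible personnel vaccinated")  # 1 of 2
```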

    Since short-term visitors can transmit or acquire influenza even when in a healthcare facility for a limited amount of time, all healthcare personnel working one day or more during the reporting period are included in the NHSN measure. Facilities are encouraged to develop tracking systems that will capture these data from short-term HCP when they come into the facility during the reporting period. Among short-term healthcare personnel, adult students/trainees and volunteers may be reasonably anticipated to have substantial contact with patients and/or other healthcare personnel in a healthcare facility, increasing the risk of acquiring or transmitting influenza infection during the influenza season. To alleviate the challenges associated with collecting data on groups that do not regularly work in a facility, CDC encourages facilities to devise tracking systems to reach these individuals. CDC developed an information sheet that lists methods and strategies on how this can be accomplished, based on interviews conducted with a sample of acute care facilities that collected these data during the 2012-2013 influenza season: http://www.cdc.gov/nhsn/PDFs/HPS/General-Strategies-HCP-Groups.pdf.

    Comment: Two commenters urged CMS to establish batch submission to NHSN as soon as possible for the NHSN HPI Vaccination Reporting Measure, arguing that it is very problematic that facilities are not yet able to submit data in this way.

    Response: One of CDC's goals is to minimize reporting burden. Due to the development time needed to support batch submission, CDC is not able to rapidly transition to this data collection system. Currently, CDC anticipates the batch submission of healthcare personnel influenza vaccination data will be available for the 2018/2019 influenza season (PY 2021 QIP).

    Comment: Several commenters expressed concerns about the effect the SRR measure is having on patient access to care, but they added that they are looking forward to seeing the results of the access to care study, to better understand the impact the SRR and STrR measures are having on access to care. One commenter recommended evaluating the effectiveness of these two measures at measuring the actual care provided in dialysis facilities, and urged CMS not to use the measures in the program until it has been determined whether they have a positive or negative impact on dialysis patients.

    Response: We thank the commenters for sharing their concerns. We look forward to sharing the results of the access to care study with the community once they become available. We believe it is vitally important to continue including these two measures in the ESRD QIP measure set because they measure important aspects of patient care. We are continually evaluating the effectiveness of all of the measures included in the program, and we have policies in place to determine when a measure should be retired from the program (77 FR 67475). Neither of these measures meets the criteria established through rulemaking.

    Comment: One commenter recommended that CMS exclude patients with an incomplete claims history from the SRR measure.

    Response: We considered excluding patients without a full 1-year Medicare history but decided in the end that this was not necessary. Many patients without a full year of claims history are not Medicare eligible when they begin dialysis. They subsequently become Medicare eligible and may experience a hospitalization and a readmission in the first year. In the event of a readmission, CMS has the data from the diagnoses of the index discharge, and these data provide substantial detail on comorbidities and are available for all patients. The availability of these data enables adequate risk adjustment. We additionally note that the SRR does make use of the hierarchical condition categories (HCCs) to capture comorbidities. Excluding such patients would eliminate much of the incentive to avoid readmissions in a highly vulnerable population during their first year of care. We believe care coordination is important in this population and strive to include assessment of appropriate populations where feasible.

    Comment: One commenter supported efforts to reduce hospital readmissions that are directly related to the care provided by dialysis facilities, but was concerned that the SRR measure does not provide actionable information that promotes quality improvement in facilities.

    Response: High readmission rates may indicate that a facility is missing opportunities to improve care transitions during and after hospital discharge. A few pilot studies have shown that better care coordination between the facility and the hospital can reduce readmissions. The SRR measure development TEP considered the possibility of constraining the assessment of readmissions to those directly related to the care provided by dialysis facilities, but could not reach a consensus definition of such events. The TEP recommended moving forward with the development of the SRR as an all-cause readmission measure. We have met with kidney community stakeholders regarding methods that can make measure data more actionable, including the provision of patient-level quality data and more timely reporting. While we believe we have made improvements in this area, we also agree that we should continue to enhance the quality information made available to facilities for this measure and others.

    Comment: One commenter recommended that CMS work to develop an appropriate risk model that accounts for hospital-specific patterns and adjusts for physician-level admitting patterns, as there is great geographic variability in both of these factors that needs to be accounted for. They also urged CMS to align the standardized risk measures' methodology with that used for other Medicare programs and other providers, such as MA plans, by using the CMS claims data available for the hierarchical condition categories (CMS-HCC).

    Response: The SRR risk adjustment model does adjust for hospital effects by including hospital-level random effects. Our methodology uses past-year comorbidities that are obtained from ICD-9/ICD-10 diagnosis codes on Medicare claims. These diagnoses are grouped using the HCCs. This approach is aligned with the methodology for the CMS Hospital-Wide Readmission measure. Our position on adjustment for physician-level admitting patterns has not changed, however. The treating nephrologist is, by definition, part of the interdisciplinary team that treats patients under the aegis of the dialysis facility, as outlined in the Conditions for Coverage. As a consequence, any component of care provided by the treating nephrologist that influences risk for readmission is appropriately attributable to the dialysis facility, and is not appropriate for risk adjustment.
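
    As a rough illustration of how a standardized ratio of this kind is formed, the sketch below computes observed over expected readmissions per facility, where the expected probabilities are assumed to come from a risk-adjustment model (comorbidity categories plus hospital-level effects) fit elsewhere. The column names and data are hypothetical, and this is not the certified SRR calculation.

```python
import pandas as pd

# Hypothetical index-discharge records. `expected_prob` stands in for the
# predicted 30-day readmission probability from a risk-adjustment model
# (for example, HCC comorbidity covariates with hospital-level random effects).
discharges = pd.DataFrame({
    "facility_id":    ["A", "A", "A", "B", "B"],
    "readmitted_30d": [1,   0,   1,   0,   0],
    "expected_prob":  [0.30, 0.25, 0.35, 0.28, 0.22],
})

summary = discharges.groupby("facility_id").agg(
    observed=("readmitted_30d", "sum"),
    expected=("expected_prob", "sum"),
)
summary["srr"] = summary["observed"] / summary["expected"]
print(summary)
# Facility A: 2 observed vs. 0.90 expected, SRR about 2.22 (more readmissions than expected)
# Facility B: 0 observed vs. 0.50 expected, SRR 0.00 (fewer readmissions than expected)
```

    The finalized SRR additionally applies the exclusions and estimation methods set out in the measure's technical specifications; the point of the sketch is only the observed-to-expected structure.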

    Comment: One commenter recommended that CMS consider adding a page in CROWNWeb for the patient's medical history with start and end dates in order to gather all the patient's medical history and to ensure that STrR excludes the correct patients. This medical history page would be a part of the patient's information, which would mean it would travel with them from facility to facility.

    Response: We are constantly evaluating the effectiveness and usability of CROWNWeb and we will consider adding a page for the patient's medical history with start and end dates in future updates of the system.

    Comment: One commenter expressed concerns that the STrR measure is flawed and that facilities could be unfairly penalized for transfusions they had no opportunity to avoid or control.

    Response: While we recognize that most transfusions occur in the hospital, facilities are directly responsible for appropriate anemia management based on the Medicare Conditions for Coverage and Medicare payment policies. Since dialysis facilities have a direct role in determining achieved hemoglobin as a result of their anemia management practices, which influences the risk for transfusion in dialysis patients, dialysis facilities share responsibility with other providers for transfusion events. The responsibility of the dialysis facility for achieved hemoglobin outcomes (and transfusion risk related to achieved hemoglobin) is strengthened by applying an extensive list of exclusions for comorbid conditions that are associated with decreased ESA responsiveness, increased transfusion risk, and increased risk of ESA complications.

    Comment: One commenter suggested that the timely monitoring and reporting of transfusions for patients on dialysis are extremely important and recommended the ongoing collection of data and timely reporting on the percentage of patients with Hgb levels between 6 and 10 g/dL. These data could be merged, they suggested, with an individual patient's transfusion history to determine the Hgb level or levels typically associated with a transfusion, and could be used to see whether low Hgb levels in a dialysis center are contributing to the increase in transfusions across all clinical settings. These data could also be used to develop future transfusion best practice guidelines for people on dialysis and for those hoping to receive a kidney transplant.

    Response: We thank the commenter for offering this suggestion. Studies investigating this issue are available in the published medical literature. We note that dialysis facilities already monitor hemoglobin concentration for the patients they treat as part of their responsibility for anemia management under the Medicare ESRD Conditions for Coverage. The dialysis facility receives the results of hemoglobin tests drawn in the outpatient setting and is able to respond with changes appropriate to the patient's medical needs.

    Comment: One commenter argued that a transfusion avoidance measure should be stratified to appropriately capture blood transfusions that could have been prevented by the dialysis facility and should exclude those that resulted from acute or chronic medical conditions outside the scope of practice of the facility or the nephrologist caring for the patient. The commenter acknowledged that tracking blood transfusion data is critical to understanding patient safety issues and that this will be difficult because most transfusions are not provided in the dialysis setting, and they expressed concern that the STrR measure alone does not completely counteract the potential to under-treat anemia and may permit patients' hemoglobin levels to fall below the range recommended in the KDOQI Anemia Management guidelines. Finally, the commenter argued that the transfusion avoidance measure does not take into account patients' quality of life or the cardiovascular risks associated with low hemoglobin levels.

    Response: We are not aware of data that would allow us to directly distinguish between transfusion events that are preventable and those that are not. In lieu of this, the STrR includes an extensive list of patient comorbidity exclusions, based on Technical Expert Panel input. These exclude patients with malignancy, hereditary anemias, and other bone marrow conditions that are associated with erythropoiesis stimulating agent (ESA) hyporesponsiveness and/or increased risks of ESA use. This approach excludes many patients with medical conditions that complicate anemia management by the treating nephrologist and dialysis facility. We agree that the STrR does not address all aspects of clinical anemia management, including patient quality of life related to anemia. However, it assesses an important outcome of anemia management provided by the dialysis facility, and we believe its use encourages avoidance of unacceptably low hemoglobin levels.

    Comment: One commenter expressed concerns that the STrR Measure is not driving improvement in patient outcomes and is therefore not useful or appropriate for inclusion in the QIP. Instead, they recommended an alternative measure that would assess erythropoietin dosage levels relative to hemoglobin outcomes, to ensure that patients are receiving appropriate amounts of erythropoietin.

    Response: We believe that the STrR contributes to quality of care in ESRD anemia management by reporting on dialysis facility results in the important area of transfusion avoidance, which is an area of substantial concern in the kidney community, as indicated by the numerous comments we received when removing the Hgb <10 measure from the ESRD QIP (79 FR 66172 through 66174). Blood transfusion in dialysis patients has been associated with increased HLA sensitization and may adversely affect access to kidney transplantation. Additionally, it is not clear to us what evidence exists to establish requirements for particular dosage levels, or how comparing them to hemoglobin levels would be operationalized for a measure in the ESRD QIP.

    Comment: One commenter expressed concerns that the STrR measure is not the right measure to use for evaluating anemia management in the dialysis setting for several reasons, and they offered to help CMS identify a different measure for use in the QIP that would monitor anemia management in dialysis facilities, consistent with the changes in the FDA labeling for ESAs. Their first concern was that dialysis facilities do not provide or direct transfusions; rather, patients typically receive transfusions in the hospital setting. Second, the decision to provide a transfusion is typically based upon hospital protocols that rarely take into account the unique nature of dialysis patients. Finally, the NQF Renal Standing Committee echoed these concerns, added that this measure more accurately reflects transfusion practices and behaviors at the hospital level rather than at the dialysis facility level, and identified the potential for coding inconsistencies to be a threat to measure validity.

    The commenter explained that one of the most problematic aspects of the STrR measure is that dialysis facilities are not always able to obtain from other providers the information about patient transfusions that they need to understand the metric and act upon it. If this measure is going to be of value, dialysis facilities need to obtain quarterly data about raw transfusions, hospitalizations, readmissions, and mortality using DFR calculations, as well as the six-month lagged data file. Without this important information, facilities have no insight into which patients may or may not be receiving transfusions.

    Response: We thank the commenter. We believe the STrR, developed after the 2011 changes to Food and Drug Administration labeling for ESAs, reflects those revised recommendations. The FDA position defines the primary indication of ESA use in the CKD population as transfusion avoidance, reflecting the assessment of the relative risks and benefits of ESA use versus blood transfusion.

    Dialysis providers are responsible for anemia management as part of the ESRD Conditions for Coverage. Best dialysis provider practice should include effective anemia management algorithms that focus on (1) prevention and treatment of iron deficiency, inflammation and other causes of ESA resistance, (2) use of the lowest dose of ESAs that achieves an appropriate target hemoglobin that is consistent with FDA guidelines and current best practices including transfusion avoidance, and (3) education of patients, their families and medical providers to avoid unnecessary blood transfusion so that risk of allosensitization is minimized, eliminating or reducing one preventable barrier to successful kidney transplantation.

    The STrR measures dialysis facility performance in the avoidance of transfusions for their patients. We agree that the majority of blood transfusions occur during hospitalization. However, the results of pre-hospitalization anemia management, reflected in achieved hemoglobin concentration prior to hospitalization, are a significant contributor to transfusion risk. The decision to transfuse blood is intended to improve or correct the pathophysiologic consequences of severe anemia, defined by achieved hemoglobin or hematocrit, in a specific clinical context for each patient situation (8). Consensus guidelines in the U.S. and elsewhere defining appropriate use of blood transfusions are based, in large part, on the severity of anemia (9-11). Given the role of hemoglobin as a clinical outcome that defines anemia and as a basis for consensus recommendations regarding use of blood transfusion, it is not surprising that the presence of decreased hemoglobin concentration is a strong predictor of subsequent risk for blood transfusion in multiple settings, including chronic dialysis (12-21). For example, Gilbertson, et al. found a nearly four-fold higher risk-adjusted transfusion rate in dialysis patients with achieved hemoglobin <10 gm/dl compared to those with hemoglobin >10 gm/dl (19). In addition to achieved hemoglobin, other factors related to dialysis facility practices, including the facility's response to their patients' achieved hemoglobin, may influence blood transfusion risk in the chronic dialysis population (22, 25). In an observational study recently published by Molony, et al. (2016) comparing different facility-level titration practices among patients with hemoglobin <10 and those with hemoglobin >11, the authors found increased transfusion risk in patients with larger ESA dose reductions and smaller dose escalations, and reduced transfusion risk in patients with larger ESA dose increases and smaller dose reductions (25). The authors reported no clinically meaningful differences in all-cause or cause-specific hospitalization events across groups.

    We appreciate the offer to consider additional measures that might more comprehensively assess the anemia management care provided by dialysis facilities, and we are willing to discuss this issue with stakeholders in the future. We are also aware of the desire within the community for more granular detail with regard to quality of care, and we will look into ways to provide this level of detail. The recently released ESRD Measures Manual does provide a great amount of detail on the technical microspecifications related to the ways in which measures are calculated, and we are continuing to find ways to make the process more transparent for the community. The commenter mentioned the DFRs, and it may be that other quality initiatives, such as Dialysis Facility Compare and the DFR, offer more opportunity for this type of quality improvement data.

    Comment: Many commenters generally supported the continued inclusion of dialysis adequacy measures in the ESRD QIP, but expressed concerns with the Comprehensive Dialysis Adequacy Measure finalized in the CY 2016 ESRD PPS Final Rule, which they characterized as a "pooled" dialysis adequacy measure. Commenters argued that it is not appropriate to draw conclusions about quality for the pediatric population at a facility from one group (the larger adult population), and expressed concerns that the vast clinical differences between these two groups make it difficult to accurately assess a facility's quality. Specifically, commenters are concerned that by combining pediatric and adult PD and HD patients into a single adequacy metric, the transparency provided by pediatric and home dialysis metrics will be lost and the larger adult and HD populations will mask actual facility performance for pediatric and PD patients. Commenters believe that because these categories of patients are clinically different, pooling of the measures is inappropriate. Additionally, they stated that the MAP supported the measure when it was characterized as a composite measure and therefore did not review the issue of pooling. Furthermore, they stated that the NQF Renal Standing Committee recommended against endorsement of this measure, finding that it failed on the performance gap criterion and did not meet the threshold requirement for further discussion of factors such as validity and reliability. Commenters recommended that rather than continuing to use the Comprehensive Dialysis Adequacy Clinical Measure in the program, CMS should return to the four individual dialysis adequacy measures as separate measures or should work to develop and implement a true composite measure.

    Response: As we stated in the CY 2016 ESRD PPS Final Rule (80 FR 69055), we acknowledge that there might have been some confusion surrounding our use of the term "composite" in the title of the Comprehensive Dialysis Adequacy Clinical Measure, especially because we are now aware that the NQF uses a specific set of criteria to determine whether a measure is a composite for endorsement purposes. However, as we noted in the CY 2016 ESRD PPS Final Rule, the measure specifications presented in the CY 2016 ESRD PPS proposed rule were identical to those submitted for review by the Measure Applications Partnership, and the calculation methodology uses a pooled approach.

    The Comprehensive Dialysis Adequacy Clinical Measure does not clinically co-mingle different groups of patients. Rather, peritoneal dialysis patients are assessed based on clinical standards appropriate for those patients, while hemodialysis patients are assessed based on clinical standards appropriate for them. Similarly, adult and pediatric patients are assessed based on clinical standards that are appropriate for each of those groups. We understand that patient groups that comprise a smaller percentage of a facility's total population will have less impact on the facility's performance score for the Comprehensive Dialysis Adequacy clinical measure. The alternative, however, is to implement individual measures for each subpopulation in the Comprehensive Dialysis Adequacy clinical measure, as we had done previously. This would reintroduce a problem that limits our ability to assess dialysis adequacy: facilities large enough to provide reliable assessments using the Comprehensive Dialysis Adequacy clinical measure may nonetheless lack enough patients within the individual subpopulations to provide reliable assessments using the more granular measures of dialysis adequacy previously implemented in the ESRD QIP.
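
    The pooled scoring logic described above can be illustrated as follows: each patient-month is judged against the adequacy standard appropriate to its own subpopulation, and the facility score pools the results into a single rate. The thresholds shown (for example, a per-session spKt/V of at least 1.2 for adult HD and a weekly Kt/V of at least 1.7 for adult PD) follow commonly cited KDOQI-style targets and are included here for illustration only, not as the measure's technical specification.

```python
# Illustrative subpopulation thresholds (commonly cited KDOQI-style targets,
# not the finalized measure specification).
KTV_TARGETS = {
    ("adult",     "HD"): 1.2,   # single-session spKt/V
    ("adult",     "PD"): 1.7,   # weekly Kt/V
    ("pediatric", "HD"): 1.2,
    ("pediatric", "PD"): 1.8,
}

def pooled_adequacy_rate(patient_months):
    """Pooled measure: every eligible patient-month counts once, judged
    against the threshold for its own age group and modality."""
    met = sum(1 for pm in patient_months
              if pm["ktv"] >= KTV_TARGETS[(pm["age_group"], pm["modality"])])
    return met / len(patient_months)

# Hypothetical facility with mostly adult HD patient-months and a few pediatric PD patient-months:
months = (
    [{"age_group": "adult", "modality": "HD", "ktv": 1.4}] * 90
    + [{"age_group": "adult", "modality": "HD", "ktv": 1.1}] * 5
    + [{"age_group": "pediatric", "modality": "PD", "ktv": 1.9}] * 4
    + [{"age_group": "pediatric", "modality": "PD", "ktv": 1.6}] * 1
)
print(round(pooled_adequacy_rate(months), 3))  # 0.94
```

    In this hypothetical facility, the handful of pediatric PD patient-months would fall below an 11-patient reporting threshold for a separate pediatric measure and would not be scored at all; under the pooled measure they are included, which is the trade-off discussed above.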

    With regard to the question of whether the measure was described as "pooled" or "composite" at the Measure Applications Partnership, we do not believe that characterizing it as a composite measure at the time of MAP review changes the substance of what the MAP discussed; "pooled" was always part of the measure concept. The measure design and specifications are not substantively changed from those reviewed by the MAP.

    Finally, this measure was not endorsed due to a limited performance gap. This was also identified for some of the previously implemented Kt/V dialysis adequacy measures that had been endorsed and implemented in the ESRD QIP but exhibited limited variation in performance. Those measures retained a "reserve" endorsement status, which reflects that, while other NQF criteria are met, performance on the measure is extremely high. The Comprehensive Dialysis Adequacy measure is not eligible for this designation by NQF because it had not been previously endorsed. However, it is methodologically aligned with these "reserve" measures, leading us to conclude that it is methodologically sound. Returning to the use of the more granular measures of dialysis adequacy would not address the underlying concern reflected in this comment, which is that the performance gap is limited, as reflected by those measures' current "reserve" status. Under MIPPA, we are required to assess dialysis adequacy as part of the QIP. Because the Comprehensive Dialysis Adequacy clinical measure allows us to assess dialysis adequacy among a greater number of dialysis patients, we believe its continued implementation is appropriate.

    Comment: One commenter disagreed with CMS's assertion in the CY 2016 ESRD PPS Final Rule that including the pediatric population in a pooled measure is more beneficial than having a separate measure, arguing that the "pooled" measure does not ensure that pediatric patients are receiving adequate dialysis because the pediatric population is not evaluated separately from the adult population.

    Response: The Comprehensive Dialysis Adequacy Clinical Measure assesses pediatric patients based on clinical standards that are appropriate for the respective pediatric PD and HD patient populations. With respect to the concerns about a combined measure that incorporates both adult and pediatric populations and both modality types, we found that a significant number of facilities with fewer than 11 pediatric patients will now be assessed for dialysis adequacy under the new combined measure. Currently, these facilities are excluded from the individual pediatric-specific measures due to small facility size, which leads to their systematic exclusion from assessment on these measures because of the reporting requirements. We believe it is important that patients at these facilities also be included in the assessment of adequate dialysis. The combined measure provides a mechanism to assess dialysis adequacy with respect to these small patient subpopulations.

    Comment: One commenter argued that there are other tests that would be better indicators of dialysis adequacy than Kt/V. Specifically, the commenter recommended Beta-2 microglobulin testing or a 24-hour urine test when applicable, arguing that these tests, though more costly, would contribute more accurate information about the patient's dialysis adequacy.

    Response: Assessment of small solute clearance during dialysis using urea-based metrics has been the industry standard for decades. This standard is reflected in widely accepted standards of practice, as evidenced by KDOQI clinical guidelines and multiple NQF-endorsed quality metrics that are based on urea clearance and expressed as Kt/V. These standards are reflected in the Comprehensive Dialysis Adequacy Clinical Measure.

    Comment: One commenter noted that the evidence for the Kt/V targets for the hemodialysis population is based on three times per week dialysis, not four, and that therefore the dialysis adequacy goals may not be appropriate for patients who dialyze more than three times per week. Another commenter recommended that CMS revise the technical specifications for the Comprehensive Dialysis Adequacy Clinical Measure to include only the evidence-based Kt/V threshold because when the measure was reviewed by the NQF Renal Standing Committee, they recommended that the upper Kt/V threshold exclusions be removed from the measure's specifications due to insufficient evidence supporting the selected values.

    Response: The Kt/V measure included in the ESRD QIP does not include an upper limit for the Kt/V value; the value need only be greater than the target value for the specific population in order to be counted in the numerator. The measure is also limited to patients who dialyze three times per week. Therefore, we believe the goal is appropriate.

    Comment: Two commenters supported the continued inclusion of the Vascular Access Type Measures in the QIP but asked that CMS adjust the weights to place more emphasis on reducing catheters in order to encourage the use of fistulas and grafts. One commenter recommended that CMS give credit for the fistula measure only if the catheter has been removed because the presence of a catheter increases the risk of infection even if it is not in use.

    Response: We thank the commenter for sharing concerns relating to the presence of a catheter increasing the risk of infection, even when it is not in use. We will assess this concern and consider its implications for future measurement in the ESRD QIP through our ongoing measure development and maintenance process. We note that this issue was raised during the development of a new set of vascular access measures in 2015. Those measures are currently being reviewed by the National Quality Forum Renal Standing Committee for consensus endorsement. Once they have completed the NQF endorsement process, we will consider whether they are appropriate for inclusion in the ESRD QIP. In the interim, we continue to believe that the weights associated with the Vascular Access Type measures, and their relative weighting within the Vascular Access Type measure topic, appropriately disincentivize the use of catheters and appropriately incentivize the use of fistulae. Because the existing measures on vascular access type do not include adjustments to take into account cases where grafts are more appropriate than fistulae, we believe the existing weights and measure specifications are appropriately neutral with respect to the use of grafts.

    Comment: One commenter supported CMS's submission of changes to the NQF Renal Standing Committee for the Vascular Access Type Measures that modify the measure to address the small number of patients for whom a catheter may be the most appropriate vascular access type when life expectancy is limited. They also added that they would like the measure to include all patients with a catheter in place for the reporting period in the numerator, whether the catheter is in continuous use or not.

    Response: We thank you for your comment and note that the measures submitted to the NQF Renal Standing Committee this year are not part of the proposed rule.

    Comment: One commenter encouraged CMS to modify the depression screening measure to require that the same methodology for detecting depression be used across facilities, or at a minimum that facilities be required to report how they screened for depression.

    Response: We do not believe it is appropriate for CMS to dictate the depression screening tools that facilities use; we believe facilities are in a better position to determine which tools are appropriate for their patient populations. We appreciate the suggestion to require reporting of the screening tool used, and we will take this suggestion into consideration in the future.

    Comment: Two commenters supported the pain and depression measures but expressed concern that pain in ESRD patients may be treated with medication when emotional pain is really the cause of the patient's pain, because emotional and physical pain are so closely related. One of the commenters also raised concerns that depression needs to be clearly differentiated from fatigue or fear and that appropriate identification of these issues is important to enable dialysis facility social workers to identify which patients and families might benefit from additional social and family support.

    Response: We thank the commenters for their support and for sharing their concerns. The Pain and Depression measures assess whether facilities report rates of screening for these conditions. They are not designed to differentiate among different causes of pain or depression, nor are they designed to evaluate the intensity and completeness of facilities' screening efforts.

    Comment: One commenter supported the continued inclusion of the Pain Assessment measure in the QIP along with the modification to the measure from the CY 2016 ESRD PPS Final Rule that based a facility's score solely on the percentage of eligible patients treated in one six-month period if the facility treated no eligible patients in the other six-month period.

    Response: We thank the commenter for their support.

    Comment: One commenter argued that the Pain and Depression measures included in the ESRD QIP measure set are global measures of patient well-being that are not specific to dialysis and should be under the purview of the patient's primary care physician. They argued that nephrologists and dialysis care teams should not be held responsible for all medical conditions of dialysis patients because often the nephrologist's only option is to inform the patient's PCP and refer the patient to appropriate specialists.

    Response: We thank the commenter for sharing their concerns. These measures are designed to assess not the treatment of pain or depression but whether facilities report data on how and whether they screen their patients for these conditions, document an appropriate plan of care, and refer their patients to other healthcare providers when necessary. Nephrologists themselves are not being held responsible for these medical conditions, and we believe that dialysis facilities' close connections with patients (due to the regular need for dialysis treatment) often place them in a better position to provide such screenings and assessments than primary care providers, who typically see ESRD patients far less frequently.

    Comment: Two commenters requested an extension of the reporting deadline for the Pain Assessment Reporting Measure in CROWNWeb. They expressed that due to system downtime, they were unable to submit their data by the August 1, 2016 deadline, and they requested that CMS extend the submission deadline to September 16, 2016.

    Response: We thank commenters for their comments regarding the systems issues encountered during system downtime for CROWNWeb, and we appreciate that the fulfillment of ESRD QIP requirements is dependent upon facilities' ability to access CROWNWeb. In an effort to avoid similar issues in future years of the ESRD QIP, we are making updates to the reporting deadlines for all measures with CROWNWeb reporting deadlines beginning in PY 2019 (ICH CAHPS (76 FR 70269), Mineral Metabolism Reporting Measure (76 FR 70271), Anemia Management Reporting Measure (78 FR 72199), Pain Assessment and Follow-Up Reporting Measure (79 FR 66204), Clinical Depression Screening and Follow-Up Reporting Measure (79 FR 66200)) as well as those being finalized for PY 2020 (Serum Phosphorus Reporting Measure (81 FR 42838) and Ultrafiltration Rate Reporting Measure (81 FR 42839)). Rather than being required to submit data or attestations by a certain calendar date, facilities will now be required to submit data or attestations in CROWNWeb for the following measures before the clinical month closes in CROWNWeb: Hypercalcemia, ICH CAHPS, Mineral Metabolism/Proposed Serum Phosphorus Reporting Measure, Anemia Management Reporting Measure, Pain Assessment and Follow-Up Reporting Measure, Clinical Depression Screening and Follow-Up Reporting Measure, and Ultrafiltration Rate Reporting Measure.

    Comment: One commenter supported the anemia management reporting measure and requested that CMS require facilities to note the Hb level at the first treatment of the week before dialysis is initiated. They also requested that CMS work to establish an anemia clinical measure to protect those on dialysis.

    Response: Thank you for supporting the measure and for your recommendation.

    Comment: One commenter requested information about their specific NHSN BSI data. Specifically, their center incurred 11 cases of BSI. Of the 11 cases, 5 were access-related. Of the remaining 6, 2 were related to foot gangrene, 1 to a UTI, 2 were due to infected sacral decubiti, and 1 was due to a perforated abdomen. The facility requested clarification as to why BSI infections extend beyond access-related bacteremia.

    Response: CDC submitted several NHSN Dialysis Event measures to the National Quality Forum (NQF), an independent organization that evaluates healthcare measures. These include the NHSN BSI measure and a measure of access-related BSI (ARBSI), which is also captured in NHSN. Determining the source of a positive blood culture is inherently challenging and introduces significant subjectivity to (and opportunity for gaming of) any measure of ARBSI. NQF evaluated these measures but only endorsed the BSI measure because of its standardization and objectivity, and only that measure is included in the ESRD QIP. Because BSI includes all positive blood cultures regardless of suspected source, it is an objective and more reliable measure, relatively easily captured with electronic data alone, and well suited for use in assessment and inter-facility comparisons.

    We thank commenters for their suggestions on improving the measures included in the program and we will consider the feasibility of making some of their recommended changes in future years of the program.

    b. New Clinical Measures Beginning With the PY 2020 ESRD QIP
    i. Standardized Hospitalization Ratio (SHR) Clinical Measure
    Background

    Hospitalization rates are an important indicator of patient morbidity and quality of life. On average, dialysis patients are admitted to the hospital nearly twice a year and spend an average of 11.2 days in the hospital per year.7 Hospitalizations account for approximately 40 percent of total Medicare expenditures for ESRD patients.8 Measures of the frequency of hospitalization have the potential to help control escalating medical costs, play an important role in identifying potential problems, and help facilities provide cost-effective health care.

    7 United States Renal Data System. 2015 USRDS annual data report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD 2015.

    8 USRDS Annual Data Report (2015).

    At the end of 2013, there were 661,648 patients being dialyzed, of whom 117,162 were new (incident) ESRD patients.9 In 2013, total Medicare costs for the ESRD program were $30.9 billion, a 1.6 percent increase from 2012.10 Correspondingly, hospitalization costs for ESRD patients are very high, with Medicare costs of over $10.3 billion in 2013.

    9 USRDS Annual Data Report (2015).

    10 United States Renal Data System. 2015 USRDS annual data report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD 2015.

    Hospitalization measures have been in use in the Dialysis Facility Reports (formerly Unit-Specific Reports) since 1995. The Dialysis Facility Reports are used by the dialysis facilities and ESRD Networks for quality improvement, and by ESRD state surveyors for monitoring and surveillance. In particular, the Standardized Hospitalization Ratio (SHR) for Admissions is used in the CMS ESRD Core Survey Process, in conjunction with other standard criteria for prioritizing and selecting facilities to survey. In addition, the SHR has been found to be predictive of dialysis facility deficiency citations in the past (ESRD State Outcomes List). The SHR is also a measure that has been publicly reported since January 2013 on the Centers for Medicare and Medicaid Services (CMS) Dialysis Facility Compare Web site.

    Overview of Measure

    The SHR measure is an NQF-endorsed all-cause, risk-standardized rate of hospitalizations during a 1-year observation window. The Measure Applications Partnership supports the direction of this measure for inclusion in the ESRD QIP.

    We proposed to adopt a modified version of the SHR currently endorsed by NQF (NQF #1463). We have submitted this modified measure to NQF for endorsement consideration as part of the standard maintenance process for NQF #1463. When we previously proposed the SHR for implementation in the QIP, we received public comments urging us not to rely on CMS Medical Evidence Form 2728 as the only source of patient comorbidity data in the risk-adjustment calculations for the SHR measure. These comments correctly stated that incident comorbidity data are collected for all ESRD patients on CMS Form 2728 when patients first become eligible to receive Medicare ESRD benefits, regardless of payer. Although CMS Form 2728 is intended to inform both facilities and us whether one or more comorbid conditions are present at the start of ESRD, “there is currently no mechanism for either correcting or updating patient comorbidity data on CMS' Medical Evidence Reporting Form 2728” (76 FR 70267). Commenters were concerned that risk-adjusting the SHR solely on the basis of comorbidity data from CMS Form 2728 would create access to care problems for patients, because patients typically develop additional comorbidities after they begin chronic dialysis, and facilities would have a disincentive to treat these patients if recent comorbidities were not included in the risk-adjustment calculations (77 FR 67495 through 67496).

    In the CY 2013 ESRD PPS proposed rule, we noted that updated comorbidity data could be captured on the ESRD 72x claims form. Some public comments stated that, “reporting comorbidities on the 72x claim could be a huge administrative burden for facilities, including time associated with validating that the data they submit on these claims is valid” (77 FR 67496). In response to these comments, we stated that we would “continue to assess the best means available for risk-adjustment for both the SHR and Standardized Mortality Ratio (SMR) measures, taking both the benefits of the information and the burden to facilities into account, should we propose to adopt these measures in future rulemaking” (77 FR 67496). We proposed to adopt a Comorbidity Reporting Measure for the PY 2016 ESRD QIP. This measure would have allowed us to collect and analyze the updated comorbidity data “to develop risk adjustment methodologies for possible use in calculating the SHR and SMR measures” (78 FR 72208). We chose not to finalize the comorbidity measure “as a result of the significant concerns expressed by commenters” (78 FR 72209).

    In response to the comments on the SHR when originally proposed, and subsequently the proposed comorbidity reporting measure, we have made revisions to the SHR specifications. The modified SHR that we have proposed to adopt beginning with the PY 2020 ESRD QIP includes a risk adjustment for 210 prevalent comorbidities in addition to the incident comorbidities from the CMS Medical Evidence Form 2728. The 210 prevalent comorbidities were identified through review by a Technical Expert Panel (TEP) first convened in late 2015. The details of how the 210 comorbidities were identified are described below. We proposed to identify these prevalent comorbidities for purposes of risk adjusting the measure using available Medicare claims data. We believe this approach allows us to address commenters' concerns about increased reporting burden, while also resulting in a more robust risk-adjustment methodology.

    Our understanding is that the NQF evaluates measures on the basis of four criteria: Importance, scientific acceptability, feasibility, and usability. The validity and reliability of a measure's risk-adjustment calculations fall under the “scientific acceptability” criterion, and Measure Evaluation Criterion 2b4 specifies NQF's preferred approach for risk-adjusting outcome measures (http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=79434). Under this approach, patient comorbidities should only be included in risk-adjustment calculations if the following criteria are met: (1) Risk adjustment should be based on patient factors that influence the measured outcome and are present at the start of care; (2) measures should not be adjusted for factors related to disparities in care or the quality of care; (3) risk adjustment factors must be substantially related to the outcome being measured; and (4) risk adjustment factors should not reflect the quality of care furnished by the provider/facility being evaluated. As indicated in the “Inclusion and Exclusion Criteria” subsection below, as well as in the NQF-endorsed measure specifications, the proposed SHR clinical measure includes dialysis patients starting on day 91 of ESRD treatment. Accordingly, we believe that consistent with NQF Measure Evaluation Criterion 2b4, it is appropriate to risk adjust the proposed SHR measure on the basis of incident patient comorbidity data collected on CMS Form 2728 because these comorbidities are definitively present at the start of care (that is, on day 91 of ESRD treatment). The 210 prevalent comorbidities now included for adjustment were also selected with these criteria in mind. Specifically, in developing its recommendations, the TEP was asked to apply the same criteria that the NQF uses to assign risk-adjusters under the approach described above.

    Reflecting these criteria, the TEP evaluated a list of prevalent comorbidities derived through the following process. First, the ESRD Hierarchical Comorbidity Conditions (ESRD-HCCs) were used as a starting point to identify ICD-9 diagnosis codes that could be used for risk adjustment. The individual ICD-9 conditions that make up the respective ESRD-HCCs, with a prevalence of at least 0.1 percent in the patient population, were then selected for analysis to determine their statistical relationship to mortality or hospitalization. This step resulted in 555 diagnoses for comorbidities (out of over 3,000 ICD-9 diagnosis codes in the ESRD-HCCs). Next, an adaptive lasso variable selection method was applied to these 555 diagnoses to identify those with a statistically significant relationship to mortality and/or hospitalization (p < 0.05). This process identified 242 diagnoses. The TEP members then scored each of these diagnoses as follows:

    1. Very likely the result of dialysis facility care.

    2. Likely the result of dialysis facility care.

    3. May or may not be the result of dialysis facility care.

    4. Unlikely to be the result of dialysis facility care.

    5. Very likely not the result of dialysis facility care.

    This scoring exercise aimed to identify a set of prevalent comorbidities that are not likely the result of facility care and that are therefore potential risk adjusters for the SHR and SMR. The TEP concluded that comorbidities scored as “unlikely to be” or “very likely not the result of dialysis facility care” by at least half of TEP members (a simple majority) were appropriate for inclusion as risk adjusters. This process resulted in the selection of 210 conditions as risk adjusters. The TEP recommended incorporation of these adjusters in the risk model for the SHR, and CMS concurred.
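
    The selection logic described above reduces to a simple tally: a candidate diagnosis is retained as a risk adjuster if at least half of the TEP members scored it a 4 or a 5 on the scale above. The following is a minimal sketch of that majority-vote rule, using hypothetical diagnoses and scores rather than the actual TEP data.

```python
# Hypothetical sketch of the TEP majority-vote rule described above.
# Scores follow the 1-5 scale in the preamble; the diagnoses and votes are made up.

tep_scores = {
    "hypothetical_diagnosis_A": [4, 5, 4, 3, 5, 4],  # mostly scored 4 or 5
    "hypothetical_diagnosis_B": [2, 3, 2, 1, 3, 4],  # mostly attributable to facility care
}

def selected_as_risk_adjuster(scores, threshold=0.5):
    """Return True if at least `threshold` of TEP members scored the
    diagnosis 4 or 5 (not likely the result of facility care)."""
    votes = sum(1 for s in scores if s >= 4)
    return votes / len(scores) >= threshold

for diagnosis, scores in tep_scores.items():
    print(diagnosis, "included as risk adjuster:", selected_as_risk_adjuster(scores))
```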

    Section 1881(h)(2)(B)(i) of the Act requires that, unless the exception set forth in section 1881(h)(2)(B)(ii) of the Act applies, the measures specified for the ESRD QIP under section 1881(h)(2)(A)(iv) of the Act must have been endorsed by the entity with a contract under section 1890(a) of the Act (that entity currently is NQF). Under the exception set forth in section 1881(h)(2)(B)(ii) of the Act, in the case of a specified area or medical topic determined appropriate by the Secretary for which a feasible and practical measure has not been endorsed by the entity with a contract under section 1890(a) of the Act, the Secretary may specify a measure that is not so endorsed, so long as due consideration is given to measures that have been endorsed or adopted by a consensus organization identified by the Secretary. We have given due consideration to endorsed measures, including the endorsed SHR (NQF #1463), as well as those adopted by a consensus organization, and we proposed this measure under the authority of 1881(h)(2)(B)(ii) of the Act. Although the NQF has endorsed a hospitalization measure (NQF #1463), our analyses suggest that incorporating prevalent comorbidities results in a more robust and reliable measure of hospitalization.

    We have analyzed the measure's reliability, the results of which are provided below and in greater detail in the SHR Measure Methodology report, available at: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/ESRDQIP/061_TechnicalSpecifications.html. The Inter-Unit Reliability (IUR) was calculated for the proposed SHR using data from 2012 and a “bootstrap” approach, which uses a resampling scheme to estimate the within-facility variation that cannot be directly estimated by the analysis of variance (ANOVA). A small IUR (near 0) reveals that most of the variation of the measures between facilities is driven by random noise, indicating the measure would not be a good characterization of the differences among facilities, whereas a large IUR (near 1) indicates that most of the variation between facilities is due to the real difference between facilities.

    Overall, we found that IURs for the 1-year SHR range from 0.70 through 0.72 across the years 2010, 2011, 2012, and 2013, which indicates that roughly 70 percent of the variation in the 1-year SHR can be attributed to between-facility differences and roughly 30 percent to within-facility variation. Table 14 below shows the IURs for the 1-year SHR.

    Table 14—IUR for 1-Year SHR, Overall and by Facility Size, 2010-2013

    Facility size (number of patients)   2010 IUR   N      2011 IUR   N      2012 IUR   N      2013 IUR   N
    All                                  0.72       5407   0.71       5583   0.70       5709   0.70       5864
    Small (<=50)                         0.54       1864   0.51       1921   0.48       1977   0.46       2028
    Medium (51-87)                       0.65       1702   0.63       1785   0.58       1825   0.57       1930
    Large (>=88)                         0.81       1841   0.81       1877   0.81       1907   0.82       1906

    We also tested the SHR for measure validity, assessing its association with established quality metrics in the ESRD dialysis population. The SHR measure is correlated with the SMR for each individual year from 2010 through 2013, where Spearman's correlation coefficient ranged from 0.27 to 0.30, with all four correlations being highly significant (p < 0.0001). Also, for each year from 2011 through 2013, the SHR was correlated with the Standardized Readmission Ratio (SRR) (Spearman's rho = 0.54, 0.50, 0.48; p < 0.0001).

    In addition, the SHR is negatively correlated in each of the 4 years with the measure assessing the percentage of patients in the facility with an AV Fistula (Spearman's rho = −0.12, −0.15, −0.12, −0.13). Thus, higher values of SHR are associated with lower usage of AV Fistulas. Further, the SHR is positively correlated with catheter use >=90 days (Spearman's rho = 0.21, 0.21, 0.18, 0.16), indicating that higher values of SHR are associated with increased use of catheters. These correlations are all highly significant (p < 0.001). For each year from 2010 through 2013, the SHR is also found to be negatively correlated with the percent of hemodialysis patients with Kt/V >= 1.2, again in the direction expected (Spearman's rho = −0.11, −0.13, −0.10, −0.11; p < 0.0001). Lower SHRs are associated with a higher percentage of patients receiving an adequate dialysis dose.
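
    The validity checks described above are rank correlations between facility-level measure values. The sketch below shows the same kind of analysis with scipy, using entirely hypothetical facility-level arrays in place of the actual CMS facility files.

```python
# Hypothetical sketch of the facility-level validity analysis described above.
# The real analysis uses CMS facility-level measure files; these arrays are simulated.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_facilities = 500
shr = rng.gamma(shape=4.0, scale=0.25, size=n_facilities)          # standardized ratios near 1.0
smr = 0.3 * shr + 0.7 * rng.gamma(shape=4.0, scale=0.25, size=n_facilities)
fistula_rate = 0.65 - 0.05 * (shr - 1.0) + rng.normal(0.0, 0.05, n_facilities)

rho_smr, p_smr = spearmanr(shr, smr)
rho_fistula, p_fistula = spearmanr(shr, fistula_rate)
print(f"SHR vs SMR: rho = {rho_smr:.2f}, p = {p_smr:.4g}")
print(f"SHR vs AV fistula rate: rho = {rho_fistula:.2f}, p = {p_fistula:.4g}")
```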

    Data Sources

    Data are derived from an extensive national ESRD patient database, which is largely drawn from the CMS Consolidated Renal Operations in a Web-enabled Network (CROWN), which includes the Renal Management Information System (REMIS) and the Standard Information Management System database, as well as the Enrollment Database, Medicare dialysis and hospital payment records, the CMS Medical Evidence Form (Form CMS-2728), transplant data from the Organ Procurement and Transplant Network, the Death Notification Form (Form CMS-2746), the Nursing Home Minimum Dataset, Dialysis Facility Compare, and the Social Security Death Master File. The database is comprehensive for Medicare Parts A and B patients. Non-Medicare patients are included in all sources except for the Medicare payment records. The Standard Information Management System/CROWNWeb provides tracking by dialysis provider and treatment modality for non-Medicare patients. Information on hospitalizations and patient comorbidities is obtained from the Medicare Inpatient Claims Standard Analysis Files.

    Outcome

    The outcome for this measure is the number of inpatient hospital admissions among eligible chronic dialysis patients under the care of the dialysis facility during the 1-year reporting period.

    Measure Eligible Population

    The measure eligible population includes adult and pediatric Medicare ESRD patients who have reached day 91 of ESRD treatment and who received dialysis within the 1-year period.

    Inclusion and Exclusion Criteria

    Patients are included in the measure after the first 90 days of treatment. For each patient, we identify the dialysis provider at each point in time. Starting with day 91 of ESRD treatment, we attribute patients to facilities according to the following rules. A patient is attributed to a facility once the patient has been treated there for 60 days. When a patient transfers from one facility to another, the patient continues to be attributed to the original facility for 60 days and then is attributed to the destination facility. In particular, a patient is attributed to his or her current facility on day 91 of ESRD treatment if that facility had treated him or her for at least 60 days. If on day 91, the facility had treated a patient for fewer than 60 days, we wait until the patient reaches day 60 of treatment at that facility before attributing the patient to the facility. When a patient is not treated in a single facility for a span of 60 days (for instance, if there were two switches within 60 days of each other), we do not attribute that patient to any facility. Patients are removed from facilities 3 days prior to transplant in order to exclude the transplant hospitalization. Patients who withdrew from dialysis or recovered renal function remain assigned to their treatment facility for 60 days after withdrawal or recovery.
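
    The attribution rules above amount to a rolling 60-day test applied to each patient's treatment history, beginning on day 91 of ESRD. The sketch below is a simplified, hypothetical rendering of that logic for continuous treatment spells only; it omits the transplant, withdrawal, and recovery handling that the SHR technical specifications define in full.

```python
# Simplified, hypothetical sketch of the 60-day facility attribution rule described above.
# It handles only continuous treatment spells; transplant and withdrawal/recovery rules
# in the SHR technical specifications are omitted.
from datetime import date, timedelta

def attributed_facility(spells, esrd_start, as_of):
    """spells: list of (facility_id, spell_start, spell_end), sorted by spell_start.
    Returns the facility the patient is attributed to on `as_of`, or None."""
    if as_of < esrd_start + timedelta(days=90):
        return None                       # patients enter the measure on day 91 of ESRD
    for facility, start, end in spells:
        # attributed once treated at this facility for at least 60 days
        if start <= as_of <= end and (as_of - start).days >= 60:
            return facility
        # after a transfer, the prior facility retains attribution for 60 days
        if end < as_of <= end + timedelta(days=60) and (end - start).days >= 60:
            return facility
    return None                           # no single facility reaches the 60-day threshold

spells = [("FACILITY_A", date(2017, 1, 1), date(2017, 5, 31)),
          ("FACILITY_B", date(2017, 6, 1), date(2017, 12, 31))]
# Shortly after the transfer, attribution still rests with FACILITY_A.
print(attributed_facility(spells, esrd_start=date(2017, 1, 1), as_of=date(2017, 7, 1)))
```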

    Risk Adjustment

    The SHR measure estimates expected hospitalizations calculated from a Cox model that adjusts for patient risk factors and demographic characteristics. This model accounts for clustering of patients in particular facilities and allows for an estimate of the performance of each individual facility, while applying the risk adjustment model to obtain the expected number of hospitalizations for each facility. The model does not adjust for sociodemographic status. We understand the important role that sociodemographic status plays in the care of patients. However, we continue to have concerns about holding dialysis facilities to different standards for the outcomes of their patients of diverse sociodemographic status because we do not want to mask potential disparities or minimize incentives to improve the outcomes of disadvantaged populations. We routinely monitor the impact of sociodemographic status on facilities' results on our measures.

    NQF is currently conducting a 2-year trial of a temporary policy change under which new measures and measures undergoing maintenance review will be assessed to determine whether risk-adjusting for sociodemographic factors is appropriate, and under which sociodemographic factors may be included in the risk-adjustment approach for some performance measures. At the conclusion of the trial, NQF will determine whether to make this policy change permanent. Measure developers must submit information such as analyses and interpretations, as well as performance scores with and without sociodemographic factors in the risk adjustment model.

    Furthermore, the Office of the Assistant Secretary for Planning and Evaluation is conducting research to examine the impact of sociodemographic status on quality measures, resource use, and other measures under the Medicare program as directed by the Improving Medicare Post-Acute Care Transformation Act. We will closely examine the findings of the Assistant Secretary for Planning and Evaluation studies and consider how they apply to our quality programs at such time as they are available.

    Calculating the SHR Measure

    The SHR measure is calculated as the ratio of the number of observed hospitalizations to the number of expected hospitalizations. A ratio greater than one means the facility has more hospitalizations than would be expected for an average facility with a similar patient-mix; a ratio less than one means the facility has fewer hospitalizations than would be expected for an average facility with a similar patient-mix.
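
    Once a risk-adjusted expected count is in hand from the model described next, the final computation is the simple ratio just described. A minimal, hypothetical sketch:

```python
# Minimal, hypothetical sketch of the final SHR computation described above.
# The expected count would come from the risk-adjusted two-stage model described next;
# here it is simply assumed.

def standardized_hospitalization_ratio(observed_admissions, expected_admissions):
    """Ratio > 1: more admissions than expected for a facility with this patient mix;
    ratio < 1: fewer admissions than expected."""
    return observed_admissions / expected_admissions

print(standardized_hospitalization_ratio(observed_admissions=120, expected_admissions=100.0))  # 1.2
```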

    The SHR uses expected hospital admissions calculated from a Cox model extended to handle repeated events, with piecewise constant baseline rates. The model is fit in two stages. The stage 1 model is first fitted to the national data with piecewise constant baseline rates applied to each facility. Hospitalization rates are adjusted for patient age, sex, diabetes, duration of ESRD, nursing home status, BMI at incidence, comorbidity index at incidence, and calendar year. This model allows the baseline hospitalization rates to vary between facilities and then applies the regression coefficients equally to all facilities. This approach is robust to possible differences between facilities in the patient mix being treated. The second stage then uses a risk adjustment factor from the first stage as an offset, and the stage 2 model calculates the national baseline hospitalization rate. The predicted value from stage 1 and the baseline rate from stage 2 are then used to calculate the expected number of hospital admissions for each patient over the period during which the patient is seen to be at risk.

    The SHR is a point estimate—the best estimate of a facility's hospitalization rate based on the facility's patient-mix. For more detailed information on the calculation methodology please refer to our Web site at: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/ESRDQIP/061_TechnicalSpecifications.html.

    We sought comments on our proposal to adopt the SHR measure for the ESRD QIP beginning with PY 2020. The comments and our responses for these proposals are set forth below.

    Comment: One commenter fully supported the proposed addition of the SHR measure. Several commenters supported the fact that the SHR measure now accounts for prevalent comorbidities but stated that they could not support the incorporation of the measure into the QIP until its reliability at the proposed facility size has been demonstrated. These commenters stated that CMS's own data point to significant reliability issues with the 1-year SHR, particularly for smaller facilities, and expressed concerns that facilities will be penalized for performance due to what they termed “random chance,” noting that the reliability statistics for medium and small facilities fall significantly short of the 0.7 IUR threshold generally recommended and considered the minimum by the NQF. Specifically, commenters expressed concerns that the only facilities excluded from the measure are those with fewer than 5 patient-years at risk during the performance period. They also asked CMS to align specifications across the standardized ratio measures, pointing out that the SHR measure uses a <5 patient-years at risk threshold while the SMR and STrR use <10 patient-years at risk. One commenter requested that CMS wait to incorporate the SHR measure until its reliability at the proposed facility size has been tested and demonstrated.

    Several commenters appreciated CMS's proposal to cast the SHR measure in terms of patient-years rather than patient numbers but noted that even in the scenario of a small facility with 50 patients, for example, where all 50 contribute 12 months of data to the denominator, the data indicate that the facility's performance score would still be driven more by random chance than by actual performance. These commenters stated that smaller facilities will have even lower reliability, possibly low enough to make the measure completely unreliable. One commenter added that even for medium-sized facilities the IUR is below the 0.7 threshold and argued that it is therefore inappropriate to penalize facilities when so much of their performance on the measure is due to random chance.

    Response: The SHR was recently reviewed and recommended for endorsement by the National Quality Forum Standing Renal Committee (report available here: http://www.qualityforum.org/Renal_2015-2017.aspx) based on the reliability statistics referenced in the comment, which is consistent with our assessment that the SHR is sufficiently reliable for use in quality programs. All components of measure reliability were reviewed in detail at the NQF ESRD Standing Committee's meeting in June 2016. The reliability results reported in the NQF submission, showing overall IURs of 0.70-0.72 across all facilities, were determined acceptable by the NQF Standing Committee: the measure passed on the reliability criterion and passed on scientific acceptability overall. The evaluation and voting process adhered to consensus development guidelines, reinforcing acceptance of the reliability results.

    Given the established effect of sample size on IUR calculations, it is expected that large facilities will have higher IUR values and small facilities will have lower IUR values for any given measure. CMS and consensus-endorsement bodies consider the overall reliability in determining the acceptability of the measure. We are aware of no published literature standard requiring an IUR of 0.7 for quality measure implementation, and of no NQF standard requiring this threshold as the minimum for endorsement or implementation. Nonetheless, the SHR does achieve an overall IUR of at least 0.7.

    Comment: One commenter requested that CMS release the reliability statistics for the proposed SHR measure using the patient-years at risk construction so that additional analyses can be performed on the measure's reliability.

    Response: We thank the commenter for their request, and we have provided the reliability statistics for the proposed SHR measure below. The Inter-Unit Reliability (IUR) of a measure is defined as:

    IUR = s_b^2 / (s_b^2 + s_w^2 / n′), where s_b^2 is the between-facility variance, s_w^2 is the within-facility variance of the response for a single individual, and n′ is (approximately) the average number of patients in a facility.
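
    As a worked illustration of the formula above, using assumed variance components and an assumed average facility size rather than values estimated from the bootstrap procedure:

```python
# Worked illustration of the IUR formula above, using assumed (not actual) values.

def inter_unit_reliability(s2_between, s2_within, n_avg):
    """IUR = s_b^2 / (s_b^2 + s_w^2 / n'), where n' is (approximately) the
    average number of patients in a facility."""
    return s2_between / (s2_between + s2_within / n_avg)

# Assumed variance components and an average facility size of 70 patients.
print(round(inter_unit_reliability(s2_between=0.04, s2_within=1.2, n_avg=70), 2))  # ~0.70
```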

    Table 15 below stratifies facilities into three strata based on patient years at risk for each facility.

    Table 15—IUR for 1-Year SHR, Overall and by Facility Size (Patient Years at Risk), 2010-2013

    Facility size      2010 IUR   N      2011 IUR   N      2012 IUR   N      2013 IUR   N
    All                0.72       5407   0.71       5583   0.70       5709   0.70       5864
    <32.02             0.60       1811   0.56       1874   0.53       1884   0.53       1919
    [32.02, 58.64)     0.63       1788   0.64       1830   0.57       1891   0.56       2032
    >=58.64            0.81       1808   0.80       1879   0.81       1934   0.82       1913

    Table 16 below stratifies facilities into three strata based on the number of patients for each facility.

    Table 16—IUR for 1-Year SHR, Overall and by Facility Size (Number of Patients), 2010-2013

    Facility size      2010 IUR   N      2011 IUR   N      2012 IUR   N      2013 IUR   N
    All                0.72       5407   0.71       5583   0.70       5709   0.70       5864
    Small (<=50)       0.54       1864   0.51       1921   0.48       1977   0.46       2028
    Medium (51-87)     0.65       1702   0.63       1785   0.58       1825   0.57       1930
    Large (>=88)       0.81       1841   0.81       1877   0.81       1907   0.82       1906

    Comment: Several commenters asked CMS to update the exclusion criteria for the SHR and SRR measures such that a facility is not penalized twice for certain readmissions. As the measures are currently specified, a readmission occurring within 30 days of the index discharge will be captured as a hospitalization by the SHR and a readmission by the SRR, such that a facility is penalized twice for each such readmission. Commenters urged CMS to modify the SHR specifications to incorporate an exclusion for hospitalizations that occur within 29 days of the index discharge such that the two measures will appropriately measure two different types of events. One commenter questioned why CMS is proposing to include both the SRR and the SHR measures in the QIP concurrently.

    Additionally, commenters were concerned that the proposed SHR measure inappropriately penalizes facilities for hospitalizations over which they have little to no control, such as those resulting from foot ulcers, lupus flare-ups, myocardial infarction, and congestive heart failure. They pointed out that many providers are involved in the care of ESRD patients and that while there is a need to coordinate with other providers, it is not always feasible. Providers struggle with different EMR systems, which often do not communicate with one another, and there is often a lack of resources on either side that prevents effective communication efforts. Commenters recommended that rather than implementing an all-cause hospitalization measure, CMS should consider specific measures such as hospitalization for catheter infection, hospitalization for volume overload, and hospitalization for anemia/blood transfusions, so that facilities are only held accountable for hospitalizations for conditions directly related to the patient's dialysis treatment.

    Response: It is true that the SHR and SRR may simultaneously capture the same hospitalization event. We believe this is appropriate because it places additional emphasis on the importance of avoiding hospitalizations for dialysis patients. In addition, while the SRR and SHR are moderately correlated with one another, as might be expected, it is possible for a facility to score relatively well on one measure, and relatively poorly on the other. We also believe that the measures capture distinct aspects of the quality of care provided by a dialysis facility. While the SRR assesses the coordination of care transitions as dialysis patients are discharged from an acute care hospital into the care of a dialysis facility, the SHR evaluates the facility's overall performance in reducing hospitalizations.

    The 2007 TEP that participated in developing the SHR considered the possibility of developing cause-specific SHRs, but recommended the use of all-cause SHR measures for various reasons, including the lack of clear research indicating which causes (that is, reasons for admission) should be selected as valid indicators of poor ESRD care, and issues associated with inter-rater reliability in assessing the cause of hospitalization. The TEP reached a strong consensus that the all-cause measure would be reliable and valid and that the measure would typically be related to quality of care. We have some crude measures of cause of hospitalization, which we have used to assess the relationship between the all-cause measure and cause-specific components. These measures are useful in assessing the overall SHR measure, but we caution that the cause-specific hospitalizations have not been tested or validated at this time. All correlations are in the expected direction and highly significant (p < 0.0001). Thus, these preliminary analyses show that the overall hospitalization rate also correlates with specific causes that are commonly thought to be potentially related to poor quality of care.

    Comment: Several commenters strongly supported CMS's use of prevalent comorbidities in the risk models for the SMR and SHR, and commended CMS for moving to incorporate prevalent comorbidities in the proposed specifications for the SHR measure. One commenter encouraged CMS to review comorbidities as they relate to the pediatric ESRD population since these measures include all patients with ESRD. Commenters also requested that CMS permit the CMS Medical Evidence Reporting Form 2728 to be updated because the UB-04 and 837I forms are unable to accommodate the vast number of diagnosis codes that patients with ESRD often present with. These commenters stated that patients often develop additional comorbidities after beginning dialysis, and facilities would be disincentivized to treat patients if recently developed comorbidities were not included in the risk-adjustment calculation. Some commenters supported CMS's proposal to include a risk adjustment for 210 prevalent comorbidities in addition to the incident comorbidities from the 2728 Form. One commenter asked CMS to confirm whether providers will be able to report all conditions/diagnoses on 72X claim forms, not just those related to ESRD or the medications and treatments given. Specifically, they asked whether the Medicare Contractor and its systems would be able to accommodate this much information or whether including additional comorbidities would cause a billing issue, cause claims to pend, or cause claims to get stuck in T-status.

    Response: We thank commenters for their suggestions, and we agree wholeheartedly that prevalent comorbidity data should be collected from multiple sources. We would like to clarify that prevalent comorbidity information for the measure is obtained from all Medicare claims data from all facility settings (not limited to dialysis claims only), and CROWNWeb data, and as such, we are not limited to the comorbidities filed on 72X claim forms.

    Comment: One commenter agreed that strategies to reduce hospitalizations are an important area of focus because they will save the government money and improve patients' quality of life; however, the commenter urged CMS to modify the SHR measure to ensure that facilities are not unfairly penalized when they have had no impact on the reason for the hospitalization. They recommended that CMS develop exclusions for patients admitted before being treated at a dialysis unit, patients admitted for other comorbidities not related to kidney failure, and patients who repeatedly fail to adhere to their treatment regimen. Additionally, the commenter argued that hospitals need to be mandated to share their discharge information to ensure an optimal continuum of care.

    Response: The SHR does contain adjustments for comorbidities that were determined likely not to be the result of facility care (as determined by a 2015 Technical Expert Panel). We also exclude patients from a facility if they have not had ESRD for more than 90 days, or if they have not been receiving treatment at the facility for more than 60 days, which precludes the risk of patients being included in a facility's SHR prior to treatment. However, the measure is an all-cause hospitalization measure, reflecting hospital admissions regardless of cause. The measure's design accounts for hospitalizations that are random occurrences by assessing facilities' performance relative to one another. At present, we are aware of no means of distinguishing which hospitalizations are related to dialysis facility treatment. The SHR was originally endorsed as an all-cause measure, and this approach is consistent with other NQF-endorsed measures, such as the SRR (NQF #2496). Finally, we appreciate the suggestion to mandate that hospitals share discharge information with dialysis facilities, and we will take it under advisement.

    Comment: One commenter supported the proposed SHR measure but expressed concerns about its potential to drive unintended changes in practice. Specifically, they wanted CMS to make sure that any error in measure rates due to a small number of cases will not adversely affect facility payment.

    Response: In order to avoid allowing small numbers of cases to adversely affect facility payment, for the purposes of the SHR measure, facilities with fewer than 5 patient-years at risk during the performance period are not eligible for the measure. Additionally, a small facility adjustment will be applied to small facilities deemed eligible for the measure.

    Comment: One commenter agreed with CMS that outcome measures need to be emphasized more in pay-for-performance programs. However, they disagreed that rankings should result from nationwide “tournaments” because this format disadvantages certain providers based not on the quality of care they deliver but on the demographics of the geographic area they serve.

    Response: We agree with the commenter on the importance of including outcome measures in the ESRD QIP, which is one reason why we proposed to adopt the SHR measure. We also note that, unlike other CMS value-based purchasing programs (for example, Hospital Value-Based Purchasing) in which payment increases for some facilities are offset by payment reductions for other facilities, the ESRD QIP does not introduce a “tournament” mentality. Rather, all facilities that receive a TPS greater than the minimum TPS will avoid a payment reduction, which means that a facility's payment is not affected by the scores received by other facilities.

    Comment: Commenter requested that for the Standardized Hospitalization Ratio Clinical Measure, CMS clearly define what counts as a comorbid condition because, given the definition of “comorbid condition” in the ESRD PPS, there is confusion surrounding this term and whether it is only referring to the 4 payable “comorbid conditions” or whether it refers to all conditions outside of ESRD that ail the patient.

    Response: We encourage the commenter to refer to the SHR methodology report, which contains specific information about the comorbidities that are adjusted for in the SHR, available at: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/ESRDQIP/Downloads/SHR-Methodology-Report.pdf.

    Comment: One commenter supported limiting the denominator for the SHR measure to Medicare patients, stating that they understand the trade-off of limiting the denominator population given the availability of claims data.

    Response: We thank the commenter for their comment and for supporting this aspect of the SHR.

    Final Rule Action: After consideration of the comments received, we are finalizing the Standardized Hospitalization Ratio Clinical Measure for inclusion in the ESRD QIP measure set beginning in PY 2020.

    c. Reporting Measures Beginning With the PY 2020 ESRD QIP
    i. Serum Phosphorus Reporting Measure

    As mentioned above, for PY 2020 we proposed to adopt a new Proposed Serum Phosphorus Reporting Measure. Section 1881(h)(2)(A)(iii) of the Act states that the measures specified for the ESRD QIP shall include other measures as the Secretary specifies, including, to the extent feasible, measures of bone mineral metabolism. Abnormalities of bone mineral metabolism are exceedingly common and contribute significantly to morbidity and mortality in patients with advanced Chronic Kidney Disease (CKD). Numerous studies have associated disorders of mineral metabolism with morbidity, including fractures, cardiovascular disease, and mortality. Overt symptoms of these abnormalities often manifest in only the most extreme states of calcium-phosphorus dysregulation, which is why we believe that routine blood testing of calcium and phosphorus is necessary to detect abnormalities.

    The proposed Serum Phosphorus Reporting Measure is based on a serum phosphorus measure that is endorsed by the NQF (NQF #0255), which evaluates the extent to which facilities monitor and report patient phosphorus levels. In addition, and as explained above, the proposed Serum Phosphorus Reporting Measure is collected using CROWNWeb data and excludes patients using criteria consistent with other ESRD QIP measures. The Measure Applications Partnership expressed full support for this measure.

    For PY 2020 and future payment years, we proposed that facilities must report serum or plasma phosphorus data to CROWNWeb at least once per month for each qualifying patient. Qualifying patients for this proposed measure are defined as patients 18 years of age or older, who have a completed CMS Medical Evidence Form 2728, who have not received a transplant with a functioning graft, and who are assigned to the same facility for at least the full calendar month (for example, if a patient is admitted to a facility during the middle of the month, the facility will not be required to report for that patient for that month). We further proposed that facilities will be granted a one-month period following the calendar month to enter this data. For example, we would require a facility to report Serum Phosphorus data for January 2018 on or before February 28, 2018. Facilities would be scored on whether they successfully report the required data within the timeframe provided, not on the values reported. Technical specifications for the Serum Phosphorus reporting measure can be found at: http://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/ESRDQIP/061_TechnicalSpecifications.html.
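
    Under the proposal as described, the reporting window simply runs through the last calendar day of the month following the clinical month; under the policy finalized in section IV(E)(2)(a) above, the operative deadline is instead the date on which the clinical month closes in CROWNWeb. The following is a small, hypothetical sketch of the calendar-date version of the rule.

```python
# Hypothetical sketch of the proposed calendar-date reporting window described above.
# The finalized policy instead keys off the date the clinical month closes in CROWNWeb.
import calendar
from datetime import date

def proposed_reporting_deadline(clinical_year, clinical_month):
    """Return the last day of the month following the clinical month."""
    year = clinical_year + 1 if clinical_month == 12 else clinical_year
    month = 1 if clinical_month == 12 else clinical_month + 1
    return date(year, month, calendar.monthrange(year, month)[1])

print(proposed_reporting_deadline(2018, 1))  # 2018-02-28, matching the example in the preamble
```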

    We sought comments on this proposal. The comments and our responses are set forth below.

    Comment: One commenter specifically recommended that CMS work to create a mineral metabolism composite measure which would include Hypercalcemia, intact-PTH and Phosphorus. One commenter urged CMS to convene a TEP to identify measures on Mineral Bone Disease that drive quality outcomes and are within the facility's domain to manage because Serum Phosphorus levels remain highly dependent on patients' adherence to prescribed medications.

    Response: We thank commenters for their support. We have worked with the community in an attempt to find measures that are more appropriate for assessing bone and mineral metabolism. Unfortunately, we are not aware of any measures which are appropriate for inclusion in the ESRD QIP at this time. We will take commenters' suggestions into consideration as we continue to work on identifying more appropriate measures. We will also consider convening a TEP to identify measures on Mineral Bone Disease.

    Comment: One commenter pointed out that the deadlines listed for the Serum Phosphorus Reporting Measure are 30 days sooner than the deadlines for the other measures submitted in CROWNWeb and requested that CMS align the reporting deadlines so that all of January data is required to be submitted by March 31. It would be very confusing, they argued, to have to submit just phosphorus by February 28th but everything else by March 31.

    Response: We thank the commenter for sharing their concerns; however, we believe that the reporting deadlines are consistent across measures submitted in CROWNWeb. Facilities are granted at least a 1-month window after the end of the applicable month to report data. In section IV(E)(2)(a) above, we have finalized a new policy that, for measures reported in CROWNWeb, facilities must report data for the relevant clinical month by the date on which the clinical month closes in CROWNWeb. For example, under our old policy, February data was required to be submitted by March 31st. Under our revised policy, February data will need to be submitted by the date on which the February clinical month closes in CROWNWeb. In normal circumstances, this data would be required by March 31st, but this policy provides an exception in the event that CROWNWeb is not available on that day. The NHSN measures are an exception to this approach to reporting deadlines; in the case of those measures, facilities have more time to report because they are only required to do so on a quarterly basis.

    Comment: Commenters noted that the exclusions for the proposed Serum Phosphorus Reporting Measure and the Mineral Metabolism measure differ, and they argued that changing the exclusion criteria causes unnecessary confusion. They urged CMS to harmonize the measure specifications across measures. Specifically, though they agree with the exclusions, the previous exclusion of “in-center HD patients treated at the facility <7 times during the claim month” has been replaced with “transient dialysis patients (in unit <30 days).” Additionally, another exclusion expanding on this is provided: “Patients not at the facility for the entire month (“Admit Date” > the first day of the month and “Discharge Date” < the last day of the month).” One commenter also pointed to the exclusion from the Mineral Metabolism measure of “in-center HD patients treated at a facility fewer than 7 times during the claim month” and noted that the proposed Serum Phosphorus Reporting Measure specifies instead the exclusion of “transient dialysis patients” and of “patients not at the facility for the entire month” and requested an explanation for why these differences exist.

    Response: We thank commenters for their suggestions. However, the differences in the exclusion criteria between the Mineral Metabolism Reporting Measure and the proposed Serum Phosphorus Reporting Measure can be explained by our rationale for making this proposed replacement. As we explained above, we are proposing to replace the Mineral Metabolism Reporting Measure with the Proposed Serum Phosphorus Reporting Measure to align with NQF specifications. The Proposed Serum Phosphorus Reporting Measure is based on an NQF-endorsed measure, NQF #0255 Measurement of Serum Phosphorus Concentration, which includes the same exclusion criteria we have included. Treatments per month and time at facilities represent different methods for determining patient eligibility. We are updating the exclusion criteria to be more consistent with the other measures included in the ESRD QIP measure set. The Dialysis Adequacy clinical measures use the same exclusion criteria as the proposed Serum Phosphorus Reporting Measure and it is likely that as measures undergo review at NQF, they will also be updated for consistency. Additionally, we are proposing to use admit and discharge data from CROWNWeb as part of our intention to increasingly use CROWNWeb as the data source for calculating measures in the ESRD QIP.

    Comment: One commenter argued that the proposed serum phosphorus measure inappropriately penalizes facilities and care teams for patients' non-compliance with their medication. They stated that compliance with phosphorus binders is a challenging problem and that dialysis units are working to address it by having dietitians review the importance of compliance with their patients, as well as by distributing educational handouts and presenting webinars to patients.

    Response: We disagree that the Serum Phosphorus measure penalizes facilities for patient non-compliance with their medical regimen. Because Serum Phosphorus is a reporting measure, facilities are evaluated on the basis of whether they submit the required data, as opposed to what those data represent.

    Final Rule Action: After consideration of the comments received, we are finalizing the adoption of the Serum Phosphorus Reporting Measure into the ESRD QIP measure set beginning in PY 2020. As discussed above, this measure will replace the Mineral Metabolism Reporting Measure, will bring the exclusion criteria into alignment across the ESRD QIP measure set, and will move the program in the direction of relying increasingly on CROWNWeb, rather than claims, as a data source.

    ii. Ultrafiltration Rate Reporting Measure

    The ultrafiltration rate measures how rapidly fluid is removed during dialysis, expressed as milliliters (ml) of fluid removed per kilogram (kg) of body weight per hour of treatment. A patient's ultrafiltration rate is under the control of the dialysis facility and is monitored throughout a patient's hemodialysis session. Studies suggest that higher ultrafiltration rates are associated with higher mortality and higher odds of an “unstable” dialysis session,11 and that rapid rates of fluid removal at dialysis can precipitate events such as intradialytic hypotension, subclinical yet significantly decreased organ perfusion, and in some cases myocardial damage and heart failure.

    11 Flythe J.E., Kimmel S.E., Brunelli S.M. Rapid fluid removal during dialysis is associated with cardiovascular morbidity and mortality. Kidney International (2011) Jan; 79(2): 250-7. Flythe J.E., Curhan G.C., Brunelli S.M. Disentangling the ultrafiltration rate—mortality association: The respective roles of session length and weight gain. Clin J Am Soc Nephrol. 2013 Jul; 8(7): 1151-61. Movilli, E. et al. “Association between high ultrafiltration rates and mortality in uraemic patients on regular hemodialysis. A 5-year prospective observational multicenter study.” Nephrology Dialysis Transplantation 22.12 (2007): 3547-3552.

    We have given due consideration to endorsed measures, as well as those adopted by a consensus organization. Because there are currently no NQF-endorsed measures or measures adopted by a consensus organization that require reporting of relevant ultrafiltration data, we proposed to adopt the Ultrafiltration Rate reporting measure under the authority of section 1881(h)(2)(B)(ii) of the Act.

    The proposed Ultrafiltration Rate reporting measure is based upon the NQF-endorsed Avoidance of Utilization of High Ultrafiltration Rate (>/= 13 ml/kg/hr) (NQF #2701). This measure assesses the percentage of patient-months for patients with an ultrafiltration rate greater than or equal to 13 ml/kg/hr. The Measure Applications Partnership expressed full support for this measure.

    For PY 2020 and future payment years, we proposed that facilities must report the following data to CROWNWeb for all hemodialysis sessions during the week of the monthly Kt/V draw submitted to CROWNWeb for that clinical month, for each qualifying patient (defined below):

    • HD Kt/V Date
    • Post-Dialysis Weight
    • Pre-Dialysis Weight
    • Delivered Minutes of BUN Hemodialysis
    • Number of sessions of dialysis delivered by the dialysis unit to the patient in the reporting month

    Qualifying patients for this proposed measure are defined as patients 18 years of age or older, who have a completed CMS Medical Evidence Form 2728, who have not received a transplant with a functioning graft, who are on in-center hemodialysis, and who are assigned to the same facility for at least the full calendar month (for example, if a patient is admitted to a facility during the middle of the month, the facility will not be required to report for that patient for that month). We further proposed that facilities will be granted a 1-month period following the calendar month to enter this data. For example, we would require a facility to report ultrafiltration rates for January 2018 on or before February 28, 2018. Facilities would be scored on whether they successfully report the required data within the timeframe provided, not on the values reported. Technical specifications for the Ultrafiltration Rate reporting measure can be found at: http://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/ESRDQIP/061_TechnicalSpecifications.html.
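
    The data elements listed above are the inputs needed to express an ultrafiltration rate in ml/kg/hr. The sketch below is a simplified illustration only: it assumes that fluid removed is approximated by the pre- minus post-dialysis weight difference (1 kg taken as roughly 1,000 ml) and that post-dialysis weight is the normalizing body weight, which is one common convention. The finalized technical specifications, not this sketch, govern the actual calculation.

```python
# Simplified, hypothetical illustration of an ultrafiltration rate calculation from the
# reported data elements. Assumes 1 kg of intradialytic weight loss ~ 1,000 ml of fluid
# removed and normalizes by post-dialysis weight; the ESRD QIP technical specifications
# govern the actual measure calculation.

def ultrafiltration_rate(pre_weight_kg, post_weight_kg, delivered_minutes):
    """Return the ultrafiltration rate in ml/kg/hr."""
    fluid_removed_ml = (pre_weight_kg - post_weight_kg) * 1000.0
    hours = delivered_minutes / 60.0
    return fluid_removed_ml / (post_weight_kg * hours)

ufr = ultrafiltration_rate(pre_weight_kg=82.5, post_weight_kg=79.5, delivered_minutes=210)
print(f"{ufr:.1f} ml/kg/hr")  # about 10.8; the NQF #2701 threshold is >= 13 ml/kg/hr
```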

    We sought comments on this proposal. The comments and our responses for these proposals are set forth below.

    Comment: One commenter noted that CMS's proposal to adopt the UFR measure for the QIP seems inconsistent with the proposed payment restrictions for patients receiving dialysis more frequently than 3 times per week. The UFR measure restricts the amount of fluid that can be removed from a patient per session, which results in a medically justified need for extra dialysis sessions for some patients. The commenter argued that CMS should therefore allow for payment for extra dialysis sessions for those patients whose UFR rates exceed the proposed QIP threshold. Another commenter questioned the value of implementing UFR as a reporting measure when there is an NQF-endorsed clinical measure that, if implemented, would be more meaningful to patient outcomes. This commenter instead encouraged CMS to implement NQF #2701 as a clinical measure in the ESRD QIP.

    Several commenters expressed concern about the clinical rationale behind the UFR measure's technical specifications. Specifically, one commenter noted that the KDOQI hemodialysis adequacy clinical practice guidelines do not include a target for UFR and instead recommend minimizing UFR as much as possible to maximize hemodynamic stability and tolerability of the hemodialysis procedure. The commenter stated that the reason for this is that there is limited evidence for setting a specific target, and that one study suggested an increased risk for individuals with heart failure with a UFR between 10 and 14 ml/kg/hr but improvements for those without heart failure with a UFR in that range. The commenter therefore stated that they would support the implementation of NQF #2701 in the QIP with the knowledge that there will be challenges in the implementation process that will require efforts from facilities, staff, physicians, and patients to ensure patient participation and adherence to their dialysis prescription and fluid restrictions. The commenter stated that the KCQA measure excludes patients who dialyze for less time than the average patient, and the commenter urged CMS to include this exclusion. Commenters added that due to individualized patient responses to fluid removal, it is difficult to arrive at a single rate for UFR that is “too high” for patients. Rather than the UFR >/= 13 ml/kg/hr that CMS has proposed, commenters urged CMS to consider a measure of UFR >/= 10 ml/kg/hr. One commenter suggested that they would not recommend excluding patients who dialyze more than 3 times per week, transient patients, or patients who are new to ESRD because these patients would not be expected to be at risk of developing intradialytic hypotension when compared to the general ESRD population. Another commenter specifically recommended that CMS exclude patients with <3 hemodialysis treatments in the facility during the reporting month. One commenter also suggested that patients who are new to ESRD and in their first 90 days of treatment should not be excluded from any UFR reporting requirements because of their particularly high mortality risk. Finally, one commenter stated that they would support efforts by CMS to ensure that time on dialysis is adjusted in such a way that patients would not suffer from symptoms related to rapid ultrafiltration. The commenter stated that monitoring Kt/V alone, instead of taking into consideration the greater role of fluid management and removal, is likely to result in more problems with sickness for patients, potentially impacting quality of life, and that while correction of uremia remains important, giving limited focus to the rate of fluid removal is to the detriment of patients, leading to an increase in the risk of cardiovascular disease.

    Response: We thank the commenters for their support of the measure's implementation, despite the challenges inherent in implementation described in the comment. We recognize that successful fluid management in this setting requires a multidisciplinary approach, including patient education. Regarding the KDOQI reference, we believe that those clinical practice guidelines are relatively outdated, having been published before most of the recent literature related to the association between high UFR and patient risk. We note that both the NQF 2700 and 2701 UFR measures passed NQF review criteria for strength of evidence. Regarding the statement that the KCQA measure excludes patients who dialyze for less time than the average patient, that statement is not factually correct: NQF 2701 provides a numerator exclusion for patients dialyzing for 240 minutes or more. The average duration of a dialysis session for U.S. patients on thrice-weekly dialysis is approximately 210 minutes, with a minority of U.S. dialysis patients receiving 240 or more minutes of dialysis per session.

    The rate threshold of >/= 13 ml/kg/hr was chosen to be consistent with the NQF-endorsed threshold, and is also consistent with most of the published evidence demonstrating associations of poorer outcomes with UFRs between 10 and 15 ml/kg/hr.

    We thank the commenter for generally supporting the importance of the UFR measure. Patients new to ESRD do have increased mortality risk in general, but there is no convincing evidence to suggest that the observed risk is directly related to high UFR. In addition, fluid management generally, and the response to high UFR in particular, may include varied clinically appropriate interventions by the dialysis provider, including patient education, counseling and dietary planning by a renal dietitian, and assessment and interventions by social workers and other members of the interdisciplinary care team to address root causes of large interdialytic weight gains. Patients new to dialysis often have not received much of this education and support. Excluding patients new to dialysis increases the opportunities for dialysis providers to provide these interventions and ultimately enhances the attribution of the measure outcome to the dialysis facility. We agree that both small solute removal (for example, Kt/V) and appropriate fluid management (UFR) are important measures of overall adequate care of dialysis patients.

    Comment: Several commenters supported fluid management as an important quality improvement area, but stated that they would support the inclusion of the NQF-endorsed measure, 2701: Avoidance of Utilization of High Ultrafiltration if CMS incorporated it consistent with the specifications reviewed and endorsed by the NQF rather than with the modifications CMS has proposed. They expressed concerns about the changes that CMS proposed to the measure and asked for justification for the approach taken to the measure's exclusion criteria.

    Specifically, commenters requested that CMS retain the exclusion of facilities with 25 or fewer patients, rather than the modified “fewer than 11 patients” exclusion that CMS proposed, because commenters believe this modification would hurt small facilities. Additionally, commenters requested that CMS expressly state that reporting the number of hemodialysis sessions delivered during the Kt/V week will be required for the reporting measure, noting that CMS has not indicated this requirement and that NQF #2701 excludes patients regularly prescribed >3 sessions/week. Commenters asked for confirmation that the intent is to implement this measure as specified for those patients receiving thrice-weekly HD. Commenters also requested clarification as to whether excluding patients on dialysis <90 days at the beginning of the reporting month, an exclusion not present in the KCQA measure, was a data collection issue, or whether CMS has any additional justification for this approach.

    Response: We appreciate the comments. We first note that we have not proposed the NQF-endorsed measure #2701: Avoidance of Utilization of High Ultrafiltration, but rather a reporting measure based upon that measure. This is because the reporting measure is not a measure of clinical performance, as NQF #2701 is, but a measure that collects data relevant to the quality of care provided by dialysis facilities. The reporting measure does not limit the population to patients receiving three or fewer weekly dialysis sessions, because ultrafiltration is considered consequential for patients who dialyze more frequently as well. At a later date, CMS may consider through rulemaking the implementation of NQF #2701 as a clinical performance measure, at which point such an exclusion could be calculated, as specified, using the required data elements for each treatment in the week for which the Kt/V is reported to us.

    Comment: Several commenters supported the proposal to adopt the UFR measure but expressed concerns with CMS's definition of qualifying patients, and requested clarification regarding the exclusions listed in the technical specifications. One commenter urged CMS to clarify how dialysis facilities should report patients who may be assigned to a facility for a full calendar month but not physically present during a portion of that month due to events such as hospitalization. They suggested that CMS use the same exclusion criteria as for other measures, that is, to exclude patients who dialyze at the facility fewer than seven times during the applicable month. Another commenter requested clarification regarding the exclusion of patients on dialysis for more than 90 days at the beginning of the reporting month.

    Response: As with other measures, such as the Comprehensive Dialysis Adequacy Measure finalized for PY 2019, we define the population for this reporting measure by assignment to a facility for a full month. While a patient may spend part of that month hospitalized, the facility is still required to provide data for dialysis adequacy, and we believe it is appropriate to require reporting of ultrafiltration data for these patients as well, since the data elements are products of ongoing dialysis treatment. We do not restrict facilities from coordinating with hospitals to obtain relevant data, and we believe that such coordination is appropriate. We proposed to require providers to report the number of HD treatments received by each patient in the reporting month, which should alert us to unintended consequences of defining the population as we have.

    Comment: Several commenters urged CMS to exclude transient patients from the UFR measure, and encouraged CMS to include a standard specification for transient patients within the measure specifications. One commenter pointed out that “number of HD sessions delivered during the month” is included as a data element, but the transient exclusion is not included in the description of qualifying patients. They also pointed out that the Mineral Metabolism measure has an exclusion for patients with <7 treatments and the Serum Phosphorus measure defines transient patients as “in unit <30 days,” but the proposed UFR measure seems to lack this exclusion altogether, despite its having been present in the measure's original specifications.

    Response: As proposed, transient patients are excluded from the Ultrafiltration Rate Reporting Measure. We wish to clarify that the denominator is defined by patients who are assigned to the facility for an entire month, similar to the Serum Phosphorus measure referenced in the comments.

    Comment: Two commenters supported the proposed UFR measure but recommended that CMS review the reporting deadlines for the measure. Specifically, they suggested that rates for January 2018 be due on or before March 31, 2018, rather than February 28, 2018, to align with the reporting of other clinical values for January 2018 and to avoid confusion.

    Response: The proposed Ultrafiltration Rate Reporting Measure requires facilities to report data to CROWNWeb for all hemodialysis sessions during the week of the monthly Kt/V draw for that clinical month. We are finalizing that facilities are required to report ultrafiltration rates for January 2018 by the date on which the clinical month closes in CROWNWeb, which is approximately 1 month after the end of that month. These requirements are consistent with our newly finalized policy for other measures reported monthly in CROWNWeb. For example, the proposed Serum Phosphorus Reporting Measure requires facilities to report data monthly to CROWNWeb, and data for January 2018 must be reported by the date on which the clinical month closes in CROWNWeb.

    Comment: Several commenters supported the proposed UFR measure but encouraged CMS to further investigate whether the threshold should be set at UFR >10 ml/kg/hr or at 13 ml/kg/hr. They recommended that paying for HD hourly rather than by treatment would likely resolve concerns about overly aggressive ultrafiltration amounts and rates, as the reluctance of providers to offer longer treatments is financial, and they recommended that the UFR measure be used for home HD as well as in-center HD. Commenters also urged CMS to continue efforts to identify an improved fluid management measure for use in the ESRD QIP.

    Response: We appreciate the comments. We agree that all in the dialysis community should be pursuing ongoing enhancements of quality measures. Regarding the specific recommendation for a >10 ml/kg/hr threshold, the rate threshold of >13 ml/kg/hr was chosen to be consistent with the NQF-endorsed threshold, and is also consistent with most of the published evidence demonstrating associations of poorer outcomes with UFR between 10-15 ml/kg/hr.

    Comment: One commenter expressed concerns that the administrative and financial burden associated with the UFR measure is too much for facilities to take on and urged CMS to adopt a transition period for complying with this measure.

    Response: We thank the commenter for expressing their concerns, and we appreciate that the proposed Ultrafiltration Rate Reporting Measure does require a large number of data elements. We believe, however, that there are important clinical and quality-of-care reasons for collecting and monitoring these data which outweigh the administrative and financial burden concerns expressed by the commenter. As we indicated in the proposed rule, higher ultrafiltration rates are associated with higher mortality and higher odds of an “unstable” dialysis session. Rapid rates of fluid removal at dialysis can precipitate events such as intradialytic hypotension, subclinical yet significantly decreased organ perfusion, and, in some cases, myocardial damage and heart failure.

    Final Rule Action: After careful consideration of the comments received, we are finalizing the Ultrafiltration Rate Reporting Measure for inclusion in the ESRD QIP measure set beginning in PY 2020.

    3. Performance Period for the PY 2020 ESRD QIP

    We proposed to establish CY 2018 as the performance period for the PY 2020 ESRD QIP for all but the NHSN Healthcare Personnel Influenza Vaccination reporting measure because it is consistent with the performance periods we have historically used for these measures and accounts for seasonal variations that might affect a facility's measure score.

    We proposed that the performance period for the NHSN Healthcare Personnel Influenza Vaccination reporting measure will be from October 1, 2016 through March 31, 2017, because this period spans the length of the 2016-2017 influenza season.

    We sought comments on these proposals. The comments and our responses for these proposals are set forth below.

    Comment: Commenters generally supported setting CY 2018 as the performance period for PY 2020, but many commenters expressed concern about the performance period for the NHSN HCP Influenza Vaccination Reporting Measure. They urged CMS to align with the NHSN protocol upon which the measure is based, and with NQF's Standardized Influenza Immunization Specifications, which define the acceptable immunization period as beginning on “October 1 or when the vaccine became available,” so that facilities are not penalized for early vaccination, which is generally recommended to protect patients before the virus begins spreading through the community. One commenter suggested that the performance period should span the entire calendar year, while others recommended that the performance period run from October 1, 2017 through March 31, 2018.

    One commenter also expressed concerns with the CCN Open Date criteria for the NHSN HCP Influenza Vaccination Reporting Measure. They suggested that if the flu season spans from October 1, 2016 through March 31, 2017, then the CCN open date should be January 1, 2016 rather than January 1, 2017; similarly, for the flu season that spans from October 1, 2017 through March 31, 2018, facilities should be required to have a CCN open date of January 1, 2017. The commenter reasoned that a facility certified on December 31, 2016 is still required to report these data for the full 2016-2017 flu season even though it was not certified for the full flu season, and that such a facility should not be required to create a detailed employee log to track the vaccination status of each employee while also having to focus on opening a new facility, keeping track of new admissions, and registering for CROWNWeb and NHSN access.

    Response: We thank commenters for their support. As we stated in the CY 2015 ESRD PPS Final Rule (79 FR 66207) under the NHSN HCP Influenza Vaccination reporting measure, the performance period for the denominator (the number of healthcare personnel working in a facility) is from October 1 through March 31. However, the numerator measurement (vaccination status) includes vaccines obtained “as soon as vaccine is available.” As a result, HCP working at the facility as of October 1 who were vaccinated in September would be considered vaccinated for the performance period under this measure. Facilities are not penalized in any way for vaccinating their employees prior to the start of the performance period.

    With regard to the commenter's suggestion about our CCN Open Date policy, we accounted for this concern in the CY 2015 ESRD PPS Final Rule (79 FR 66212). We stated that facilities with a CCN open date after January 1, 2016 would not be eligible to receive a score on the NHSN Healthcare Personnel Influenza Vaccination reporting measure in the PY 2018 program. We acknowledged that it takes time for facilities to register with NHSN and become familiar with the NHSN Healthcare Personnel Safety Component Protocol.

    4. Performance Standards, Achievement Thresholds, and Benchmarks for the PY 2020 ESRD QIP

    Section 1881(h)(4)(A) of the Act provides that “the Secretary shall establish performance standards with respect to measures selected . . . for a performance period with respect to a year.” Section 1881(h)(4)(B) of the Act further provides that the “performance standards . . . shall include levels of achievement and improvement, as determined appropriate by the Secretary.” We use the performance standards to establish the minimum score a facility must achieve to avoid a Medicare payment reduction. We use achievement thresholds and benchmarks to calculate scores on the clinical measures.

    a. Performance Standards, Achievement Thresholds, and Benchmarks for the Clinical Measures in the PY 2020 ESRD QIP

    For the same reasons stated in the CY 2013 ESRD PPS final rule (77 FR 67500 through 67502), we proposed for PY 2020 to set the performance standards, achievement thresholds, and benchmarks for the clinical measures at the 50th, 15th, and 90th percentiles, respectively, of national performance in CY 2016, because this will give us enough time to calculate and assign numerical values to the proposed performance standards for the PY 2020 program prior to the beginning of the performance period. We continue to believe these standards will provide an incentive for facilities to continuously improve their performance, while not reducing incentives for facilities that score at or above the national performance rate on the clinical measures.

    We sought comments on these proposals. The comments and our responses for these proposals are set forth below.

    Comment: Two commenters supported our continued reliance on the methodology used to set the Performance Standard, Achievement Threshold, and Benchmark at the 50th, 15th and 90th percentiles respectively of national facility performance for PY 2020, as well as the continuation of our current policy for determining payment reductions, including the process for setting the minimum TPS.

    Response: We thank the commenters for their support.

    Final Rule Action: After considering the comments received, we will finalize the performance standards, achievement thresholds, and benchmarks for the clinical measures included in the ESRD QIP for PY 2020.

    b. Estimated Performance Standards, Achievement Thresholds, and Benchmarks for the Clinical Measures Proposed for the PY 2020 ESRD QIP

    At this time, we do not have the necessary data to assign numerical values to the proposed performance standards for the clinical measures, because we do not yet have data from CY 2016 or the first portion of CY 2017. We will publish values for the clinical measures, using data from CY 2016 and the first portion of CY 2017, in the CY 2018 ESRD PPS final rule.

    c. Performance Standards for the PY 2020 Reporting Measures

    In the CY 2014 ESRD PPS final rule, we finalized performance standards for the Anemia Management and Mineral Metabolism reporting measures (78 FR 72213). We did not propose any changes to these policies for the PY 2020 ESRD QIP.

    In the CY 2015 ESRD PPS final rule, we finalized performance standards for the Screening for Clinical Depression and Follow-Up, Pain Assessment and Follow-Up, and NHSN Healthcare Personnel Influenza Vaccination reporting measures (79 FR 66209). We did not propose any changes to these policies.

    For the proposed Ultrafiltration Rate Reporting Measure, we proposed to set the performance standard as successfully reporting the following data to CROWNWeb for all hemodialysis sessions during the week of the monthly Kt/V draw for that clinical month, for each qualifying patient: (1) HD Kt/V Date; (2) Post-Dialysis Weight; (3) Pre-Dialysis Weight; (4) Delivered Minutes of BUN Hemodialysis; and (5) Number of sessions of dialysis delivered by the dialysis unit to the patient in the reporting month. This information must be submitted for each qualifying patient in CROWNWeb on a monthly basis, for each month of the reporting period. For the proposed Serum Phosphorus Reporting Measure, we proposed to set the performance standard as successfully reporting a serum phosphorus value for each qualifying patient in CROWNWeb on a monthly basis, for each month of the reporting period. For the proposed NHSN Dialysis Event Reporting Measure, we proposed to set the performance standard as successfully reporting 12 months of data from CY 2018.
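    A minimal Python sketch of the five data elements that make up this performance standard follows. The UfrReportingRecord container, the compute_ufr_ml_per_kg_per_hr helper, and all field names are hypothetical and do not correspond to actual CROWNWeb field identifiers; the UFR formula shown (ultrafiltration volume divided by post-dialysis weight and delivered hours) is one common formulation, and the weight used in the denominator is specified by the measure steward rather than by this sketch.

        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class UfrReportingRecord:
            # Hypothetical container for the five required data elements.
            kt_v_date: date                    # (1) HD Kt/V Date
            post_dialysis_weight_kg: float     # (2) Post-Dialysis Weight
            pre_dialysis_weight_kg: float      # (3) Pre-Dialysis Weight
            delivered_minutes_bun_hd: int      # (4) Delivered Minutes of BUN Hemodialysis
            sessions_delivered_in_month: int   # (5) Number of HD sessions delivered in the reporting month

        def compute_ufr_ml_per_kg_per_hr(record: UfrReportingRecord) -> float:
            # Ultrafiltration volume in mL (1 kg of fluid removed treated as 1,000 mL).
            uf_volume_ml = (record.pre_dialysis_weight_kg - record.post_dialysis_weight_kg) * 1000
            hours = record.delivered_minutes_bun_hd / 60
            return uf_volume_ml / (record.post_dialysis_weight_kg * hours)

        # Hypothetical session: 3.0 kg removed over 210 minutes at a 75 kg post-dialysis weight
        # yields roughly 11.4 ml/kg/hr, below the 13 ml/kg/hr threshold discussed above.
        example = UfrReportingRecord(date(2018, 1, 8), 75.0, 78.0, 210, 13)
        print(round(compute_ufr_ml_per_kg_per_hr(example), 1))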

    We sought comments on these proposals. We did not receive any comments on these proposed policies for setting Performance Standards for the PY 2020 Reporting Measures.

    Final Rule Action: We are finalizing the performance standards for the Reporting Measures as proposed for the PY 2020 ESRD QIP.

    5. Scoring the PY 2020 ESRD QIP

    a. Scoring Facility Performance on Clinical Measures Based on Achievement

    In the CY 2014 ESRD PPS Final Rule, we finalized a policy for scoring performance on clinical measures based on achievement (78 FR 72215). Under this methodology, facilities receive points along an achievement range based on their performance during the performance period for each measure, which we define as a scale between the achievement threshold and the benchmark. In determining a facility's achievement score for each clinical measure under the PY 2020 ESRD QIP, we proposed to continue using this methodology for all clinical measures except the ICH CAHPS clinical measure. The facility's achievement score would be calculated by comparing its performance on the measure during CY 2018 (the proposed performance period) to the achievement threshold and benchmark (the 15th and 90th percentiles of national performance on the measure in CY 2016).
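    The achievement range described above can be illustrated with a short Python sketch. The achievement_points function name, the linear interpolation, and the specific constants (a 0-10 point scale on which performance below the achievement threshold earns 0 points and performance at or above the benchmark earns 10) are assumptions patterned on CMS value-based purchasing programs generally, not a restatement of the finalized ESRD QIP scoring formula.

        def achievement_points(rate, achievement_threshold, benchmark, higher_is_better=True):
            # Orient the scale so that larger values are always better.
            if not higher_is_better:
                rate, achievement_threshold, benchmark = -rate, -achievement_threshold, -benchmark
            if rate < achievement_threshold:
                return 0          # below the 15th percentile of national performance
            if rate >= benchmark:
                return 10         # at or above the 90th percentile of national performance
            # Linear interpolation along the achievement range (assumed scaling).
            fraction = (rate - achievement_threshold) / (benchmark - achievement_threshold)
            return round(9 * fraction + 0.5)

        # Hypothetical "lower is better" measure: a catheter rate of 14% against a
        # 20% achievement threshold and an 8% benchmark scores 5 points under this scaling.
        print(achievement_points(0.14, 0.20, 0.08, higher_is_better=False))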

    We sought comment on this proposal. We did not receive any comments on this proposal.

    Final Rule Action: We are finalizing our policy for scoring facility performance on clinical measures based on achievement as proposed.

    b. Scoring Facility Performance on Clinical Measures Based on Improvement

    In the CY 2014 ESRD PPS Final Rule, we finalized a policy for scoring performance on clinical measures based on improvement (78 FR 72215 through 72216). In determining a facility's improvement score for each measure under the PY 2020 ESRD QIP, we proposed to continue using this methodology for all clinical measures except the ICH CAHPS clinical measure. Under this methodology, facilities receive points along an improvement range, defined as a scale running between the improvement threshold and the benchmark. We proposed to define the improvement threshold as the facility's performance on the measure during CY 2017. The facility's improvement score would be calculated by comparing its performance on the measure during CY 2018 (the proposed performance period) to the improvement threshold and benchmark.
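    As with achievement scoring, a hedged sketch can illustrate the improvement range described above. The improvement_points function name, the 0-10 scaling, and the half-point deduction are assumptions made for illustration rather than the finalized formula; the key point is that the range runs from the facility's own CY 2017 rate (the improvement threshold) up to the benchmark. In many CMS quality programs the final measure score is the higher of the achievement and improvement scores; this document describes that approach explicitly for the ICH CAHPS clinical measure below.

        def improvement_points(rate, improvement_threshold, benchmark, higher_is_better=True):
            # Orient the scale so that larger values are always better.
            if not higher_is_better:
                rate, improvement_threshold, benchmark = -rate, -improvement_threshold, -benchmark
            if rate <= improvement_threshold:
                return 0          # no improvement over the facility's own prior-year rate
            if rate >= benchmark:
                return 10
            fraction = (rate - improvement_threshold) / (benchmark - improvement_threshold)
            return max(0, round(10 * fraction - 0.5))

        # Hypothetical fistula rate improving from 55% (CY 2017) toward a 78% benchmark
        # earns about 4 improvement points under this assumed scaling.
        print(improvement_points(0.65, 0.55, 0.78))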

    We sought comment on this proposal. The comments and our responses are set forth below.

    Comment: One commenter expressed concerns that the QIP's scoring and assessment methodology is so complex that facilities are unable to evaluate their progress in real time and take action during the performance period to strengthen their performance. They urged CMS to consider ways of simplifying the scoring methodology or to develop a secure Web site that can provide each facility with an ongoing scorecard. Another commenter asked that CMS clarify whether a facility needs a score on either measure in the Safety Measure Domain in order to receive a TPS for PY 2020.

    Response: We thank the commenters and will consider ongoing scorecards and facility-level feedback on a quarterly or semiannual basis in future rulemaking. Under our finalized policy for both PY 2019 and PY 2020, facilities need to have a score on at least one measure in the Clinical Measure Domain and at least one measure in the Reporting Measure Domain to receive a TPS.

    Final Rule Action: After considering the comment received, we will finalize our policy for scoring facility performance on clinical measures based on improvement as proposed.

    c. Scoring the ICH CAHPS Clinical Measure

    In the CY 2015 ESRD PPS final rule, we finalized a policy for scoring performance on the ICH CAHPS clinical measure based on both achievement and improvement (79 FR 66209 through 66210). We did not propose any changes to this policy. Under this methodology, facilities will receive an achievement score and an improvement score for each of the three composite measures and three global ratings in the ICH CAHPS survey instrument. A facility's ICH CAHPS score will be based on the higher of the facility's achievement or improvement score for each of the composite measures and global ratings, and the resulting scores on each of the composite measures and global ratings will be averaged together to yield an overall score on the ICH CAHPS clinical measure. For PY 2020, the facility's achievement score would be calculated by comparing where its performance on each of the three composite measures and three global ratings during CY 2018 falls relative to the achievement threshold and benchmark for that measure and rating based on CY 2016 data. The facility's improvement score would be calculated by comparing its performance on each of the three composite measures and three global ratings during CY 2018 to its performance rates on these items during CY 2017.
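    The ICH CAHPS roll-up described above lends itself to a short sketch: take the better of achievement and improvement for each of the six items (three composites and three global ratings) and average the results. The ich_cahps_measure_score function name, the 0-10 item scale, the input ordering, and the example values are assumptions for illustration only.

        def ich_cahps_measure_score(achievement_scores, improvement_scores):
            # One achievement score and one improvement score per item: three composite
            # measures followed by three global ratings (assumed ordering and 0-10 scale).
            assert len(achievement_scores) == len(improvement_scores) == 6
            item_scores = [max(a, i) for a, i in zip(achievement_scores, improvement_scores)]
            return sum(item_scores) / len(item_scores)

        # Hypothetical item scores; the overall ICH CAHPS measure score is about 7.3.
        print(ich_cahps_measure_score([7, 5, 8, 6, 9, 4], [6, 7, 8, 5, 10, 6]))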

    We sought comments on this proposal.

    Final Rule Action: We did not receive comments on our proposal for scoring the ICH CAHPS Clinical Measure. Accordingly, we will finalize our policy for scoring the ICH CAHPS Clinical Measure as proposed.

    d. Calculating Facility Performance on Reporting Measures

    In the CY 2013 ESRD PPS final rule, we finalized policies for scoring performance on the Anemia Management and Mineral Metabolism reporting measures in the ESRD QIP (77 FR 67506). We did not propose any changes to these policies for the PY 2020 ESRD QIP.

    In the CY 2015 ESRD PPS final rule, we finalized policies for scoring performance on the Clinical Depression Screening and Follow-Up, Pain Assessment and Follow-Up, and NHSN Healthcare Personnel Influenza Vaccination reporting measures (79 FR 66210 through 66211). We did not propose any changes to these policies.

    With respect to the proposed Ultrafiltration Rate and Serum Phosphorus reporting measures, we proposed to score facilities with a CMS Certification Number (CCN) Open Date before July 1, 2018 using the same formula previously finalized for the Mineral Metabolism and Anemia Management reporting measures (77 FR 67506):

    ER04NO16.310

    As with the Anemia Management and Mineral Metabolism reporting measures, we would round the result of this formula (with half rounded up) to generate a measure score from 0-10.
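    The formula graphic (ER04NO16.310) is not reproduced in the text version of this document. Below is a hedged Python sketch of the described scoring, assuming the previously finalized form in which the share of eligible months with complete reporting is scaled to a 0-10 point score; the half-up rounding is taken from the sentence above, while the ratio itself is an assumption about the formula contained in the graphic.

        from decimal import Decimal, ROUND_HALF_UP

        def reporting_measure_score(months_reported, eligible_months):
            # Assumed form: 10 x (months with complete reporting / eligible months), rounded
            # with halves rounded up (Python's built-in round() rounds halves to the nearest
            # even integer, so Decimal is used instead).
            if eligible_months == 0:
                return None  # the facility is not eligible to be scored
            raw = Decimal(10 * months_reported) / Decimal(eligible_months)
            return int(raw.quantize(Decimal("1"), rounding=ROUND_HALF_UP))

        # Hypothetical facility reporting completely in 9 of 12 eligible months:
        # 10 x 9/12 = 7.5, which rounds up to a score of 8.
        print(reporting_measure_score(9, 12))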

    We sought comments on these proposals.

    Final Rule Action: We did not receive any comments on our proposals for calculating facility performance on reporting measures. Accordingly, we will finalize these policies as proposed.

    6. Weighting the Clinical Measure Domain, and Weighting the Total Performance Score

    a. Weighting of the Clinical Measure Domain for PY 2020

    In light of the proposed removal of the Safety Subdomain from the Clinical Measure Domain, our policy priorities for quality improvement for patients with ESRD discussed in section IV.C.6 of the proposed rule (81 FR 42826), and the criteria finalized in the CY 2015 ESRD PPS Final Rule used to assign weights to measures in a facility's Clinical Measure Domain score (79 FR 66214 through 66216), we proposed to weight the measures within the subdomains of the proposed Clinical Measure Domain as follows (see Table 17):

    Table 17—Proposed Clinical Measure Domain Weighting for the PY 2020 ESRD QIP

    Measures/measure topics by subdomain | Measure weight in the clinical domain score (proposed for PY 2020) (%) | Measure weight as percent of TPS (proposed for PY 2020) (%)
    Patient and Family Engagement/Care Coordination Subdomain | 40 |
      ICH CAHPS measure | 25 | 20
      SRR Measure | 15 | 12
    Clinical Care Subdomain | 60 |
      STrR measure | 11 | 8.8
      Dialysis Adequacy measure | 18 | 18.8
      Vascular Access Type measure topic | 18 | 18.8
      Hypercalcemia measure | 2 | 1.6
      (Proposed) SHR measure | 11 | 8.8

    Note: We proposed that the Clinical Domain make up 80 percent of a facility's Total Performance Score (TPS) for PY 2020. The percentages listed in this Table represent the measure weight as a percent of the Clinical Domain Score.

    Specifically, we proposed to reduce the weight of the Safety Measure Domain in light of validation concerns discussed above in the context of the proposal to reintroduce the NHSN Dialysis Event Reporting Measure (see Section (IV)(1)(a) above). For PY 2020 we proposed to reduce the weight of the Safety Measure Domain from 15 percent to 10 percent. In future years of the program, we stated that we may consider increasing the weight of the NHSN BSI Clinical Measure and/or the NHSN BSI Measure Topic once we see that facilities are completely and accurately reporting to NHSN and once we have analyzed the data from the proposed increased NHSN Data Validation Study. In order to accommodate the reduction of the weight of the Safety Measure Domain, we proposed to increase the weight of the Clinical Measure Domain to 80 percent, and to keep the weight of the Reporting Measure Domain at 10 percent.

    We also proposed to weight the proposed SHR Clinical Measure at 11 percent of a facility's Clinical Measure Domain score. Facilities have had significant experience with SHR via public reporting on Dialysis Facility Compare, and reducing hospitalizations is a top policy goal for CMS. Further, increasing the emphasis on outcome measures is an additional policy goal of CMS, for reasons discussed above. For these reasons, we believe it is appropriate to weight the proposed SHR Clinical Measure at 11 percent of a facility's Clinical Measure Domain score.

    Next, we proposed to decrease the weight of the Hypercalcemia clinical measure within the Clinical Care Subdomain to 2 percent of a facility's clinical domain score. We proposed to do so at this time to accommodate the weight assigned to the proposed SHR measure. The Hypercalcemia clinical measure was recently re-endorsed at NQF with a reserved status because there was very little room for improvement and facility scores on the measure are very high overall. Although this is true, the Hypercalcemia clinical measure does not meet the criterion for being topped out in the ESRD QIP (as described in section IV.D. of the proposed rule (81 FR 42833)). Therefore, despite its limited value for assessing facility performance, we decided not to propose to remove the Hypercalcemia clinical measure from the ESRD QIP measure set, but rather to significantly reduce its weight in the clinical subdomain because it provides some indication of the quality of care furnished to patients by facilities.

    Finally, to accommodate the proposed addition of the SHR Clinical Measure beginning in PY 2020 and the proposed reduction in the weight of the Hypercalcemia measure, we proposed to reduce the weights of the following measures within the Clinical Measure Domain by 1 percentage point each from what we proposed for PY 2019: ICH CAHPS, SRR, STrR, Dialysis Adequacy, and Vascular Access Type. As illustrated in Table 17 above (Table 10 of the proposed rule), these minor reductions in the weights of these measures in the Clinical Measure Domain would be counterbalanced by the increase in the overall percentage of the TPS that we proposed to assign to the Clinical Measure Domain, such that the proposed weights for these measures as a percentage of the TPS would remain as constant as possible from PY 2019 to PY 2020. Accordingly, this proposal would generally maintain the percentage of the TPS assigned to these measures.

    We sought comments on these proposals. The comments and our responses are set forth below.

    Comment: One commenter pointed out an error in the VAT measure weight as a percent of the TPS for PY 2020 in Table 10 of the proposed rule (81 FR 42841), reproduced as Table 17 above. Specifically, the table in the proposed rule indicated that the VAT measure topic would be weighted as 18.8 percent of the TPS in PY 2020; however, both Table 10 and Figure 6 indicated that the combined VAT measure would be weighted as 18.0 percent of the Clinical Measure Domain. The commenter's analysis found that the 18.0 percent combined VAT weight and the 80 percent Clinical Domain weight result in a combined VAT measure that would comprise 14.4 percent of the TPS rather than 18.8 percent.

    Response: We thank the commenter for bringing this calculation error to our attention. We acknowledge that our calculation was incorrect. The column showing the weights within the Clinical Measure Domain was correct, but when we calculated the measure weights as a percent of the TPS, we miscalculated the weight of the VAT measure. The column showing measure weights as a percent of the TPS is provided for illustrative purposes only. We note, however, that we are not finalizing the weights as proposed. Section IV.E.5.b of this rule describes the policy and weighting that we are finalizing for PY 2020.

    Comment: One commenter requested that CMS assign less weight to the ICH CAHPS measure because of the subjective nature of the survey. They argued that administering it twice a year may become bothersome to patients, thus leading to less honest and less valid responses, and fewer responses in general.

    Response: We believe that the subjective nature of the ICH CAHPS survey should not factor into the weight assigned to the measure within the Clinical Measure Domain. Response to the ICH CAHPS Survey is completely voluntary. Patients may refuse to respond if they find the survey bothersome or if they do not wish to respond for any other reason. The survey data reflects the reported experiences of the respondents. The fact that the data may be subjective does not mean that it is incorrect. Instead the survey reflects the patients' perspectives on their care, and we continue to believe that this measure is vitally important because it is the only measure in the ESRD QIP which measures the patients' experience of the care they receive.

    Final Rule Action: In response to the comments received, we are not finalizing the weighting as proposed. Instead, we are finalizing a revised weighting structure. Specifically, for PY 2020 we are finalizing that the Clinical Measure Domain will continue to comprise 75 percent of the TPS, the Safety Measure Domain will comprise 15 percent of the TPS and the Reporting Measure Domain will comprise 10 percent of the TPS. Table 18 below shows the weights being finalized for PY 2020.

    b. Weighting the Total Performance Score

    We continue to believe that while the reporting measures are valuable, the clinical measures evaluate actual patient care and therefore justify a higher combined weight (78 FR 72217). We proposed to reduce the weight of the Safety Measure Domain from 15 percent of a facility's TPS for PY 2019 to 10 percent of a facility's TPS for PY 2020. We proposed to reduce the weight of the Safety Measure Domain gradually, over the course of 2 years, because we believe it is important to reduce the weight of that Domain in light of validation concerns while also maintaining as much consistency as possible in the QIP scoring methodology from year to year.

    We proposed that for PY 2020, to be eligible to receive a TPS, a facility must be eligible to be scored on at least one measure in the Clinical Measure Domain and at least one measure in the Reporting Measure Domain.

    We sought comments on these proposals. The comments and our responses for these proposals are set forth below.

    Comment: One commenter did not support CMS's proposed modifications to the weighting of the Safety Measure Domain and Clinical Measure Domain for PY 2020 because they do not believe the addition of the proposed Safety Measure Domain is necessary. They also argued that CMS is proposing too many measures that focus little attention on patient outcomes and recommended that CMS evaluate the existing and proposed measures for PY 2020 and remove those that are less relevant to quality of care.

    Response: We thank the commenter for their recommendations. We are not finalizing the weighting of the Safety Measure Domain and Clinical Measure Domain as proposed; instead, we are finalizing a revised weighting structure. We believe it is crucial to emphasize the importance of the NHSN BSI Measure Topic so that facilities prioritize their efforts to accurately and completely report their Dialysis Event data to NHSN while at the same time mounting significant efforts to reduce bloodstream infections. Accordingly, we are maintaining the Safety Measure Domain at 15 percent of the TPS for PY 2020. We have prioritized outcome measures for inclusion in the ESRD QIP, and we will continue working to identify appropriate outcome measures, specified for use in dialysis facilities, which we believe will contribute to improved patient outcomes. We have clearly identified criteria for use when determining which measures should be removed from the program. At this time, we are not proposing to remove any measures from the ESRD QIP's measure set.

    Comment: A commenter recommended that CMS maintain the Safety Measure Domain at 15 percent of the TPS for PY 2020, arguing that the reintroduction of the NHSN Dialysis Event Reporting Measure, along with the more robust data validation methodology, compensates for any concerns regarding the validity of the NHSN BSI Clinical Measure. The commenter argued that lowering the weight of the Safety Measure Domain would disincentivize reporting to NHSN.

    Response: We thank the commenter for their recommendation, and we agree that for PY 2020 we should maintain the Safety Measure Domain at 15 percent of the TPS rather than reducing the weight of that Domain to 10 percent. Doing so ensures that facilities continue to be appropriately incentivized both for reporting to NHSN, through the NHSN Dialysis Event Reporting Measure, and for continued efforts to reduce infections among their patients, through the NHSN BSI Clinical Measure. By maintaining the Safety Measure Domain at a higher percentage of the TPS, we are ensuring that facilities continue to report complete and accurate data beyond PY 2019. Therefore, we have provided updated weights for the Clinical Measure Domain for PY 2020 in Table 18.

    Table 18—Finalized Clinical Measure Domain Weighting for the PY 2020 ESRD QIP

    Measures/measure topics by subdomain | Measure weight in the clinical domain score (proposed for PY 2020) (%) | Measure weight as percent of TPS (updated)
    Patient and Family Engagement/Care Coordination Subdomain | 40 |
      ICH CAHPS measure | 25 | 18.75
      SRR Measure | 15 | 11.25
    Clinical Care Subdomain | 60 |
      STrR measure | 11 | 8.25
      Dialysis Adequacy measure | 18 | 13.5
      Vascular Access Type measure topic | 18 | 13.5
      Hypercalcemia measure | 2 | 1.5
      SHR measure | 11 | 8.25

    Note: We initially proposed that the Clinical Domain make up 80 percent of a facility's TPS for PY 2020. We are finalizing a different weighting structure: For PY 2020 we are maintaining the Clinical Domain at 75 percent of a facility's TPS. The percentages listed in this Table represent the measure weight as a percent of the Clinical Domain Score.

    Final Rule Action: After consideration of the comments received, we are not finalizing these policies as proposed. Instead, as discussed above, we are finalizing the weighting structure shown in Table 18 above. We are going to maintain the Safety Measure Domain at 15 percent of a facility's TPS for PY 2020. Accordingly, the measure weights in the Clinical Measure Domain Score have not changed but the Measure Weights as a Percent of TPS have changed as shown. We believe this change to our proposal will ensure that facilities continue to be appropriately incentivized both for reporting to NHSN and for continued efforts to reduce infections among their patients.
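    To make the finalized domain weights concrete, the short sketch below combines hypothetical domain scores using the 75/15/10 split finalized in this section. The total_performance_score function name, the assumption that each domain score sits on a 0-100 scale, and the example values are illustrative only.

        def total_performance_score(clinical_domain, safety_domain, reporting_domain):
            # Finalized PY 2020 domain weights: Clinical 75%, Safety 15%, Reporting 10%.
            # Domain scores are assumed here to be expressed on a 0-100 scale.
            return 0.75 * clinical_domain + 0.15 * safety_domain + 0.10 * reporting_domain

        # Hypothetical facility: clinical 80, safety 70, reporting 100 yields a TPS of 80.5.
        print(total_performance_score(80, 70, 100))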

    7. Example of the PY 2020 ESRD QIP Scoring Methodology

    In this section, we provide an example to illustrate the scoring methodology for PY 2020. Figures 6-9 illustrate how to calculate the Clinical Measure Domain score, the Reporting Measure Domain score, the Safety Measure Domain score, and the TPS. Figure 10 illustrates the full scoring methodology for PY 2020. Note that for this example, Facility A, a hypothetical facility, has performed very well. Figure 6 illustrates the methodology used to calculate the Clinical Measure Domain score for Facility A.

    ER04NO16.311

    Figure 7 illustrates the general methodology for calculating the Reporting Measure Domain score for Facility A.

    ER04NO16.312

    Figure 8 illustrates the methodology used for calculating the Safety Measure Domain score for Facility A.

    ER04NO16.313

    Figure 9 illustrates the methodology used to calculate the TPS for Facility A.

    ER04NO16.314

    Figure 10 illustrates the full scoring methodology for PY 2020.

    ER04NO16.315

    We received comments on the Figures provided in this example. The comments and our responses are set forth below.

    Comment: Two commenters identified calculation errors in Figure 7 of the proposed rule (81 FR 42843) and requested clarification. Specifically, commenters pointed out that each of the six measures in the Reporting Domain should be weighted as 16.67 percent rather than 14 percent, as presented in Figure 7 of the CY 2017 ESRD PPS proposed rule.

    Response: We thank the commenters for bringing this calculation error to our attention. Figure 11 below has been updated to correct the calculation errors which appeared in the proposed rule.

    ER04NO16.316

    Additionally, in light of the weighting structure we are finalizing for PY 2020, we have created an updated figure, Figure 12 below, showing the weights we are finalizing. For PY 2020, the Safety Measure Domain will comprise 15 percent of the TPS, the Clinical Measure Domain will make up 75 percent of the TPS and the Reporting Measure Domain will make up 10 percent of the TPS.

    ER04NO16.317

    8. Minimum Data for Scoring Measures for the PY 2020 ESRD QIP

    Our policy is to score facilities on clinical and reporting measures for which they have a minimum number of qualifying patients during the performance period. With the exception of the Standardized Readmission Ratio, Standardized Hospitalization Ratio, Standardized Transfusion Ratio, and ICH CAHPS clinical measures, a facility must treat at least 11 qualifying cases during the performance period in order to be scored on a clinical or reporting measure. A facility must have at least 11 index discharges to be eligible to receive a score on the SRR clinical measure, 10 patient-years at risk to be eligible to receive a score on the STrR clinical measure, and 5 patient-years at risk to be eligible to receive a score on the SHR clinical measure. In order to receive a score on the ICH CAHPS clinical measure, a facility must have treated at least 30 survey-eligible patients during the eligibility period and receive 30 completed surveys during the performance period. We did not propose to change these minimum data policies for the measures that we proposed to continue including in the PY 2020 ESRD QIP measure set. For the proposed Ultrafiltration Rate and Serum Phosphorus Reporting Measures, we also proposed that facilities with at least 11 qualifying patients will receive a score on the measure. We believe that setting the case minimum at 11 for these reporting measures strikes the appropriate balance between the need to maximize data collection and the need to not unduly burden or penalize small facilities. We further believe that setting the case minimum at 11 is appropriate because this aligns with the case minimum policy for the vast majority of the reporting measures in the ESRD QIP.

    Under our current policy, we begin counting the number of months for which a facility is open on the first day of the month after the facility's CMS Certification Number (CCN) Open Date. Only facilities with a CCN Open Date before July 1, 2018 would be eligible to be scored on the Anemia Management, Mineral Metabolism, Pain Assessment and Follow-Up, and Clinical Depression Screening and Follow-Up reporting measures, and only facilities with a CCN Open Date before January 1, 2018 would be eligible to be scored on the NHSN Bloodstream Infection Clinical Measure, ICH CAHPS Clinical Measure, and NHSN Healthcare Personnel Influenza Vaccination reporting measure. We further proposed that, consistent with our CCN Open Date policy for other reporting measures, facilities with a CCN Open Date after July 1, 2018 would not be eligible to receive a score on the Ultrafiltration Rate Reporting Measure because of the difficulties these facilities may face in meeting the requirements of this measure due to the short period of time left in the performance period.

    Table 19 displays the proposed patient minimum requirements for each of the measures, as well as the proposed CCN Open Dates after which a facility would not be eligible to receive a score on a reporting measure.

    Table 19—Proposed Minimum Data Requirements for the PY 2020 ESRD QIP

    Measure | Minimum data requirements | CCN open date | Small facility adjuster
    Dialysis Adequacy (Clinical) | 11 qualifying patients | N/A | 11-25 qualifying patients.
    Vascular Access Type: Catheter (Clinical) | 11 qualifying patients | N/A | 11-25 qualifying patients.
    Vascular Access Type: Fistula (Clinical) | 11 qualifying patients | N/A | 11-25 qualifying patients.
    Hypercalcemia (Clinical) | 11 qualifying patients | N/A | 11-25 qualifying patients.
    NHSN Bloodstream Infection (Clinical) | 11 qualifying patients | On or before January 1, 2018 | 11-25 qualifying patients.
    NHSN Dialysis Event (Reporting) | 11 qualifying patients | On or before January 1, 2018 | N/A.
    SRR (Clinical) | 11 index discharges | N/A | 11-41 index discharges.
    STrR (Clinical) | 10 patient-years at risk | N/A | 10-21 patient-years at risk.
    SHR (Clinical) | 5 patient-years at risk | N/A | 5-14 patient-years at risk.
    ICH CAHPS (Clinical) | Facilities with 30 or more survey-eligible patients during the calendar year preceding the performance period must submit survey results. Facilities will not receive a score if they do not obtain a total of at least 30 completed surveys during the performance period. | On or before January 1, 2018 | N/A.
    Anemia Management (Reporting) | 11 qualifying patients | Before July 1, 2018 | N/A.
    Serum Phosphorus (Reporting) | 11 qualifying patients | Before July 1, 2018 | N/A.
    Depression Screening and Follow-Up (Reporting) | 11 qualifying patients | Before July 1, 2018 | N/A.
    Pain Assessment and Follow-Up (Reporting) | 11 qualifying patients | Before July 1, 2017 | N/A.
    NHSN Healthcare Personnel Influenza Vaccination (Reporting) | N/A | Before January 1, 2018 | N/A.
    Ultrafiltration Rate (Reporting) | 11 qualifying patients | Before July 1, 2018 | N/A.
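    The case minimums in Table 19 can be expressed as a simple lookup, as in the hedged Python sketch below. The CASE_MINIMUMS dictionary, the eligible_for_score helper, and the decision to cover only the clinical-measure rows are choices made for illustration; the CCN open-date rules and the ICH CAHPS survey requirements are omitted.

        # Minimum data requirements for the clinical measures listed in Table 19.
        CASE_MINIMUMS = {
            "Dialysis Adequacy": ("qualifying patients", 11),
            "Vascular Access Type: Catheter": ("qualifying patients", 11),
            "Vascular Access Type: Fistula": ("qualifying patients", 11),
            "Hypercalcemia": ("qualifying patients", 11),
            "NHSN Bloodstream Infection": ("qualifying patients", 11),
            "SRR": ("index discharges", 11),
            "STrR": ("patient-years at risk", 10),
            "SHR": ("patient-years at risk", 5),
        }

        def eligible_for_score(measure, observed_count):
            # True when the facility meets the Table 19 case minimum for the measure.
            unit, minimum = CASE_MINIMUMS[measure]
            return observed_count >= minimum

        print(eligible_for_score("SHR", 4))    # False: below the 5 patient-years-at-risk minimum
        print(eligible_for_score("SRR", 11))   # True: meets the 11 index-discharge minimum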

    We sought comments on these proposals. The comments and our responses for these proposals are set forth below.

    Comment: Several commenters expressed concerns about the numbers included in the Minimum Data Table (Table 11) of the proposed rule (81 FR 42846) because of the effect on small facilities with very small sample sizes. Commenters asserted that performance scores for many such facilities are random and may not reflect actual performance. One commenter requested additional detail from CMS so they can better understand CMS's rationale for these values and for the unit of analysis; they pointed out that NQF considered patients as the unit of analysis for reliability testing, while CMS proposed to use patient-years at risk as the unit of analysis in the QIP. Commenters argued that these values are too low and will result in too much random volatility in performance scoring under the QIP. Commenters urged CMS to adopt consistent criteria for the establishment of minimum data requirements and ranges for the small facility adjuster (SFA), particularly for the standardized ratio measures, and mentioned that the NQF uses 0.7 as a recommended inter-unit reliability (IUR) value to limit random noise as much as possible. Several commenters specifically urged CMS to set the minimum data requirement for each measure at the sample size at which the IUR reaches 0.70. Alternatively, if CMS does not choose to implement this change, they recommended that the top end of the SFA range be set at a sample size adequate to reach an IUR of 0.7, so that enough of the observed result for each measure is due to actual performance rather than to random “noise” from small sample sizes.

    Commenters offered the STrR as an example of the problem with the small sample sizes used. This measure was found to have very low reliability, particularly for small facilities. The IUR for facilities with sample sizes below 46 patients was about 0.4, suggesting that 60 percent of inter-facility difference was due to random noise rather than underlying performance. The SFA in this case only raises the scores for very small facilities but does not offset the substantial effect of random variation for small sample sizes.

    Response: We thank the commenters for their recommendations. We recognize the importance of the scientific standard of measure reliability, and we note that the STrR satisfied this standard. All components of measure reliability were reviewed in detail at the NQF ESRD Standing Committee's meeting in June 2016. The reliability results reported in the NQF submission, showing overall IURs of 0.60-0.66 across all facilities, were determined acceptable by the NQF Standing Committee; the measure passed on the reliability criterion and passed on scientific acceptability overall. The evaluation and voting process and its result adhered to consensus development guidelines, thereby reinforcing acceptance of the reliability results.

    Given the established effect of sample size on IUR calculations, it is expected that larger facilities will have higher IUR values and smaller facilities will have lower IUR values for any given measure. Reliability results by facility size were not required by NQF; however, the decision to report reliability by tertiles of facility size was intended to enhance interpretation of the detail provided in the measure submission.
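    The sample-size effect described in the preceding paragraph can be illustrated with a generic signal-to-noise reliability calculation. The signal_to_noise_reliability function and the variance components used in the example are hypothetical; this is not the IUR methodology from the NQF measure submission, but a sketch of why, with the same underlying variance components, a statistic of this form is lower for facilities with fewer patients.

        def signal_to_noise_reliability(between_facility_variance, within_facility_variance, n):
            # Share of observed variation attributable to true between-facility differences
            # rather than sampling noise; the variance components here are hypothetical.
            sampling_variance = within_facility_variance / n
            return between_facility_variance / (between_facility_variance + sampling_variance)

        # With identical variance components, a 200-patient facility scores about 0.89 on
        # this statistic while a 20-patient facility scores about 0.44.
        print(round(signal_to_noise_reliability(0.04, 1.0, n=200), 2))
        print(round(signal_to_noise_reliability(0.04, 1.0, n=20), 2))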

    Regarding the commenter's recommendation to use an IUR of 0.7, we are not aware of any formal or prescriptive NQF guideline or standard that sets or requires this value as a minimum threshold for passing reliability. The commenter may be referring to a prior, non-peer-reviewed RAND report referenced by NQF as an example of a signal-to-noise method that can be used for reliability testing. Additionally, there is no formal required threshold identified by NQF, as demonstrated by the endorsement of other quality metrics with a range of reliability statistics, several of which are below the 0.7 threshold. Specifically, the STrR reliability results are comparable to the reliability test results for other NQF-endorsed, risk-adjusted outcome measures used in public reporting. For example, four NQF-endorsed, cause-specific hospital mortality measures demonstrated similar levels of reliability (for example, #0229 Heart Failure Measure, ICC: 0.55; #0468 Pneumonia Mortality Measure, ICC: 0.79; #1893 COPD Mortality Measure, ICC: 0.51; #2558 CABG Mortality Measure, ICC: 0.32).

    Final Rule Action: After consideration of the comments received, we are finalizing these policies as proposed. For the reasons described above, at this time, we do not believe it would be appropriate to establish a minimum IUR threshold.

    9. Payment Reductions for the PY 2020 ESRD QIP

    Section 1881(h)(3)(A)(ii) of the Act requires the Secretary to ensure that the application of the scoring methodology results in an appropriate distribution of payment reductions across facilities, such that facilities achieving the lowest TPSs receive the largest payment reductions. We proposed that, for the PY 2020 ESRD QIP, a facility will not receive a payment reduction if it achieves a minimum TPS that is equal to or greater than the total of the points it would have received if:

    • It performed at the performance standard for each clinical measure; and

    • It received the number of points for each reporting measure that corresponds to the 50th percentile of facility performance on each of the PY 2018 reporting measures.

    We noted that this proposed policy for PY 2020 is identical to the policy finalized for PY 2019, and we recognized that we were not proposing a policy regarding the inclusion, in the PY 2019 minimum TPS, of measures for which we were not able to establish a numerical value for the performance standard through the rulemaking process before the beginning of the performance period. We stated that we did not propose such a policy because no measures in the proposed PY 2020 measure set meet this criterion. However, should we choose to adopt a clinical measure in future rulemaking without the baseline data required to calculate a performance standard before the beginning of the performance period, we stated that we would propose a criterion accounting for that measure in the minimum TPS for the applicable payment year at that time.

    The PY 2018 program is the most recent year for which we will have calculated final measure scores before the beginning of the performance period for PY 2020 (that is, CY 2018). Because we have not yet calculated final measure scores, we are unable to determine the 50th percentile of facility performance on the PY 2018 reporting measures. We will publish that value in the CY 2018 ESRD PPS final rule once we have calculated final measure scores for the PY 2018 program.

    Section 1881(h)(3)(A)(ii) of the Act requires that facilities achieving the lowest TPSs receive the largest payment reductions. In the CY 2014 ESRD PPS final rule (78 FR 72223 through 72224), we finalized a payment reduction scale for PY 2016 and future payment years: For every 10 points a facility falls below the minimum TPS, the facility would receive an additional 0.5 percent reduction on its ESRD PPS payments for PY 2016 and future payment years, with a maximum reduction of 2.0 percent. We did not propose any changes to this policy for the PY 2020 ESRD QIP.
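    A short sketch of the payment reduction scale described above follows. The payment_reduction_percent function name and the treatment of any partial 10-point shortfall below the minimum TPS as a full 0.5 percentage-point step are assumptions made for illustration; the 2.0 percent maximum reduction is taken from the text above.

        import math

        def payment_reduction_percent(tps, minimum_tps):
            # 0.5 percentage-point reduction for every 10 points (or portion thereof, an
            # assumption in this sketch) below the minimum TPS, capped at 2.0 percent.
            if tps >= minimum_tps:
                return 0.0
            shortfall = minimum_tps - tps
            steps = math.ceil(shortfall / 10)
            return min(0.5 * steps, 2.0)

        # Hypothetical facility scoring 25 points below the minimum TPS: a 1.5 percent reduction.
        print(payment_reduction_percent(tps=35, minimum_tps=60))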

    Because we are not yet able to calculate the performance standards for each of the clinical measures, we are also not able to calculate a proposed minimum TPS at this time. We will publish the minimum TPS, based on data from CY 2016 and the first part of CY 2017, in the CY 2018 ESRD PPS final rule.

    We sought comments on this proposal regarding our policy to determine payment reductions for PY 2020.

    Final Rule Action: We did not receive comments on this proposal. Accordingly, we are finalizing this policy as proposed.

    F. Future Policies and Measures Under Consideration

    As we continue to refine the ESRD QIP's policies and measures, we are evaluating different methods of ensuring that facilities strive for continuous improvement in their delivery of care to patients with ESRD. We also seek to refine our scoring methodology in an effort to make it easier for facilities and the ESRD community to understand. For future rulemaking, we are considering several policies and measures, and we are seeking comments on each of these policies and measures.

    As discussed in section IV.E.2.b.i. above, we proposed to adopt the Standardized Hospitalization Ratio (SHR) clinical measure and to calculate performance on that measure in accordance with NQF-endorsed, Measures Application Partnership-reviewed specifications. Similarly, performance on the SRR and STrR measures will continue to be calculated in accordance with NQF-endorsed, Measures Application Partnership-reviewed specifications. Stakeholders have expressed that, for most standardized ratio measures, rates are easier to understand than ratios. (The exception is the NHSN BSI Clinical Measure, which is intentionally expressed as a ratio and cannot be transformed into a rate without distorting the underlying results.) For future years of the QIP, we are considering a proposal to express the SRR and STrR as rates instead of ratios. Specifically, we would not propose any changes to the manner in which the standardized ratios themselves are calculated, but would propose to calculate rates by multiplying the facility's ratio for each of these measures by the national raw rate of events (that is, the national median rate), which is specific to each measure and is updated each year. We are also considering reporting national performance standards and individual facility performance as rates, as opposed to ratios, for these measures. Similarly, we are considering a proposal to use rates, as opposed to ratios, when calculating facility improvement scores for these measures.
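    A minimal sketch of the ratio-to-rate conversion under consideration: multiply the facility's standardized ratio by the measure-specific national raw rate of events. The ratio_to_rate function name and the numbers in the example are hypothetical.

        def ratio_to_rate(standardized_ratio, national_raw_rate):
            # Converts a standardized ratio (e.g., SRR or STrR) into a rate by scaling it
            # by the national raw event rate for that measure and year.
            return standardized_ratio * national_raw_rate

        # Hypothetical: an SRR of 1.10 with a national raw readmission rate of 22 events per
        # 100 discharges corresponds to roughly 24.2 readmissions per 100 discharges.
        print(round(ratio_to_rate(1.10, 22.0), 1))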

    In PY 2019, we proposed to adopt a patient-level influenza immunization reporting measure that could be used to calculate a future clinical measure based on either “ESRD Vaccination—Full-Season Influenza Vaccination” (Measures Application Partnership #XDEFM) or NQF #0226: “Influenza Immunization in the ESRD Population (Facility Level).” We continue to believe that it is important to include a clinical measure on patient-level influenza vaccination in the ESRD QIP. However, we did not propose to add a patient-level influenza immunization reporting measure into the ESRD QIP. Nevertheless, data elements were recently amended in CROWNWeb to support data collection for either of the two potential clinical measures on patient-level influenza (that is, Measures Application Partnership #XDEFM and NQF #0226). We will continue to collect these data and conduct detailed analyses to determine whether either of these clinical measures would be appropriate for future inclusion in the ESRD QIP.

    As part of our effort to continuously improve the ESRD QIP, we are also working on developing additional, robust measures that provide valid assessments of the quality of care furnished to ESRD patients by ESRD facilities. Some measures we are considering developing for future inclusion in the ESRD QIP measure set include a Standardized Mortality Ratio (SMR) measure, a measure examining utilization of hospital Emergency Departments, a measure examining medication reconciliation efforts, and a measure examining kidney transplants in patients with ESRD.

    We sought comments on these issues, including whether data for a patient-level influenza immunization clinical measure should be collected through CROWNWeb or through NHSN.

    Comment: Commenters supported CMS's consideration of a future policy that would allow the use of rates rather than ratios for the SRR and STrR measures, both because rates are easier to understand and because the current ratio measures have a wide range of uncertainty that does not provide an accurate view of a facility's performance when the ratio is reduced to a single number. One commenter argued that this approach would improve accuracy, transparency, and clinical relevance. They recommended that CMS use the year-over-year difference between normalized rates, currently available from DFR data, until those rates can be replaced by risk-standardized rate measures.

    Despite support for the general concept, several commenters urged CMS to carefully consider the methodology used if it is decided to convert ratios to rates. They suggested that the use of the national median rate as the conversion factor would be potentially misleading in certain regions of the country where typical performance varies significantly from the national rate.

    One commenter offered two simulations of possible methodologies to convert ratios to rates: first, using the median rate to convert the ratio to a rate; second, using the mean rate to convert the ratio to a rate. In both of these scenarios, QIP scores remained identical; dialysis facilities received the same scores regardless of the ratio or rate methodology. The commenter concluded that they would likely support this proposal but would need to see additional analyses regarding the methodology to be used.

    Response: We thank commenters for their suggestions and for sharing the two simulations provided. We will take their suggestions into consideration as we consider the possibility of introducing this policy in future years of the ESRD QIP. If we consider proposing this policy for future years of the program, we will share the proposed methodology through rulemaking.

    Comment: A commenter stated that they would likely support a proposal to report national performance standards and individual facility performance as rates, as opposed to ratios, but they would need to see the complete proposal first. They also supported CMS's discussion of possibly using rates instead of ratios for the readmission and transfusion measures because the current ratios are problematic: they have a wide range of uncertainty that does not provide an accurate view of a facility's performance when the ratio is reduced to a single number, and the reliability of a standardized ratio is also a concern. The commenter suggested that CMS could immediately switch to rates and encouraged the Agency to use the year-over-year difference between normalized rates, currently available from DFR data, until they can be replaced by risk-standardized rate measures. The commenter also suggested that using the national median rate as the conversion factor for ratios may be misleading in parts of the country where typical performance varies significantly from the national rate. Using rates instead of ratios would make the measure results more meaningful by expressing them in terms that have intrinsic meaning.

    Response: We thank the commenter for sharing their suggestions and concerns, which we will carefully consider as we evaluate the possibility of introducing this policy in future years of the ESRD QIP.

    Comment: One commenter recommended that CMS consider calculating rates in the same manner currently utilized in DFC rather than by calculating a ratio and then converting it into a rate because the latter approach may be methodologically flawed and create unnecessary complexity.

    Response: We thank the commenter for their suggestion. As we continue to consider the possibility of introducing this policy in future years of the ESRD QIP, we will consider the feasibility of calculating rates in the same manner currently utilized in DFC.

    Comment: Commenters submitted a great deal of feedback on the possible introduction of an influenza immunization measure in the ESRD QIP. One commenter pointed out that, despite recommendations, vaccines are consistently underutilized in the adult population and urged CMS to consider developing and implementing a comprehensive composite measure for all vaccines recommended for ESRD patients, as such a measure would be of great benefit to ESRD patients and to the ESRD QIP. Alternatively, they recommended that CMS consider including reporting measures for pneumococcal and hepatitis B vaccination in addition to the existing and proposed influenza vaccination measures. Several commenters stated that they would support the adoption of NQF #0226, Influenza Immunization in the ESRD Population, in the QIP because it fully aligns with NQF's specifications for influenza vaccinations and because it is endorsed by the NQF. They also appreciated that the measure is standardized with NQF's 2008 immunization report, which set the measurement timeframe as October 1 through March 31, or when the vaccine becomes available. They expressed serious concerns about MUC #XDEFM for two reasons: first, it does not follow the NQF specifications for a measurement timeframe of October 1 through March 31, or when the vaccine becomes available; and second, it has not been fully tested or specified. They added that scientific acceptability should be considered an essential component of a measure's properties and that measure developers should be required to show that data elements can be reliably reported and that the measure is valid.

    Commenters also supported the use of CROWNWeb to collect patient-level influenza clinical measure data, because KCQA specified and tested the patient-level influenza measure using facility data with the intention that such data would be submitted through CROWNWeb. They added that using NHSN would introduce another factor that would require reliability and validity testing, as well as increase the burden on dialysis facilities because of manual entry issues. They strongly recommended that if CMS does add a patient-level influenza immunization clinical measure, it should add NQF #0226 unchanged and collect the data through CROWNWeb.

    Response: We thank commenters for their support and for their suggestions regarding the potential future introduction of a patient-level influenza immunization measure into the ESRD QIP for future years of the program. We will take their suggestions into consideration as we evaluate options.

    Comment: Several commenters supported the influenza vaccination reporting measure for future consideration in the QIP and suggested that NHSN be used to collect data for the measure for consistency, ease of use, and access purposes. Given that the NHSN HCP influenza vaccination measure is already collected in NHSN, adding the patient-level measure to the existing reporting system would provide consistency and continuity for facilities. Additionally, commenters pointed out that state health departments, LDOs, and ESRD Networks can gain access to the data reported in NHSN, and continued use of this system would more easily facilitate sharing of data with other entities engaged in the oversight of infection prevention. One commenter added that if NHSN is used to collect data, it would serve as a single repository for influenza vaccination data and therefore could be used by regulatory agencies and local health departments, which are able to access the data and use it for quality improvement and other public health purposes. One commenter also recommended that CMS consider adding an additional incentive for facilities that report vaccination rates beyond the proposed required vaccination information.

    Response: We thank commenters for their support, and we will take their suggestions into consideration as we consider the feasibility of introducing a patient-level influenza immunization measure into the ESRD QIP's measure set in future years of the program.

    Comment: One commenter expressed concerns about the potential use of Measures Application Partnership #XDEFM as the basis for a future clinical measure because it does not follow the NQF standardized specifications for a measurement timeframe. Given that the vaccine is often available in late July or early August, omitting patients who were vaccinated before October 1 unfairly penalizes facilities that are able to obtain the vaccine early and serves as a disincentive to early and thorough vaccination.

    Another commenter disagreed with CMS's concern that NQF #0226 would exclude patients who die from influenza but might not have died if they had been vaccinated. The measure specifications do not include such an exclusion; rather, the measure excludes unvaccinated patients who die prior to March 31. This exclusion does not penalize facilities for patients who could still have received a vaccination within the Agency's own measurement timeframe. The commenter recommended setting the denominator so that it aligns with the NHSN protocol and the NQF specifications, and recommended that CMS clearly state that the CDC would determine the date when a vaccine is made available each year.

    Response: We thank commenters for sharing their suggestions regarding the future potential introduction of either NQF #0226 or Measures Application Partnership #XDEFM, and we will take them into consideration when considering the future adoption of a patient-level influenza immunization measure.

    Comment: Commenters submitted a great deal of feedback on the possible introduction of a Standardized Mortality Ratio measure in the ESRD QIP. Several commenters stated that they would potentially support the adoption of an SMR measure into the QIP but expressed a few concerns with the measure. Two commenters stressed that any mortality measure would need to be carefully tailored to the actions of the dialysis facility, and they recommended that CMS work more closely with stakeholders to establish an appropriate measure that focuses on year-over-year, facility-specific improvement before considering its addition to the QIP, particularly in light of the decision of the NQF's Renal Standing Committee not to recommend the revised SMR measure. Commenters urged CMS to update the SMR specifications to make them less ambiguous and more precise, and they argued that the 1-year measurement period is inappropriate based on the testing data. Instead, they recommended a period of at least 4 years, and they encouraged CMS to consider including a larger list of relevant prevalent comorbidities identifiable in Medicare claims data, because they believe it is important to adapt the SHR and SMR in a way that takes into account the effect such comorbidities have on hospitalization and mortality rates. Commenters appreciated that the introduction of an SMR measure in the QIP would promote high-quality care for ESRD patients. They recommended that the measure reflect a rolling average of facility performance, because a small number of outliers could substantially affect a facility's performance on the measure, and further recommended that the measure include an adjuster for small facilities so that those with small sample sizes are not inappropriately penalized. Finally, they recommended that CMS adopt an NQF-endorsed SMR measure.

    Response: We thank the commenters for sharing their suggestions regarding the potential implementation of a Standardized Mortality Ratio Measure in future years of the ESRD QIP. We will take these comments and suggestions into consideration as we consider whether to propose such a measure in the future.

    Comment: Commenters provided a great deal of feedback regarding the possible introduction of a Transplant Measure in future years of the ESRD QIP. One commenter agreed that referrals and patient education about transplants are important concepts to measure, but stated that they could not support the two transplant-related wait list measures proposed by a recent TEP. Based upon the most recent specifications released by CMS, the commenter believed those measures are not appropriate for the QIP because they measure the success of being waitlisted and attribute that outcome to dialysis facilities, when that responsibility rests solely with the transplant center. Instead, the commenter recommended that CMS focus its efforts on developing measures related to patient education, referral to a transplant center, initiation or completion of the waitlist evaluation process, and care coordination. Another commenter had specific concerns about the proposed future adoption of a transplant measure. Specifically, they argued that because transplants carry a level of risk that patients must assume, it is important to require that all patients be assessed for transplant; however, the commenter expressed concern with the expectation that a percentage of a facility's patients be required to actively pursue a transplant. Another commenter stated that as CMS moves toward a more bundled care environment, it is important for the ESRD QIP to implement a transplant measure. They added that it would be beneficial to track and report the number of transplant patients, the number of transplants, and the employment status of these patients in order to identify key indicators and best practices to help patients get transplanted and retain employment.

    Response: We thank the commenters for sharing their suggestions regarding the potential implementation of a Transplant Measure in future years of the ESRD QIP. We will take these comments and suggestions into consideration as we consider whether to propose such a measure in the future.

    Comment: Commenters agreed that emergency department (ED) visits are an important marker of healthcare utilization and cautiously supported the concept of measuring ED utilization, but added that it would be a complex measure that would require careful construction and risk modeling. One commenter stated that, without more information about the potential ED utilization measure, they could not support such a measure for inclusion in the QIP. Another commenter stated that any such measure would need to include dialysis-related emergency room visits. Commenters stated that much work would need to be done to appropriately construct an ED visit measure for dialysis facility accountability and that such a measure would need to include risk modeling to account for the many factors that may influence the frequency of ED visits. It would need to account for the fact that a wide variety of circumstances lead to ED visits, many of which are completely beyond the control or the knowledge of the facility at the time they occur. Commenters stressed that CMS will need to carefully consider the specifications for the measure because certain facilities may not be able to achieve low rates of unnecessary patient utilization of the ED. They provided two examples: First, a facility that is only open three days a week should not be penalized if its patients utilize the ED on a day when the facility is not open. Second, patients in urban settings may live close enough to the hospital that they have the option to go home and see if their illness subsides sufficiently without having to go to a hospital ED, while patients in rural settings may not have that option. Facilities in more rural settings should not be penalized simply because their patients live in rural settings and feel the need to go to the ED out of an abundance of caution.

    Response: We thank the commenters for sharing their suggestions regarding the potential implementation of an ED Utilization measure in future years of the ESRD QIP. We will take these comments and suggestions into consideration as we consider whether to propose such a measure in the future.

    Comment: Many commenters supported CMS's proposal to consider the inclusion of a Medication Reconciliation measure in future years of the ESRD QIP, and specifically stated that they would support the adoption of NQF #2988: Medication Reconciliation for Patients Receiving Care at Dialysis Facilities, which is currently under evaluation by the NQF Patient Safety Standing Committee. They supported this measure because it is an important patient safety process for patients with ESRD given that many of them have multiple prescriptions and because it would help providers identify unnecessary medications, duplicate therapies or incorrect dosages, thus reducing the risk of patients experiencing adverse drug events. One commenter added that such a measure would incentivize providers to perform medication reconciliation across the continuum of care and would increase the focus on patient safety, resulting in improved patient outcomes.

    Response: We thank the commenters for their support and input and will take their recommendations into consideration as we proceed with our measure development work.

    Comment: One commenter stated that, provided they are outcome measures, rather than process measures, they would support all of the following measures for consideration in future payment years of the ESRD QIP: The SMR Measure, an ED Utilization Measure, a Medication Reconciliation measure, and a measure examining kidney transplants in ESRD patients.

    Response: We thank the commenter for their support of these measures under future consideration.

    Comment: A commenter argued that future pediatric measure development should consider the entire pediatric population, beyond Medicare beneficiaries and include the full range of pediatric patients without regard to provider in order to ensure the greatest knowledge of their health status and to provide meaningful and appropriate data about the quality of pediatric care. The commenter also urged CMS to examine the appropriateness of including measures that evaluate adult and pediatric patients together and to work on finding measures that are more appropriate for assessing small numbers of pediatric patients who are dialyzed at adult facilities.

    Response: We thank the commenter for their suggestions and we agree that it is vitally important to measure the care being provided for pediatric patients, both in pediatric facilities and in facilities that treat adult and pediatric patients together. Unfortunately, in large part due to the small numbers of pediatric patients, there are currently very few measures available that focus on the care furnished to pediatric patients with ESRD. For example, as we noted in the CY 2015 ESRD PPS Final Rule (79 FR 66172), using 2013 data, there were only 10 facilities that were eligible to receive a score on the Pediatric Hemodialysis Adequacy measure. We will continue to work with the ESRD community to identify measures for inclusion in the ESRD QIP that examine the care of this vulnerable population.

    Comment: One commenter urged CMS to reinstitute a measure establishing a minimal standard for anemia management to ensure that patients are neither over-treated nor under-treated.

    Response: When we retired the Hemoglobin Less Than 10 g/dL measure, we did so for important clinical reasons, which we continue to believe warrant including this measure only as a Reporting Measure and not as a Clinical Measure (76 FR 70257). Specifically, we could not identify a specific hemoglobin lower-bound level that has been proven safe for all patients treated with ESAs. Additionally, at the time the measure was retired, we discussed with the FDA our proposal to retire the Hemoglobin Less Than 10 g/dL measure starting in PY 2013. Because the measure encouraged providers/facilities to keep hemoglobin above 10 g/dL, the FDA agreed that retiring the measure was consistent with the new labeling for ESAs approved by the FDA. We are also not aware of, nor have any stakeholders noted, any studies that identify a specific hemoglobin level that should be maintained to increase quality of life or minimize transfusions or hospitalizations. However, if any new evidence or studies emerge, we will take such evidence into consideration in adopting future measures for the ESRD QIP. Factors that impact anemia management, including optimal iron stores, dialysis adequacy, avoidance of infections, reduction of inflammation, and other factors, should be addressed by the health care team to improve patient health. We urge patients and providers to work together to achieve optimal hemoglobin levels for each individual patient. We will continue to monitor and evaluate practice patterns and outcomes for all segments of the Medicare ESRD population as we develop and refine our measurement of the quality of anemia management.

    Comment: A commenter urged CMS to consider developing quality measures for use with patients with AKI. Some of their specific recommendations were to develop a Kt/V measure specific for AKI patients with a target of 3.9. They also recommended a BSI measure specific to AKI patients, arguing that AKI patients should not be included in the same measure pool as ESRD patients given that they have a higher risk of infections and have additional complex complications. Finally, they urged CMS to develop patient-reported outcomes measures specific to AKI patients, including assessments of patient satisfaction.

    Response: We thank the commenter for their recommendations. We agree that patients with AKI must be ensured a high quality of care; however, given the measures that are currently available for use in dialysis facilities, we are unable to measure care for patients with AKI at this time. The quality measures currently in use in the ESRD QIP specifically include patients with end-stage renal disease and are not designed to measure the care of patients with AKI. In the event that measures are developed that include patients with AKI, we will consider the feasibility of including those measures in our measure set in future years of the program.

    Comment: One commenter argued that recovery time is an important and powerful indicator of day-to-day quality of life and is associated with patient survival and recommended that CMS start collecting and reporting data on recovery time as a meaningful clinical outcomes measure.

    Response: We thank the commenter for their suggestion and we agree that recovery time is an important and powerful indicator of the quality of life of patients with ESRD. However, at this time, we are not aware of any clinical quality measures that are available to measure this important outcome. Should one become available, we will consider the feasibility of including it in the measure set for the ESRD QIP in future years of the program.

    V. Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS) Competitive Bidding Program (CBP)

    A. Background

    Section 1847(a) of the Social Security Act (the Act), as amended by section 302(b)(1) of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) (Pub. L. 108-173), requires the Secretary of the Department of Health and Human Services (the Secretary) to establish and implement the Competitive Bidding Program (CBP) in Competitive Bidding Areas (CBAs) throughout the United States for contract award purposes for the furnishing of certain competitively priced Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS) items and services. The programs, mandated by section 1847(a) of the Act, are collectively referred to as the “Medicare DMEPOS Competitive Bidding Program.” The 2007 DMEPOS competitive bidding final rule (Medicare Program; Competitive Acquisition for Certain DMEPOS and Other Issues), published in the April 10, 2007 Federal Register (72 FR 17992), established CBPs for certain Medicare Part B covered items of DMEPOS throughout the United States. The CBP, which was phased in over several years, utilizes bids submitted by DMEPOS suppliers to establish applicable payment amounts under Medicare Part B for certain DMEPOS items and services.

    Section 1847(a)(1)(G) of the Act, as added by section 522(a) of the Medicare Access and CHIP Reauthorization Act of 2015 (Pub. L. 114-10) (MACRA), now requires a bid surety bond for bidding entities. Section 1847(a)(1)(G) of the Act, as added by section 522(a) of MACRA, provides that, with respect to rounds of competitions under section 1847 beginning not earlier than January 1, 2017 and not later than January 1, 2019, a bidding entity may not submit a bid for a CBA unless, as of the deadline for bid submission, the entity has (1) obtained a bid surety bond, in the range of $50,000 to $100,000, in a form specified by the Secretary consistent with subparagraph (H) of section 1847(a)(1), and (2) provided the Secretary with proof of having obtained the bid surety bond for each CBA in which the entity submits its bid(s). Section 1847(a)(1)(H)(i) provides that in the event that a bidding entity is offered a contract for any product category for a CBA, and its composite bid for such product category and area was at or below the median composite bid rate for all bidding entities included in the calculation of the single payment amount(s) for the product category and CBA, and the entity does not accept the contract offered, the bid surety bond(s) for the applicable CBAs will be forfeited and CMS will collect on the bid surety bond(s). In instances where a bidding entity does not meet the bid forfeiture conditions for any product category for a CBA as specified in section 1847(a)(1)(H)(i) of the Act, then the bid surety bond liability submitted by the entity for the CBA will be returned to the bidding entity within 90 days of the public announcement of the contract suppliers for such product category and area.

    Section 522 of MACRA further amended section 1847(b)(2)(A) of the Act by adding clause (v) to the conditions that a bidding entity must meet in order for the Secretary to award a contract to any entity under a competition conducted in a CBA to furnish items and services. Section 1847(b)(2)(A)(v) of the Act adds the requirement that the bidding entity must meet applicable State licensure requirements in order to be eligible for a DMEPOS CBP contract award. We note, however, that this does not reflect a change in policy as CMS already requires contract suppliers to meet applicable State licensure requirements in order to be eligible for a contract award.

    B. Summary of the Proposed Provisions, Public Comments, and Responses to Comments on the DMEPOS CBP

    The proposed rule, titled “End-Stage Renal Disease Prospective Payment System, Coverage and Payment for Renal Dialysis Services Furnished to Individuals with Acute Kidney Injury, End-Stage Renal Disease Quality Incentive Program, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program Bid Surety Bonds, State Licensure and Appeals Process for Breach of Contract Actions, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program and Fee Schedule Adjustments, Access to Care Issues for Durable Medical Equipment; and the Comprehensive End-Stage Renal Disease Care Model” (81 FR 42802 through 42880), was published in the Federal Register on June 30, 2016, with a comment period that ended on August 23, 2016. In the proposed rule for the DMEPOS Competitive Bidding Program, we made proposals to implement statutory requirements for bid surety bonds and state licensure for the DMEPOS CBP, as well as to revise the current regulations to provide that the appeals process is applicable to all breach of contract actions taken by CMS, rather than just for the termination of a competitive bidding contract. We received approximately 14 public comments on our proposals, including comments from homecare associations, a surety association, DME manufacturers, and individuals.

    In this final rule, we provide a summary of each proposed provision, a summary of the public comments received and our responses to them, and the policies we are finalizing for the DMEPOS Competitive Bidding Program. Comments related to the paperwork burden are addressed in the “Collection of Information Requirements” section in this final rule. Comments related to the impact analysis are addressed in the “Economic Analyses” section in this final rule.

    1. Bid Surety Bond Requirement

    At proposed § 414.402, we proposed adding a definition for “bidding entity” to mean the entity whose legal business name is identified in the “Form A: Business Organization Information” section of the bid (81 FR 42877).

    At proposed § 414.412, “Submission of bids under a competitive bidding program,” we proposed adding a new paragraph (h) that would allow CMS to implement section 1847(a)(1)(G) of the Act, as amended by section 522(a) of MACRA, to state that an entity may not submit a bid for a CBA unless, as of the deadline for bid submission, the entity has obtained a bid surety bond for the CBA (81 FR 42879). Proposed § 414.412(h)(1) would specify that the bond must be obtained from an authorized surety. An authorized surety is a surety that has been issued a Certificate of Authority by the U.S. Department of the Treasury as an acceptable surety on Federal bonds and the certificate has neither expired nor been revoked (81 FR 42879).

    At proposed § 414.412(h)(2) “Bid Surety Bond requirements,” we proposed that a bid surety bond contain the following information: (1) The name of the bidding entity as the principal/obligor; (2) The name and National Association of Insurance Commissioners number of the authorized surety; (3) CMS as the named obligee; (4) The conditions of the bond as specified in the proposed rule at (h)(3); (5) The CBA covered by the bond; (6) The bond number; (7) The date of issuance; and (8) The bid bond value of $100,000 (81 FR 42879).

    Section 1847(a)(1)(G) of the Act permits CMS to determine the amount of the bond within a range of $50,000 to $100,000. We proposed setting the bid surety bond amount at $100,000 for each CBA in which a bidding entity submits a bid (81 FR 42879). This requirement is intended to ensure that bidding entities accept a contract offer(s) when their composite bid(s) is at or below the median composite bid rate used in the calculation of the single payment amounts. The CBP has historically had a contract acceptance rate exceeding 90 percent, and we believe that this acceptance rate will increase with this rule. We considered whether a lower bid surety bond amount would be appropriate for a particular subset of suppliers, for example, small suppliers as defined by § 414.402, and therefore, specifically solicited comments on whether to establish a lower bid surety bond amount for certain types of suppliers (81 FR 42848).

    Proposed § 414.412(h)(3) specifies conditions for forfeiture of the bid surety bond and return of the bond liability (81 FR 42879). Pursuant to section 1847(a)(1)(H) of the Act, when (1) a bidding entity is offered a contract for any product category in a CBA, (2) the entity's composite bid is at or below the median composite bid rate for all bidding entities included in the calculation of the single payment amounts for the product category and CBA, and (3) the entity does not accept the contract offer, then the entity's bid surety bond for that CBA will be forfeited and CMS will collect on it. When the bidding entity does not meet these forfeiture conditions, the bid surety bond liability will be returned within 90 days of the public announcement of the contract suppliers for the CBA. Proposed § 414.412(h) also requires CMS to notify a bidding entity when it does not meet the bid forfeiture conditions and that, as a result, CMS will not collect on the bid surety bond (81 FR 42879).
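
    The forfeiture test in proposed § 414.412(h)(3) amounts to a conjunction of the three conditions above, applied separately for each CBA in which an entity bids. The following Python sketch restates that logic for illustration only; the function and argument names are hypothetical and are not part of the regulation.

# Hedged illustration of the bid surety bond forfeiture conditions described in
# proposed § 414.412(h)(3). The function and argument names are hypothetical.

def bond_is_forfeited(offered_contract: bool,
                      composite_bid: float,
                      median_composite_bid_rate: float,
                      accepted_contract: bool) -> bool:
    # Forfeiture requires all three conditions: (1) a contract was offered,
    # (2) the composite bid was at or below the median composite bid rate used
    # to calculate the single payment amounts, and (3) the offer was declined.
    return (offered_contract
            and composite_bid <= median_composite_bid_rate
            and not accepted_contract)

def bond_disposition(offered_contract, composite_bid, median_composite_bid_rate, accepted_contract):
    # If the forfeiture conditions are not met, the bond liability is returned
    # within 90 days of the public announcement of contract suppliers.
    if bond_is_forfeited(offered_contract, composite_bid, median_composite_bid_rate, accepted_contract):
        return "forfeit; CMS collects on the bond"
    return "return bond liability within 90 days of announcement"

print(bond_disposition(True, 0.97, 1.00, False))   # forfeited (declined a qualifying offer)
print(bond_disposition(True, 1.05, 1.00, False))   # returned (bid above the median rate)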

    We proposed that bidding entities that provide a falsified bid surety bond would be prohibited from participation in the current round of the CBP in which they submitted a bid and from bidding in the next round of the CBP. Additionally, offending suppliers would be referred to the Office of Inspector General and Department of Justice for further investigation. We also proposed that if we find that a bidding entity has accepted a contract offer and then breached the contract in order to avoid bid surety bond forfeiture, the breach would result in a termination of the contract and preclusion from the next round of competition in the CBP. These proposed penalties are included in proposed § 414.412(h)(4).

    We sought comments on these proposals. We note that we did not receive any comments on whether a lower bid surety bond amount would be appropriate for a particular subset of suppliers, for example, small suppliers, as defined at § 414.402.

    The comments and our responses to the comments for these proposals are set forth below.

    Comment: A majority of commenters supported setting the bid surety bond amount at $50,000, with some commenters suggesting that the bid surety bond amount could be raised in the future if necessary. One commenter stated that this is a “new requirement” and that “little is known about how [bid surety bonds] will work”. Another commenter stated that they do not “know of any real-life experience” with obtaining a bid surety bond. Another commenter stated that due to the unknown nature and specifics regarding the new bid surety bond, the requirement of $100,000 per CBA would be “administratively burdensome to qualify for and obtain the [bid surety] bond.” A commenter suggested that the large expenditure potentially required by suppliers bidding in multiple CBAs could “deter some highly qualified suppliers from choosing to participate in the bidding process.”

    Response: We agree with commenters that there may be unknown variables associated with obtaining this new bid surety bond, as well as potential financial and administrative burdens that will be placed on bidders. We believe that a lower bid surety bond amount would be appropriate to encourage continued participation of bidders in the CBP and are therefore revising the bid surety bond amount to $50,000 in this final rule. While we acknowledge that a number of entities will be required to make large expenditures in order to obtain a bid surety bond for each CBA in which they are submitting a bid, we anticipate that this revision of the bid surety bond amount from $100,000 to $50,000 will reduce the overall burden on all suppliers. We intend to monitor the implementation of the bid surety bond requirement and will consider increasing the bid surety bond amount in future rulemaking if necessary.

    Comment: Several commenters proposed setting the bid surety bond amount higher for National Mail Order (NMO) suppliers, with a suggested range of $100,000 to $1,000,000, because the NMO competition has a “national scope” and NMO suppliers “operate nationally.”

    Response: We appreciate the comments suggesting that NMO suppliers should be required to obtain a higher bid surety bond amount because they furnish competitively bid items nationwide. Section 522(a) of MACRA requires CMS to set the bid surety bond amount for a competitive acquisition area within a range of $50,000 to $100,000. We proposed to implement the requirement to obtain a bid surety bond for each CBA in the manner required by MACRA, and we proposed that the bid surety bond amount be applied in a consistent manner and not vary by CBA. A “nationwide competitive bidding area” is defined in regulation at § 414.402 as a CBA that includes the United States, its Territories, and the District of Columbia. In the proposed rule, we did not contemplate setting a different bid surety bond amount for the NMO competition because the NMO competition is, by definition, a single CBA, and NMO suppliers are not a specific subset of suppliers. The contract acceptance rates for the original NMO competition and the NMO Recompete were 95 percent and 100 percent, respectively, which indicates to us that a higher bid surety bond amount for an NMO competition is not necessary at this time. Furthermore, the highest bid surety bond amount we are permitted to set under section 522(a) of MACRA is $100,000. In this final rule, we are setting the bond amount at $50,000 for all suppliers.

    Comment: One commenter suggested implementing stronger penalties for submission of false bid surety bonds such as a prohibition from participation in all future rounds of the CBP.

    Response: We did not propose to prohibit an entity from participation in all future rounds of the CBP in this rulemaking and do not think it is necessary at this time because we believe that referring bidding entities that provide a falsified bid surety bond to the Office of the Inspector General and Department of Justice for further investigation is sufficient.

    Comment: A commenter inquired as to why the bid surety bond was only required until January 1, 2019.

    Response: This commenter's interpretation that the bid surety bond is only required until January 1, 2019 is incorrect. Section 1847(a)(1)(G) of the Act provides that the bid bond requirement is applicable to rounds of competition beginning not earlier than January 1, 2017 and not later than January 1, 2019. Thus, the bid surety bond will be required by bidders submitting bids starting with the Round 1 2019 competition.

    Comment: Several commenters suggested that CMS create a limit on either the amount of bid surety bonds required to be purchased by an entity, or the amount of bid surety bonds that could be forfeited by an entity in the event of default.

    Response: Section 1847(a)(1)(G) of the Act does not provide us with the authority to limit the number of bid surety bonds purchased by an entity or to place a cap on the forfeiture amount. Section 1847(a)(1)(G) of the Act explicitly states that a bid surety bond must be purchased for each competitive acquisition area in which a bidder is submitting a bid.

    Comment: One commenter suggested that CMS add a provision that sets forth the discharge of the authorized surety more explicitly.

    Response: For purposes of responding to this comment, we are assuming that the term discharge refers to the return of the bid surety bond liability. We will issue guidance (for example, in the Request for Bids instructions) prior to the opening of the bidding window on the mechanism for the return of the bid surety bond liability to the bidding entity.

    Final Rule Action: As a result of the comments received regarding the bid surety bond requirement, and our reevaluation of the potential impact to the CBP, in this final rule we are adopting a lower amount of $50,000 for the bid surety bond, instead of $100,000, for each CBA and revising § 414.412(h)(2)(i)(H) accordingly. We agree that there are a number of unknown variables associated with bid surety bonds and that financial and administrative burdens will be placed on bidders, and we have therefore revised the bid surety bond amount to $50,000. After considering the comments and for the reasons set forth previously, the provisions at § 414.412(h)(1) through (h)(2)(i)(G) for bid surety bonds will be finalized. However, we have updated § 414.412(h)(2)(i)(D) to reference § 414.412(h)(3), which specifies the conditions of the bond. In addition, proposed § 414.412(h)(3) through (4) will be finalized as proposed.

    2. State Licensure Requirement

    We proposed to revise § 414.414(b)(3), “Conditions for awarding contracts,” to align with section 1847(b)(2)(A) of the Act, as amended by section 522(b) of MACRA (81 FR 42848). The amendment to the Act states that “[t]he Secretary may not award a contract to any entity under the competition conducted in an [sic] competitive acquisition area . . . to furnish such items or services unless the Secretary finds . . . [t]he entity meets applicable State licensure requirements.” The regulation at § 414.414(b)(3) stated that “[e]ach supplier must have all State and local licenses required to perform the services identified in the request for bids.” Therefore, we proposed revisions to § 414.414(b)(3) to align with the language of section 1847(b)(2)(A) of the Act as revised by section 522(b) of MACRA, to state that a contract will not be awarded to a bidding entity unless the entity meets applicable State licensure requirements (81 FR 42878). We noted, however, that this does not reflect a change in policy, as § 414.414(b)(3) already requires suppliers to have applicable State and local licenses (81 FR 42848).

    We sought comments on these proposals. The comments and our responses to the comments regarding these proposals are set forth below.

    Comment: One commenter stated that “state licensure for DMEPOS will add an extra layer of unnecessary regulation. Currently, we must also be accredited which costs thousands of dollars for the privilege just to have a license.”

    Response: We are not adding requirements or additional layers of regulation. Suppliers currently are required to have applicable state and local licenses under § 414.414(b)(3). The regulation we are finalizing at § 414.414(b)(3) simply captures the language of section 1847(b)(2)(A)(v) of the Act, as added by section 522 of MACRA, which prohibits CMS from awarding a contract to any entity in a CBA unless those requirements are met (81 FR 42848). Therefore, the change we are adopting in this final rule does not represent a change in policy.

    Comment: We received a number of comments on our proposed revisions to § 414.414(b)(3) that were beyond the scope of the proposed rulemaking.

    Response: These comments were beyond the scope of the proposed rulemaking, therefore, we will not be addressing these comments in our final rule.

    Final Rule Action: We are finalizing § 414.414(b)(3) as proposed, to state that a contract may not be awarded to a bidding entity unless the entity meets applicable State licensure requirements. This action does not place a new burden on suppliers nor does it represent a change in policy as CMS currently requires suppliers to be in compliance with all State and local licenses. The final regulation makes it explicit that CMS may not award a contract to any entity in a CBA unless the entity meets applicable State licensure requirements, as required by section 522(b) of MACRA.

    3. Appeals Process for a DMEPOS Competitive Bidding Breach of Contract Action

    We believe DMEPOS suppliers should have the option to appeal all actions that CMS may take for breaches of contract. As a result, we proposed revising § 414.423, Appeals Process for Termination of Competitive Bidding Contract, to expand the appeals process for suppliers who have been sent a notice of a breach of contract stating that CMS intends to take one or more of the actions described in § 414.422(g)(2) as a result of the breach (81 FR 42848). While we recognize that we have the authority to take one or more of the actions specified in § 414.422(g)(2), the current appeals process is available for only one of those actions, specifically, contract termination. Therefore, the proposed revisions would expand § 414.423 to allow appeal rights for each action specified in § 414.422(g)(2) for a breach of contract (81 FR 42848). If a supplier's notice of breach of contract includes more than one breach of contract action CMS would take, and the supplier chooses to appeal more than one action, CMS would make separate decisions for each breach of contract action after reviewing the hearing officer's recommendation (81 FR 42849). We also proposed revisions to § 414.422(g)(2) to remove the breach of contract actions of (1) requiring a contract supplier to submit a corrective action plan; and (2) revoking the supplier number of the contract supplier (81 FR 42849). We proposed removing § 414.422(g)(2)(i) because a corrective action plan is already a part of the formal appeals process outlined in § 414.423 and is therefore unnecessary to list as an action CMS can impose on contract suppliers that it considers to be in breach (81 FR 42849). We also proposed removing the supplier number revocation action at § 414.422(g)(2)(v) because the DMEPOS CBP does not have the authority to revoke a DMEPOS supplier's Medicare billing number (81 FR 42849). Furthermore, we proposed revising this section to state that CMS will specify in the notice of breach of contract which actions it is taking as a result of the breach of contract (81 FR 42849).

    Proposed revisions were made throughout § 414.423 to extend the appeals process to any breach of contract actions described in proposed § 414.422(g)(2) that we might take as a result of the breach, rather than just contract termination actions (81 FR 42849). We also proposed removing the references to termination throughout § 414.423 and instead cross-reference all of the breach of contract actions in proposed § 414.422(g)(2) (81 FR 42849).

    In proposed revisions to § 414.423(a), we proposed deleting the language indicating that termination decisions made under this section are final and binding as this reference is not inclusive of all breach of contract actions, and the finality of a decision is correctly addressed in paragraph (k)(4) of this section (81 FR 42878).

    In the proposed revisions to § 414.423(b)(1), we proposed deleting the phrase “either in part or in whole” because § 414.422(g)(1) specifies that any deviation from contract requirements constitutes a breach of contract (81 FR 42878). In addition, we proposed removing the requirement that the breach of contract notice to the supplier be delivered by certified mail from § 414.423(b)(1) to allow CMS the flexibility to use other secure methods for notifying suppliers (81 FR 42878). We also proposed changes to § 414.423(b)(2)(i) and (ii) (81 FR 42878). The revised § 414.423(b)(2)(i) states that the notice of breach of contract would include the details of the breach of contract, while § 414.423(b)(2)(ii) requires CMS to include in the notice the action or actions that it is taking as a result of the breach of contract and the timeframes associated with each breach of contract action (81 FR 42878). For example, when a notice of breach of contract includes an action of preclusion, the effective date of the preclusion would be the date specified in the letter, and the timeframe of the preclusion would specify the round of the CBP from which the supplier is precluded. We also proposed to add language to paragraph (b)(2)(vi) to specify that the effective date of the action or actions that CMS would take is the date specified by CMS in the notice of breach of contract, or 45 days from the date of the notice of breach of contract, unless a timely hearing request has been filed or, where CMS allows a supplier to submit a CAP, a CAP has been submitted within 30 days of the date of the notice of breach of contract (81 FR 42878-79).
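
    One way to read the effective-date language in proposed paragraph (b)(2)(vi) is sketched below in Python. This is an illustration of the timing rule only, under the assumption that a timely hearing request or a CAP submitted within 30 days stays the action; the dates and field names are hypothetical, and the sketch is not regulatory text.

from datetime import date, timedelta
from typing import Optional

# Hedged sketch of the effective-date rule described for proposed
# § 414.423(b)(2)(vi). All inputs and field names are hypothetical.
def effective_date(notice_date: date,
                   date_specified_in_notice: Optional[date],
                   timely_hearing_requested: bool,
                   cap_submitted_on: Optional[date]) -> Optional[date]:
    # A timely hearing request stays the action (assumption for illustration).
    if timely_hearing_requested:
        return None
    # Where CMS allows a CAP, a CAP submitted within 30 days of the notice
    # also stays the action (assumption for illustration).
    if cap_submitted_on is not None and cap_submitted_on <= notice_date + timedelta(days=30):
        return None
    # Otherwise the action takes effect on the date specified in the notice,
    # or 45 days from the date of the notice if no date is specified.
    return date_specified_in_notice or (notice_date + timedelta(days=45))

print(effective_date(date(2017, 1, 2), None, False, None))               # 2017-02-16
print(effective_date(date(2017, 1, 2), None, False, date(2017, 1, 20)))  # None (CAP under review)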

    We proposed revising § 414.423(c)(2)(ii) to specify that the subsequent notice of breach of contract may, at CMS' discretion, allow the supplier to submit another written CAP pursuant to § 414.423(c)(1)(i) (81 FR 42879). We proposed to revise § 414.423(e)(3) to clarify that CMS retains the option to offer the supplier an opportunity to submit another CAP, if CMS deems appropriate, in situations where CMS has already accepted a prior CAP (81 FR 42879).

    Proposed revisions to § 414.423(f)(5) explain that in the event the supplier fails to timely request a hearing, the breach of contract action or actions specified in the notice of breach of contract would take effect 45 days from the date of the notice of breach of contract (81 FR 42879). Proposed revisions to § 414.423(g)(3) were made to clarify that the hearing scheduling notice must be sent to all parties, not just the supplier (81 FR 42879).

    We proposed revising § 414.423(j) to clarify that the hearing officer would issue separate recommendations for each breach of contract action in situations where there is more than one breach of contract action presented at the hearing (81 FR 42880).

    In § 414.423(k), we proposed specifying that CMS would make separate decisions for each recommendation when the hearing officer issues multiple recommendations (81 FR 42880). In addition, we proposed revisions to this paragraph to expand CMS' final determination process, clarifying that the notice of CMS' decision would be sent to the supplier and the hearing officer, would indicate whether any breach of contract actions included in the notice of breach of contract still apply and will be effectuated, and would indicate the effective date of the breach of contract action, if applicable (81 FR 42880). We also proposed expanding § 414.423(l), effect of breach of contract action or actions, to specify the effects of all contract actions described in § 414.422(g)(2) (81 FR 42880). In addition, we proposed adding § 414.423(l)(1), effect of contract suspension, to outline the supplier's requirements regarding furnishing items and reimbursement for the duration of the contract suspension, as well as the details regarding the supplier's obligation to notify beneficiaries (81 FR 42880). We also proposed adding § 414.423(l)(3), effect of preclusion, to specify that a supplier who is precluded would not be allowed to participate in a specific round of the CBP, which would be identified in the original notice of breach of contract (81 FR 42880). Additionally, we proposed adding § 414.423(l)(4), effect of other remedies allowed by law, to state that if CMS decides to impose other remedies under § 414.422(g)(2)(iv), the details of the remedies would be included in the notice of breach of contract (81 FR 42880). Proposed § 414.423(l) also specifies the steps suppliers must take to notify beneficiaries after CMS takes the contract action or actions described in § 414.422(g)(2) (81 FR 42880). Lastly, we proposed to remove language from § 414.423(l)(2), effect of contract termination, to avoid confusion as to which supplier is providing notice to the beneficiary (81 FR 42880).

    We sought comments on these proposals. The comments and our responses to the comments regarding these proposals are set forth below.

    Comment: Numerous commenters suggested that notification of breach of contract should be sent in a manner that provides a “verifiable and guarantee receipt.” Some commenters suggested retaining certified mail in addition to the proposed secure method.

    Response: We will send breach of contract notifications to contract suppliers via electronic means in the future once we have this functional capability. Specifically, contract suppliers will receive an email notifying them to check their secure inbox located in CMS' secure online portal for the DMEPOS CBP (currently known as “Connexion”). Once a supplier logs in to retrieve the notice, the audit logs will record the download history for the document (for example, user name and date/time stamp). However, until the portal has this functionality, we will continue to provide suppliers with notification through certified mail. We will provide advance notice to contract suppliers when the transition to electronic breach of contract notifications occurs.

    Comment: One commenter stated that in the breach of contract hearing scheduling notice CMS should “clearly state the parties that would receive the notice in addition to the supplier.”

    Response: The supplier and CMS are the parties to the hearing (and the parties may have representatives appear on their behalf). We do not find it necessary, however, to further describe these parties in the breach of contract hearing scheduling notice or make this delineation within the text of § 414.423.

    Comment: One commenter stated that CMS should address the problem of binding bids by exercising its general contracting authority to include in each competitive bidding contract severe financial penalties for any supplier that does not provide services after signing a contract. This penalty should also be referenced as part of the appeals process policies.

    Response: We have adopted regulations to take one or more of the breach of contract actions outlined in § 414.422(g)(2) against contract suppliers that accept competitive bidding contracts and fail to meet the terms of the contracts. We believe those actions are appropriate and we are not considering other types of penalties at this time.

    Final Rule Action: After considering the comments and for the reasons we discussed previously, we are finalizing the proposed changes to § 414.423 to expand the breach of contract appeals process to all breach of contract actions that CMS may take pursuant to § 414.422(g)(2). We are also finalizing § 414.422(g)(2) to adopt the proposed changes to the breach of contract actions that CMS may take when a supplier is in breach of its competitive bidding contract (81 FR 42849). We are removing the word “only” from § 414.423(c)(2)(ii) to clarify when suppliers may submit a CAP. CMS proposed affording suppliers the opportunity to submit a CAP, at CMS' discretion, when the supplier receives a subsequent notice of breach of contract action (81 FR 42849). Removing “only” from this section clarifies that CMS may accept a CAP in response to a subsequent termination notice and not just the initial termination notice. This final regulation provides suppliers who are in breach of contract the opportunity to appeal any breach of contract action that CMS may take, rather than only having the opportunity to appeal a contract termination action. This provides greater transparency to suppliers and affords CMS greater flexibility in managing suppliers that are in breach of their competitive bidding contract. Also, in § 414.423(c)(2)(ii), we are changing “paragraph (1)(i)” to “paragraph (c)(1)(i)” to make the paragraph reference clearer.

    In this final rule we are also making a revision to § 414.402, Definitions, for the term “hearing officer”. In the revised definition, we are removing the references to “termination” and replacing them with “breach of contract” to align with the final changes to § 414.423 that we are adopting in this final rule, as well as deleting the abbreviation “(HO)”, which is no longer used in § 414.423. As we discuss in section XII. “Waiver of Proposed Rulemaking,” because these revisions to § 414.402 are technical in nature, to align the definition of hearing officer with the terminology and process finalized in § 414.423, we find good cause to waive notice and comment rulemaking for this definition revision.

    VI. Method for Adjusting DMEPOS Fee Schedule Amounts for Similar Items With Different Features Using Information From Competitive Bidding Programs (CBPs)

    A. Background

    1. Fee Schedule Payment Basis for Certain DMEPOS

    Section 1834(a) of the Social Security Act (the Act) governs payment for durable medical equipment (DME) covered under Part B and under Part A for a home health agency and provides for the implementation of a fee schedule payment methodology for DME furnished on or after January 1, 1989. Sections 1834(a)(2) through (a)(7) of the Act set forth separate payment categories of DME and describe how the fee schedule for each of the following categories is established:

    • Inexpensive or other routinely purchased items;

    • Items requiring frequent and substantial servicing;

    • Customized items;

    • Oxygen and oxygen equipment;

    • Other covered items (other than DME); and

    • Other items of DME (capped rental items).

    Section 1834(h) of the Act governs payment for prosthetic devices, prosthetics, and orthotics (P&O) and sets forth fee schedule payment rules for P&O. Effective for items furnished on or after January 1, 2002, payment is also made on a national fee schedule basis for parenteral and enteral nutrition (PEN) in accordance with the authority under section 1842(s) of the Act. The term “enteral nutrition” will be used throughout this document to describe enteral nutrients, supplies, and equipment covered as prosthetic devices in accordance with section 1861(s)(8) of the Act and paid for on a fee schedule basis, as well as enteral nutrients under the Medicare DMEPOS Competitive Bidding Program (CBP), as authorized under section 1847(a)(2)(B) of the Act. Additional background discussion about DMEPOS items subject to section 1834 of the Act, rules for calculating reasonable charges, and fee schedule payment methodologies for PEN and for DME, prosthetic devices, prosthetics, orthotics, and surgical dressings was provided in the July 11, 2014 proposed rule at 79 FR 40275 through 40277.

    2. DMEPOS Competitive Bidding Programs Payment Rules

    Section 1847(a) of the Act, as amended by section 302(b)(1) of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) (Pub. L. 108-173), requires the Secretary to establish and implement CBPs in competitive bidding areas (CBAs) throughout the United States for contract award purposes for the furnishing of certain competitively priced DMEPOS items and services. The programs mandated by section 1847(a) of the Act are collectively referred to as the “Medicare DMEPOS Competitive Bidding Program.” Section 1847(a)(2) of the Act provides that the items and services to which competitive bidding applies are:

    • Off-the-shelf (OTS) orthotics for which payment would otherwise be made under section 1834(h) of the Act;

    • Enteral nutrients, equipment and supplies described in section 1842(s)(2)(D) of the Act; and

    • Certain DME and medical supplies, which are covered items (as defined in section 1834(a)(13) of the Act) for which payment would otherwise be made under section 1834(a) of the Act.

    The DME and medical supplies category includes items used in infusion and drugs (other than inhalation drugs) and supplies used in conjunction with DME, but excludes class III devices under the Federal Food, Drug, and Cosmetic Act and Group 3 or higher complex rehabilitative power wheelchairs and related accessories when furnished with such wheelchairs. Sections 1847(a) and (b) of the Act specify certain requirements and conditions for implementation of the Medicare DMEPOS CBP.

    3. Methodologies for Adjusting Payment Amounts Using Information From the DMEPOS Competitive Bidding Program

    Below is a summary of the three general methodologies used in adjusting payment amounts for DMEPOS items in areas that are not CBAs using information from the DMEPOS CBP. Also summarized are the processes for updating adjusted fee schedule amounts and for addressing the impact of unbalanced bidding on SPAs when adjusting payment amounts using information from the DMEPOS CBPs. We published a final rule titled “Medicare Program; End-Stage Renal Disease Prospective Payment System, Quality Incentive Program, and Durable Medical Equipment, Prosthetics, Orthotics, and Supplies” on November 6, 2014 (hereinafter, the CY 2015 final rule), in which we adopted these methodologies (79 FR 66223 through 66233). We also issued program instructions on these methodologies in Transmittal #3350 (Change Request #9239), issued on September 11, 2015, and Transmittal #3416 (Change Request #9431), issued on November 23, 2015. The CBP product categories, HCPCS codes, and single payment amounts (SPAs) included in the CBPs are available on the Competitive Bidding Implementation Contractor (CBIC) Web site: http://www.dmecompetitivebid.com/palmetto/cbic.nsf/DocsCat/Home.

    Section 1834(a)(1)(F)(ii) of the Act provides the Secretary with the authority to use information from the DMEPOS CBPs to adjust the DME payment amounts for covered items furnished on or after January 1, 2011, in areas where competitive bidding is not implemented for the items. Similar authority exists at section 1834(h)(1)(H)(ii) of the Act for OTS orthotics. Also, section 1842(s)(3)(B) of the Act provides authority for making adjustments to the fee schedule amounts for enteral nutrients, equipment, and supplies (enteral nutrition) based on information from CBPs. Section 1834(a)(1)(F)(ii) of the Act also requires adjustments to the payment amounts for all DME items subject to competitive bidding that are furnished on or after January 1, 2016, in areas where CBPs have not been implemented.

    For items furnished on or after January 1, 2016, section 1834(a)(1)(F)(iii) of the Act requires us to continue to make such adjustments to DME payment amounts where CBPs have not been implemented as additional covered items are phased in or information is updated as contracts are re-competed. Section 1834(a)(1)(G) of the Act requires that the methodology used to adjust payment amounts for DME and OTS orthotics using information from the CBPs be promulgated through notice and comment rulemaking. Also, section 1834(a)(1)(G) of the Act requires that we consider the “costs of items and services in areas in which such provisions [sections 1834(a)(1)(F)(ii) and 1834(h)(1)(H)(ii)] would be applied compared to the payment rates for such items and services in competitive acquisition [competitive bidding] areas.”

    a. Adjusted Fee Schedule Amounts for Areas Within the Contiguous United States

    Pursuant to § 414.210(g)(1), CMS determines a regional price for DME items or services for each state in the contiguous United States and the District of Columbia equal to the un-weighted average of the single payment amounts (SPAs) for an item or service for CBAs that are fully or partially located in the same region that contains the state or the District of Columbia. CMS uses the regional prices to determine a national average price equal to the un-weighted average of the regional prices. The regional SPAs (RSPAs) cannot be greater than 110 percent of the national average price (national ceiling) or less than 90 percent of the national average price (national floor). This methodology applies to enteral nutrition and most DME items furnished in the contiguous United States (that is, items that are included in more than 10 CBAs).

    The fee schedule amounts for areas defined as rural areas for the purposes of the CBP are adjusted to 110 percent of the national average price described above. The regulations at § 414.202 define a rural area to mean, for the purpose of implementing § 414.210(g), a geographic area represented by a postal zip code if at least 50 percent of the total geographic area included in the zip code is estimated to be outside any metropolitan statistical area (MSA). A rural area also includes a geographic area represented by a postal zip code that is a low population density area excluded from a CBA in accordance with the authority provided by section 1847(a)(3)(A) of the Act at the time the rules at § 414.210(g) are applied.
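
    The following short sketch, written in Python, restates the § 414.210(g)(1) arithmetic described above. It is illustrative only and is not regulation text; the function name and the regions and SPA figures used in the example are hypothetical.

def adjusted_fee_amounts(spas_by_region):
    """spas_by_region maps each region to the list of SPAs for one item from
    the CBAs fully or partially located in that region (hypothetical input)."""
    # Regional price: un-weighted average of the SPAs in each region.
    regional_prices = {r: sum(v) / len(v) for r, v in spas_by_region.items()}
    # National average price: un-weighted average of the regional prices.
    national_average = sum(regional_prices.values()) / len(regional_prices)
    ceiling = 1.10 * national_average  # national ceiling
    floor = 0.90 * national_average    # national floor
    # Adjusted fee for non-rural areas: the regional price, bounded by the floor and ceiling.
    non_rural = {r: min(max(p, floor), ceiling) for r, p in regional_prices.items()}
    # Rural areas are paid 110 percent of the national average price.
    rural = ceiling
    return non_rural, rural

# Hypothetical example with three regions and made-up SPAs:
non_rural_fees, rural_fee = adjusted_fee_amounts({
    "Region A": [100.00, 95.00],
    "Region B": [80.00, 85.00, 90.00],
    "Region C": [110.00],
})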

    b. Adjusted Fee Schedule Amounts for Areas Outside the Contiguous United States

    Pursuant to § 414.210(g)(2), in areas outside the contiguous United States (that is, noncontiguous areas such as Alaska, Guam, and Hawaii), the fee schedule amounts are reduced to the greater of the average of SPAs for the item or service for CBAs outside the contiguous United States (currently only applicable to Honolulu, Hawaii) or the national ceiling amounts calculated for an item or service based on RSPAs for CBAs within the contiguous United States.

    c. Adjusted Fee Schedule Amounts for Items Included in 10 or Fewer CBAs

    Pursuant to § 414.210(g)(3), for DME items included in ten or fewer CBAs, the fee schedule amounts for the items are reduced to 110 percent of the un-weighted average of the SPAs from the ten or fewer CBAs. This methodology applies to all areas within and outside the contiguous United States.
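
    The calculations under § 414.210(g)(2) and (g)(3) described above can be sketched the same way. The sketch below is illustrative only and is not regulation text; the function names are hypothetical.

def noncontiguous_fee(noncontiguous_spas, national_ceiling):
    # Greater of the average of the SPAs from CBAs outside the contiguous United
    # States (currently only Honolulu) or the national ceiling amount.
    return max(sum(noncontiguous_spas) / len(noncontiguous_spas), national_ceiling)

def limited_competition_fee(spas_from_ten_or_fewer_cbas):
    # 110 percent of the un-weighted average of the SPAs from the 10 or fewer CBAs.
    return 1.10 * sum(spas_from_ten_or_fewer_cbas) / len(spas_from_ten_or_fewer_cbas)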

    d. Updating Adjusted Fee Schedule Amounts

    Section 1834(a)(1)(F)(ii) of the Act requires the Secretary to use information from the CBP to adjust the DMEPOS payment amounts for items furnished on or after January 1, 2016, and section 1834(a)(1)(F)(iii) requires the Secretary to continue to make such adjustments as additional covered items are phased in or information is updated as competitive bidding contracts are recompeted. In accordance with § 414.210(g)(8), the adjusted fee schedule amounts are revised when an SPA for an item or service is updated following one or more new competitions and as other items are added to CBPs. DMEPOS fee schedule amounts that are adjusted using SPAs will not be subject to the annual DMEPOS covered item update and will only be updated when SPAs from the CBP are updated. Updates to the SPAs may occur at the end of a contract period as contracts are recompeted, as additional items are added to the CBP, or as new CBAs are added. In cases where adjustments to the fee schedule amounts are made using any of the methodologies described above, and the adjustments are based solely on the SPAs from CBPs that are no longer in effect, the SPAs are updated before being used to adjust the fee schedule amounts. The SPAs are adjusted based on the percentage change in the Consumer Price Index for all Urban Consumers (CPI-U) over the period of time described in § 414.210(g)(4). For example, if the adjustments were to be effective January 1, 2017, the SPAs from CBPs no longer in effect would be updated based on the percentage change in the CPI-U from the mid-point of the last year the SPAs were in effect to June 30, 2016, the end of the month that is 6 months prior to the date the initial fee schedule reductions go into effect. Following the initial adjustment, if the adjustments continue to be based solely on the SPAs that are no longer in effect, the SPAs will be updated every 12 months using the CPI-U for the 12-month period ending 6 months prior to the date the updated payment adjustments would go into effect.
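
    The CPI-U update described above amounts to scaling a stale SPA by the percentage change in the index over the applicable period. The following sketch is illustrative only and is not regulation text; the function name and the index values are hypothetical.

def update_stale_spa(spa, cpiu_start, cpiu_end):
    """Update an SPA from a CBP that is no longer in effect by the percentage
    change in the CPI-U over the period described in Sec. 414.210(g)(4);
    cpiu_start and cpiu_end are index values at the start and end of that period."""
    return spa * (1 + (cpiu_end - cpiu_start) / cpiu_start)

# Hypothetical example: a $100.00 SPA and a CPI-U increase of about 1.2 percent.
updated_spa = update_stale_spa(100.00, 237.0, 239.8)  # roughly 101.18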

    e. Method for Avoiding HCPCS Price Inversions When Adjusting Fee Schedule Amounts Using Information From the DMEPOS Competitive Bidding Program

    In our CY 2015 final rule (79 FR 66263), we adopted a method to address unbalanced bidding, a situation that results in price inversions under CBPs. We added § 414.210(g)(6) to the regulations for certain limited situations where bidding for similar but different enteral infusion pumps and standard power wheelchairs resulted in the SPAs for higher utilized items with additional features (for example, an enteral infusion pump with an alarm or a Group 2 power wheelchair) being less than the SPAs for lower utilized items without those additional features (for example, an enteral infusion pump without an alarm or a Group 1 power wheelchair). A Group 2 power wheelchair is faster, travels further, and climbs higher obstacles than a Group 1 power wheelchair. Under CBPs, when similar items with different features are included in the same product category, the HCPCS code with higher beneficiary utilization at the time of the competition receives a higher weight and the bid for this item has a greater impact on the supplier's composite bid as well as the competitiveness of the supplier's overall bid for the product category (PC) within the CBP as compared to the bid for the less frequently utilized item. If, at the time the competition takes place under the CBP, the item with the additional features is priced higher and over time is utilized more than the other similar items without these features, it could result in unbalanced bidding, which in turn causes the item without the additional features to receive a higher single payment amount under the CBP than the item with the additional features. This situation results in a price inversion, where the higher weighted and higher priced item at the time of the competition becomes the lower priced item in the CBP following the competition. Unbalanced bidding can occur when a bidder has a higher incentive to submit a lower bid for one item than another because the item has a higher weight and therefore a greater effect on the supplier's composite bid for the product category than the other item. Our current regulation at § 414.210(g)(6) for adjusting DMEPOS fee schedule amounts paid in non-CBAs using information from CBPs includes methodologies to address price inversions for power wheelchairs and enteral infusion pumps only. This rule limits SPAs for items without additional features (for example, an enteral infusion pump without an alarm) to the SPAs for items with the additional features (for example, an enteral infusion pump with an alarm) prior to using these SPAs to adjust fee schedule amounts.

    For example, if most of the utilization or allowed services for standard power wheelchairs are for higher paying Group 2 wheelchairs rather than Group 1 wheelchairs at the time the competition occurs, the bids for the Group 2 wheelchairs have a greater impact on the supplier's composite bid and chances of being offered a contract. Therefore, the supplier has a much greater incentive to make a lower bid for the Group 2 wheelchairs relative to the fee schedule payment than they do for the Group 1 wheelchairs. If, for example, Medicare is paying $450 per month for a Group 2 wheelchair at the time of the competition and a Group 2 wheelchair has a high weight, while Medicare is paying $350 per month for the Group 1 version of the same wheelchair at the time of the competition and the Group 1 wheelchair has a very low weight, the bids for the two items could be unbalanced or inverted whereby the bid submitted for the Group 2 wheelchair is $250 (44 percent below the fee schedule amount for the item) while the bid submitted for the Group 1 wheelchair is $300 (14 percent below the fee schedule amount for the item). A price inversion therefore results where Medicare previously paid $450 for one item and now pays $250, and previously paid $350 for another item for which it now pays $300. The item weight under the CBP results in Medicare paying more for a Group 1 power wheelchair than for a higher-performing Group 2 power wheelchair.
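
    The arithmetic of this example can be restated in a few lines of Python. The sketch below simply recomputes the figures used in the example above; it is illustrative only and is not part of the bidding rules.

# Figures from the example in the preceding paragraph.
fee_group2, fee_group1 = 450.00, 350.00   # monthly fee schedule amounts at bid time
bid_group2, bid_group1 = 250.00, 300.00   # bids submitted under the competition

discount_group2 = (fee_group2 - bid_group2) / fee_group2   # about 0.44 (44 percent)
discount_group1 = (fee_group1 - bid_group1) / fee_group1   # about 0.14 (14 percent)

# The inversion: the higher-performing Group 2 chair ends up priced below the
# Group 1 chair even though it was the higher priced item under the fee schedule.
assert bid_group2 < bid_group1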

    In the CY 2015 proposed rule published on July 11, 2014, in the Federal Register (79 FR 40208) (hereinafter, CY 2015 proposed rule), we referred to an additional feature that one item has and another item does not have as a “hierarchal” feature, meaning that one item provides an additional, incremental service that the other item does not provide (79 FR 40287). For example, HCPCS code B9002 describes an enteral infusion pump with an alarm, while code B9000 describes an enteral infusion pump without an alarm. Code B9002 describes an item that provides an additional service (an alarm), and the alarm was referred to as a hierarchal feature, meaning that the item with the alarm provides a service beyond what the item without the alarm provides. Commenters believed the term “hierarchal feature” should be better defined (79 FR 66231). We agreed and finalized the rule only for the specific scenarios addressed in the CY 2015 proposed rule, namely, enteral infusion pumps and standard power wheelchairs. Therefore, the final regulation at § 414.210(g)(6)(i) specifically requires that in situations where an SPA for an enteral infusion pump without alarm is greater than the SPA in the same CBA for an enteral infusion pump with alarm, the SPA for the enteral infusion pump without alarm is adjusted to equal the SPA for the enteral infusion pump with alarm prior to applying the payment adjustment methodologies for these items in non-CBAs. We also adopted regulations at § 414.210(g)(6)(ii) through (v) to address price inversions for standard power wheelchairs. In the CY 2015 final rule at 79 FR 66231, we stated that we would consider whether to add a definition of hierarchal feature, or to apply the rule we proposed to other items not identified in the final rule, through future notice and comment rulemaking.

    B. Summary of the Proposed Provisions on the Method for Adjusting DMEPOS Fee Schedule Amounts for Similar Items With Different Features Using Information From Competitive Bidding Programs

    The proposed rule, titled “End-Stage Renal Disease Prospective Payment System, Coverage and Payment for Renal Dialysis Services Furnished to Individuals with Acute Kidney Injury, End-Stage Renal Disease Quality Incentive Program, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program Bid Surety Bonds, State Licensure and Appeals Process for Breach of Contract Actions, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program and Fee Schedule Adjustments, Access to Care Issues for Durable Medical Equipment; and the Comprehensive End-Stage Renal Disease Care Model” (81 FR 42802 through 42880), was published in the Federal Register on June 30, 2016, with a comment period that ended on August 23, 2016. During the comment period, we issued a correction to the proposed rule with minor technical edits, including corrections to several HCPCS codes we listed describing groupings of similar items with different features (81 FR 42825). The correction notice, which went on public display on August 2, 2016, was published in the Federal Register on August 3, 2016 (FR Doc. C1-2016-15188) (81 FR 51147).

    In the proposed rule, for the Method for Adjusting DMEPOS Fee Schedule Amounts for Similar Items with Different Features using Information from Competitive Bidding Programs, we proposed changes to the methodologies for adjusting fee schedule amounts for DMEPOS items using information from CBPs and for submitting bids and establishing single payment amounts under the CBPs for certain groupings of similar items with different features.

    After performing a review of all HCPCS codes in the CBPs in order to comply with our commitment to consider whether to apply the regulation at § 414.210(g)(6) to other cases of price inversion that resulted from unbalanced bidding that were not identified or addressed in the CY 2015 final rule (79 FR 66231), we found a significant number of price inversions resulting from the 2016 DMEPOS CBP Round 2 Recompete for contract periods beginning July 1, 2016. The items affected included transcutaneous electrical nerve stimulation (TENS) devices, walkers, hospital beds, power wheelchairs, group 2 support surfaces (mattresses and overlays), enteral infusion pumps, and seat lift mechanisms. As a result of our review, we proposed a rule that would expand the provisions of § 414.210(g)(6) to address these and other price inversions.

    To perform our review, we examined instances within the HCPCS where there are multiple codes for an item (for example, a walker) that are distinguished by the addition of features (for example, folding walker versus rigid walker or wheels versus no wheels) which may experience price inversions. Our review included all groupings of similar items with different features within each of the product categories. We have included the HCPCS codes describing groupings of similar items that would be subject to this final rule and the features associated with each code below:

    Enteral Infusion Pumps: B9000 Pump without alarm; B9002 Pump with alarm.
    Hospital Beds: E0250 Fixed Height With Mattress & Side Rails; E0251 Fixed Height With Side Rails; E0255 Variable Height With Mattress & Side Rails; E0256 Variable Height With Side Rails; E0260 Semi-Electric With Mattress & Side Rails; E0261 Semi-Electric With Side Rails; E0290 Fixed Height With Mattress; E0291 Fixed Height; E0292 Variable Height With Mattress; E0293 Variable Height; E0294 Semi-Electric With Mattress; E0295 Semi-Electric; E0301 Heavy Duty Extra Wide With Side Rails; E0302 Extra Heavy Duty Extra Wide With Side Rails; E0303 Heavy Duty Extra Wide With Mattress & Side Rails; E0304 Extra Heavy Duty Extra Wide With Mattress & Side Rails.
    Mattresses and Overlays: E0277 Powered mattress; E0371 Powered overlay; E0372 Non-powered overlay; E0373 Non-powered mattress.
    Power Wheelchairs: K0813 Group 1 Sling Seat, Portable; K0814 Group 1 Captains Chair, Portable; K0815 Group 1 Sling Seat; K0816 Group 1 Captains Chair, Standard Weight; K0820 Group 2 Sling Seat, Portable; K0821 Group 2 Captains Chair, Portable; K0822 Group 2 Sling Seat, Standard Weight; K0823 Group 2 Captains Chair, Standard Weight.
    Seat Lift Mechanisms: E0627 Electric; E0628 Electric; E0629 Non-electric.
    Transcutaneous Electrical Nerve Stimulation (TENS) Devices: E0720 Two leads; E0730 Four leads.
    Walkers: E0130 Rigid; E0135 Folding; E0141 Rigid With Wheels; E0143 Folding With Wheels.

    As shown in Table 20, under the 2015 DMEPOS fee schedule, Medicare pays more for walkers with wheels than for walkers without wheels. The same is true for walkers that fold as compared to walkers that do not fold. Walkers that are rigid and do not fold have extremely low utilization, and a walker that folds and has wheels is used much more frequently than a walker that folds but does not have wheels.

    Table 20—Average of 2015 DMEPOS Fee Schedule Amounts for Purchase of Walkers

    Code    Item                             Average 2015 fee schedule amount 1    2014 Allowed services
    E0130   Rigid Walker without Wheels      $64.97                                59
    E0135   Folding Walker without Wheels    78.97                                 5,053
    E0141   Rigid Walker with Wheels         107.89                                455
    E0143   Folding Walker with Wheels       111.69                                95,939

    1 Average of 2015 fee schedule amounts for all areas.

    Under the DMEPOS CBP, because the folding walker without wheels (E0135) is used more frequently than the rigid walker without wheels (E0130), code E0135 receives a higher weight than code E0130. In addition, under the 2015 fee schedule, Medicare pays more for code E0135 than for code E0130. Weights are assigned to individual items (HCPCS codes) within a product category (for example, standard mobility equipment) under the DMEPOS CBP for the purpose of calculating a composite bid for each supplier submitting bids for that product category in a CBA. The weights are based on the beneficiary utilization rate using national data when compared to other items in the same product category. The beneficiary utilization rate of an item captures the total allowed services for the item from Medicare claims submitted for the item on a national basis. A supplier's bid for each item in the product category is multiplied by the weight assigned to the item, and the sum of these calculations equals the supplier's composite bid. Contracts are offered to eligible suppliers with the lowest composite bids. Therefore, the higher the weight for an item in a product category, the more the bid for that item will affect the supplier's composite bid and chances of being offered a contract for that product category. Conversely, the lower the weight for an item in a product category, the less the bid for that item will affect the supplier's composite bid and chances of being offered a contract for that product category.
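
    The composite bid calculation described above can be sketched as follows. The supplier bids shown are hypothetical, the weights mirror the Round 2 Recompete walker weights cited later in this section, and the sketch is illustrative only rather than part of the bidding rules.

def composite_bid(bids, weights):
    """bids and weights map HCPCS codes to a supplier's bid amounts and to the
    utilization-based item weights for the product category."""
    return sum(bids[code] * weights[code] for code in bids)

# Hypothetical bids from two suppliers for a grouping of walker codes.
weights = {"E0130": 0.001, "E0135": 0.048, "E0141": 0.005, "E0143": 0.946}
supplier_a = {"E0130": 60.00, "E0135": 70.00, "E0141": 100.00, "E0143": 50.00}
supplier_b = {"E0130": 40.00, "E0135": 45.00, "E0141": 70.00, "E0143": 60.00}

# The bid for the heavily weighted code (E0143) dominates each composite bid, so
# supplier A is more competitive despite bidding higher on the low-volume codes.
print(composite_bid(supplier_a, weights))  # about 51.22
print(composite_bid(supplier_b, weights))  # about 59.31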

    Similarly, because the folding walker with wheels (E0143) is used more frequently than the rigid walker with wheels (E0141), and more frequently than the walkers without wheels (E0130 and E0135), it receives a higher weight under the DMEPOS CBP than all three of the less expensive, less frequently utilized codes with fewer features: codes E0130, E0135, and E0141. Under the 2015 fee schedule, Medicare pays more for code E0143 than for codes E0130 (rigid walkers without wheels), E0135 (folding walkers without wheels), or E0141 (rigid walkers with wheels). Under the Round 2 Recompete, the fact that code E0143 (folding walkers with wheels) received a far greater weight than the other walkers that either did not fold, did not have wheels, or had neither feature resulted in the price inversions illustrated in Table 21. The first price inversion involves the rigid walker without wheels (E0130), which has lower fee schedule amounts on average and a lower weight than the folding walker without wheels (E0135), yet under competitive bidding has a greater SPA than the folding walker. The second price inversion involves the rigid walker with wheels (E0141), which has lower fee schedule amounts on average and a lower weight than the folding walker with wheels (E0143), but has a greater SPA than the folding walker with wheels under competitive bidding. The third price inversion involves the rigid walker without wheels (E0130), which has a greater SPA than the folding walker with wheels (E0143) despite having lower fee schedule amounts on average and a lower weight.

    Table 21—Round 2 (2016) Price Inversions for Purchase of Walkers

    Code    Item                             2015 Fee 1    Avg SPA 2
    E0130   Rigid Walker without Wheels      $64.97        $47.23
    E0135   Folding Walker without Wheels    78.97         43.05
    E0141   Rigid Walker with Wheels         107.89        75.03
    E0143   Folding Walker with Wheels       111.69        45.92

    1 Average of 2015 fee schedule amounts for all areas.
    2 Average of Round 2 2016 SPAs.

    In all cases, under the fee schedule, Medicare pays more for walkers with wheels than for walkers without wheels. This differential in payment amounts is significant because it reflects the fact that the wheels are a feature that likely resulted in higher fee schedule amounts for this item, making it more costly than the same type of walker without wheels. Rather than defining the ability of a walker to fold or the presence of wheels as a “hierarchal” feature, it can simply be noted that, under the fee schedule, Medicare pays more for walkers with the ability to fold than for walkers without the ability to fold and pays more for walkers with wheels than for walkers without wheels.

    If the items with additional features are more expensive and are also utilized more than the items without the features, a price inversion can result in a CBA due to the item weights and how they factor into the composite bids, as described above. Therefore, we proposed to adopt a definition of price inversion in our regulations at proposed § 414.402 as any situation where the following occurs: (a) One item (HCPCS code) in a grouping of similar items (for example, walkers, enteral infusion pumps, or power wheelchairs) in a product category includes a feature that another, similar item in the same product category does not have (for example, wheels, an alarm, or Group 2 performance); (b) the average of the 2015 fee schedule amounts (or initial, unadjusted fee schedule amounts for subsequent years for new items) for the code with the feature is higher than the average of the 2015 fee schedule amounts for the code without the feature; and (c) following a competition, the SPA for the code with the feature is lower than the SPA for the item without that feature (81 FR 42877). We proposed to classify this circumstance as a price inversion under competitive bidding that would be adjusted prior to revising the fee schedule amounts for the items (81 FR 42854). For this adjustment, we considered two methodologies.
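
    The proposed three-part test can be sketched as a simple comparison. The sketch below is illustrative only; the function name and data layout are hypothetical, and condition (a) (that one item has a feature the other lacks) is assumed to be satisfied by the pairing chosen by the caller.

def is_price_inversion(item_with_feature, item_without_feature):
    """Each argument is a dict holding an item's average 2015 (unadjusted) fee
    schedule amount and its SPA from a competition. The caller is responsible
    for pairing items where one has a feature the other lacks (condition (a))."""
    return (item_with_feature["avg_2015_fee"] > item_without_feature["avg_2015_fee"]
            and item_with_feature["spa"] < item_without_feature["spa"])

# Example using the national-average walker figures shown in Table 21 above:
e0143 = {"avg_2015_fee": 111.69, "spa": 45.92}  # folding walker with wheels
e0141 = {"avg_2015_fee": 107.89, "spa": 75.03}  # rigid walker with wheels
print(is_price_inversion(e0143, e0141))  # True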

    The first method we considered for addressing price inversions (method 1) uses the methodologies at 42 CFR 414.210(g)(6) and limits the SPA for the code without the feature to the SPA for the code with the feature before the SPA is used to adjust the fee schedule amounts for the item (81 FR 42854). For example, under the Round 2 Recompete, the SPA for code E0141 for the South Haven-Olive Branch, MS CBA is $106.52. Code E0143 describes the same type of walker, but code E0143 walkers fold, while code E0141 walkers are rigid and do not fold. However, under the Round 2 Recompete, the SPA for code E0143 (wheeled walkers that fold) for the South Haven-Olive Branch, MS CBA is $44.00, or $62.52 less than the SPA for E0141 (wheeled walkers that do not fold). The averages of the 2015 fee schedule amounts for codes E0141 and E0143 are $107.89 and $111.69, respectively. Altogether, since (a) one walker in a product category includes a feature that another, similar walker in the same product category does not have (in this situation, the ability to fold); (b) the average of the 2015 fee schedule amounts for the folding walker (E0143) is higher than the average of the 2015 fee schedule amounts for the rigid walker (E0141); and (c) the SPA for the folding walker ($44.00) is lower than the SPA for the rigid walker ($106.52), these items would meet the proposed definition of a price inversion under the DMEPOS CBP. Under method 1, the SPA of $106.52 for code E0141 in this CBA would be adjusted to the SPA of $44.00 for code E0143 in this CBA, so that $44.00, rather than $106.52, would be used for this CBA in computing the regional price for code E0141 described in § 414.210(g)(1)(i) under the method used to adjust the fee schedule amounts for code E0141. To further illustrate how method 1 would work, the 2016 SPAs for codes E0130, E0135, E0141, and E0143 for the Akron, Ohio CBA, and the amounts they would be adjusted to before applying the fee schedule adjustment methodologies, are listed in Table 22 below.

    Table 22—Adjustment of 2016 SPAs for Purchase of Walkers for Akron, OH To Eliminate Price Inversions With Method 1

    Code    Item                             2015 Fee 1    2016 SPA    Adjusted amount 2
    E0130   Rigid Walker without Wheels      $64.97        $50.85      $44.88
    E0135   Folding Walker without Wheels    78.97         44.88       n/a
    E0141   Rigid Walker with Wheels         107.89        84.82       48.62
    E0143   Folding Walker with Wheels       111.69        48.62       n/a

    1 Average of 2015 fee schedule amounts for all areas.
    2 The SPA would be adjusted to this amount before making adjustments to the fee schedule.
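
    A minimal sketch of the method 1 adjustment, using the Akron, OH figures from Table 22, follows. It is illustrative only and is not regulation text; the function name is hypothetical.

def apply_method_1(spa_without_feature, spa_with_feature):
    """Limit the SPA for the code without the feature to the SPA for the code
    with the feature before either SPA is used to adjust fee schedule amounts."""
    return min(spa_without_feature, spa_with_feature)

# Akron, OH figures from Table 22: the E0141 and E0130 SPAs are capped at the
# SPAs for the corresponding codes with the folding feature (E0143 and E0135).
adjusted_e0141 = apply_method_1(84.82, 48.62)  # 48.62
adjusted_e0130 = apply_method_1(50.85, 44.88)  # 44.88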

    The method 1 approach is currently used for enteral infusion pumps and standard power wheelchairs at § 414.210(g)(6), and each price inversion correction is made for a set of two items, as described in the regulation. For example, § 414.210(g)(6)(ii) states that in situations where a single payment amount in a CBA for a Group 1, standard, sling/solid seat and back power wheelchair is greater than the single payment amount in the same CBA for a Group 2, standard, sling/solid seat and back power wheelchair, the single payment amount for the Group 1, standard, sling/solid seat and back power wheelchair is adjusted to be equal to the single payment amount for the Group 2, standard, sling/solid seat and back power wheelchair prior to applying the payment adjustment methodologies in the section. We stated in the proposed rule that, if method 1 were finalized, the additional price inversions involving additional sets of two items to which this rule would apply would be identified in a table in the preamble of the final rule (81 FR 42854). An example of such a table is provided below in Table 23 using codes for walkers, seat lift mechanisms, and TENS devices:

    Table 23—Additional Price Inversions Subject to 42 CFR 414.210(g)(6)

    Item       Code without feature(s)    Code with feature(s)    Feature(s)               Adjustment
    Walker     E0130                      E0135                   Folding                  E0130 SPA adjusted not to exceed (NTE) SPA for E0135.
    Walker     E0141                      E0143                   Folding                  E0141 SPA adjusted NTE SPA for E0143.
    Walker     E0130                      E0143                   Folding, Wheels          E0130 SPA adjusted NTE SPA for E0143.
    Walker     E0135                      E0143                   Wheels                   E0135 SPA adjusted NTE SPA for E0143.
    Seat Lift  E0629                      E0627 1                 Powered                  E0629 SPA adjusted NTE SPA for E0627.
    Seat Lift  E0629                      E0628 1                 Powered                  E0629 SPA adjusted NTE SPA for E0628.
    TENS       E0720                      E0730                   Two Additional Leads     E0720 SPA adjusted NTE SPA for E0730.

    1 Codes E0627 and E0628 both describe powered electric seat lift mechanisms. Code E0627 describes powered seat lift mechanisms incorporated into non-covered seat lift chairs.

    The second method we considered and proposed (method 2) would limit the SPAs in situations where price inversions occur so that the SPAs for all of the similar items, both with and without certain features, are limited to the weighted average of the SPAs for the items based on the item weights assigned under competitive bidding (81 FR 42855). This approach would factor in the supplier bids for the lower volume and higher volume items. This would establish one payment for similar types of items that incorporates the volume and weights for items furnished prior to the unbalanced bidding and resulting price inversions. To illustrate how method 2 would work, the 2016 SPAs for codes E0130, E0135, E0141, and E0143 for the Vancouver, WA CBA, and the amounts they would be adjusted to before applying the fee schedule adjustment methodologies using the weights from Round 2 Recompete are listed in Table 24.

    Table 24—Adjustment of 2016 SPAs for Purchase of Walkers for Vancouver, WA To Eliminate Price Inversions With Method 2

    Code    Item                             2015 Fee 1    2016 SPA    Round 2 Recompete item weight (%)    Adjusted amount 2
    E0130   Rigid Walker without Wheels      $64.97        $51.62      0.1                                  $45.53
    E0135   Folding Walker without Wheels    78.97         47.65       4.8                                  45.53
    E0141   Rigid Walker with Wheels         107.89        81.62       0.5                                  45.53
    E0143   Folding Walker with Wheels       111.69        45.22       94.6                                 45.53

    1 Average of 2015 fee schedule amounts for all areas.
    2 The SPA would be adjusted to this amount before making adjustments to the fee schedule.

    The item weights from the Round 2 Recompete for the four walker codes in this subcategory of walkers in the table above are 0.1 percent for E0130, 4.8 percent for E0135, 0.5 percent for E0141, and 94.6 percent for E0143. The weighted average of the SPAs for the four walker codes would be $45.53 ($51.62 × 0.001 + $47.65 × 0.048 + $81.62 × 0.005 + $45.22 × 0.946). This weighted average SPA would be used to adjust the fee schedule amounts for these four codes, rather than simply limiting the SPAs for codes E0130 and E0141 to the SPAs for codes E0135 and E0143 shown in Table 24 above. This method uses item weights in a product category to adjust the SPA before making adjustments to the fee schedule amount. In accordance with the proposed definition of a price inversion, (a) E0135 and E0143 include features that other, similar walkers in the same product category do not (the ability to fold); (b) the averages of the 2015 fee schedule amounts for the folding walkers (E0135 and E0143) are higher than the averages of the 2015 fee schedule amounts for the rigid walkers (E0130 and E0141); and (c) the 2016 SPAs for the folding walkers were less than the SPAs for the respective rigid walkers. Therefore, the SPAs for codes E0130 and E0135 were inverted such that the SPA for code E0130 is higher than the SPA for code E0135; the SPAs for codes E0141 and E0143 were inverted such that the SPA for code E0141 is higher than the SPA for code E0143; and the SPAs for codes E0135 and E0143 were inverted such that the SPA for code E0135 is higher than the SPA for code E0143. Under the proposed method 2, these three price inversions would be addressed so that the SPAs for all of the similar items described by codes E0130, E0135, E0141, and E0143 in this CBA would be adjusted to the weighted average of the SPAs for these codes in this CBA. As a result, the adjusted SPA of $45.53, rather than $51.62, $47.65, $81.62, and $45.22, would be used to compute the regional price for codes E0130, E0135, E0141, and E0143, respectively, under method 2 and in accordance with § 414.210(g)(1)(i).
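
    A minimal sketch of the method 2 calculation, using the Vancouver, WA figures and Round 2 Recompete item weights from Table 24, follows. It is illustrative only and is not regulation text; the function name is hypothetical.

def apply_method_2(spas, weights):
    """Replace each SPA in the grouping with the weighted average of the
    grouping's SPAs, using the item weights assigned under the competition."""
    weighted_average = sum(spas[code] * weights[code] for code in spas)
    return {code: round(weighted_average, 2) for code in spas}

# Vancouver, WA SPAs and Round 2 Recompete item weights from Table 24.
spas = {"E0130": 51.62, "E0135": 47.65, "E0141": 81.62, "E0143": 45.22}
weights = {"E0130": 0.001, "E0135": 0.048, "E0141": 0.005, "E0143": 0.946}
print(apply_method_2(spas, weights))  # every code maps to about 45.53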

    Although we believe that both method 1 and method 2 would correct inverted SPAs, method 1 simply limits the amount paid for the item without a feature(s) to the amount paid for the item with the feature(s), while method 2 factors in the SPAs for all of the items. Therefore, if the cost of an item without a feature were actually more than the cost of an item with a feature (for example, if volume discounts for the item with the feature drive its price down below the price for the item without the feature), method 1 would not allow the higher cost of the item without the feature to be factored into the payment made to the suppliers of the items. We therefore proposed to use method 2 because it takes into account the supplier bids for all of the similar items when establishing the payment amounts used to adjust fees and therefore factors in contemporary information relative to bids and supplier information for various items with different features and costs (81 FR 42855). The SPAs established based on supplier bids for all of the similar items are used to calculate the weighted average. If, for some reason, the market costs for an item without a feature are actually higher than the market costs for an item with the feature, due to economies of scale, supply and demand, or other economic factors, these costs are accounted for in the weighted average of the SPAs established for each of the similar items. Under method 1, the SPA for the lower weight item without a feature is limited to the SPA for the higher weight item with the feature, and so potential cost inversions driven by market forces or supplier costs are not accounted for in establishing the adjusted payment amounts. We solicited comments on both method 2, which we proposed, and method 1, which we considered.

    In summary, we proposed to expand use of the method at § 414.210(g)(6) to other situations where price inversions occur under CBPs. First, we proposed to revise 42 CFR 414.402 to add the definition of price inversion as any situation where the following occurs (81 FR 42856, 42877):

    • One item (HCPCS code) in a grouping of similar items (for example, walkers, enteral infusion pumps or power wheelchairs) in a product category includes a feature that another, similar item in the same product category does not have (for example, wheels, alarm, or Group 2 performance);

    • The average of the 2015 fee schedule amounts (or initial, unadjusted fee schedule amounts for subsequent years for new items) for the code with the feature is higher than the average of the 2015 fee schedule amounts for the code without the feature; and

    • The SPA in 2016 or any subsequent year for the code with the feature is lower than the SPA for the code without that feature.

    Second, we proposed to revise § 414.210(g)(6) to specify that, in situations where price inversions occur under a CBP, the SPAs for the items would be adjusted before applying the fee schedule adjustment methodologies under § 414.210(g) (81 FR 42877). We proposed that the adjustments to the SPAs would be made using method 2 described above (81 FR 42855). We also proposed changes to the regulation text at § 414.210(g)(6) to reflect use of method 2 to adjust the SPAs for all of the similar items where price inversions have occurred, both with and without certain features, so that they are limited to the weighted average of the SPAs for the items in the product category in the CBA before applying the fee schedule adjustment methodologies under § 414.210(g) (81 FR 42856, 42877). We proposed to apply this rule to price inversions as defined in the proposed rule for the groupings of similar items listed in Table 18 of the proposed rule and identified again below in Table 25 (81 FR 42856). For the purpose of calculating the weighted average at proposed § 414.210(g)(6)(iii), we proposed to add a definition of “total nationwide allowed services” at § 414.202, to mean the total number of services allowed for an item furnished in all states, territories, and the District of Columbia where Medicare beneficiaries reside and can receive covered DMEPOS items and services (81 FR 42856, 42877). We proposed to define the weight for each code in a grouping of similar items at § 414.210(g)(6)(iii) for purposes of calculating the weighted average as the proportion of the total nationwide allowed services for the code for claims with dates of service in calendar year 2012 relative to the total nationwide allowed services for each of the other codes in the grouping of similar items for claims with dates of service in calendar year 2012. We proposed to use data from calendar year 2012 because this is the most recent calendar year that includes data for items furnished before implementation of Round 2 of the CBP and the beginning of the price inversions (81 FR 42856). The weights reflect the frequency with which covered items in a grouping of similar items were furnished in calendar year 2012 on a national basis relative to other items in the grouping.
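
    The proposed weight calculation reduces to dividing each code's total nationwide allowed services by the total for the grouping. The short sketch below is illustrative only; it reuses the 2014 walker counts from Table 20 merely to show the arithmetic, since calendar year 2012 counts are not reproduced in this section.

# Allowed-service counts are the 2014 walker figures from Table 20, used here
# only to show the arithmetic; the rule as finalized uses calendar year 2012 data.
allowed_services = {"E0130": 59, "E0135": 5053, "E0141": 455, "E0143": 95939}
total = sum(allowed_services.values())
weights = {code: count / total for code, count in allowed_services.items()}
# weights["E0143"] is roughly 0.945 of the grouping's total nationwide allowed services.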

    Table 25—Groupings of Similar Items

    Grouping of similar items     HCPCS codes 1
    Enteral Infusion Pumps        B9000, B9002.
    Hospital Beds                 E0250, E0251, E0255, E0256, E0260, E0261, E0290, E0291, E0292, E0293, E0294, E0295, E0301, E0302, E0303, E0304.
    Mattresses and Overlays       E0277, E0371, E0372, E0373.
    Power Wheelchairs             K0813, K0814, K0815, K0816, K0820, K0821, K0822, K0823.
    Seat Lift Mechanisms          E0627, E0628, E0629.
    TENS Devices                  E0720, E0730.
    Walkers                       E0130, E0135, E0141, E0143.

    1 The descriptions for each HCPCS code are available at: https://www.cms.gov/Medicare/Coding/HCPCSReleaseCodeSets/Alpha-Numeric-HCPCS.html.

    C. Response to Comments on the Method for Adjusting DMEPOS Fee Schedule Amounts for Similar Items With Different Features Using Information From Competitive Bidding Programs

    We solicited comments on the method for adjusting DMEPOS fee schedule amounts for similar items with different features using information from competitive bidding programs and received 8 public comments on our proposals, including comments from DMEPOS manufacturers and suppliers.

    The comments and our responses to the comments for these proposals are set forth below.

    Comment: Some commenters suggested that there are underlying or additional issues contributing to price inversions and suggested that CMS analyze the history of the product group and how payment rates for the applicable codes were originally established. Commenters suggested that other factors may have caused price inversions, such as the method used to “gap-fill” fee schedule amounts for items when the data mandated by the statute for calculating the fee schedule amounts does not exist, awarding contracts under the CBP based on composite bids (individual bids for items multiplied by item weights), and establishing single payment amounts under the CBP based on the median of bids submitted. Some commenters suggested that these underlying issues should be addressed and the competitions recompeted in order to resolve the situation.

    Response: We appreciate the comments but do not agree. The fee schedule amounts for walkers, TENS devices, and hospital bed codes E0250 through E0261 were established based on average reasonable charges from 1986 and 1987 as mandated by section 1834(a) of the Act. The fee schedule amounts for these items, based on suppliers' average reasonable charges, are higher as more features are added to the items (for example, wheels, folding, four leads rather than two leads, with mattress, variable height, and semi-electric). The fee schedule amounts for hospital beds without side rails (for example, E0294) were gap-filled using the fee schedule amounts for hospital beds with side rails (for example, E0260) and subtracting the fee schedule amounts for side rails (E0305 and E0310). We do not agree that the establishment of fee schedule amounts contributed to price inversions since the fee schedule amounts increased with the addition of a feature when fees were established under both the reasonable charge and gap-filling methodologies. The fee schedule amounts for heavy duty hospital beds (E0301 through E0304) were established based on manufacturer suggested retail prices and are higher than the fee schedule amounts for the standard weight versions of these beds to reflect the ability to accommodate heavier patients. The fee schedule amounts for electric and non-electric seat lift mechanisms are very similar, with the fee schedule amounts for electric seat lift mechanisms being slightly higher than the fee schedule amounts for the seat lift mechanisms without the power feature. The fee schedule amounts for power wheelchairs are based on manufacturer suggested retail prices, and in no case does the fee schedule amount for a Group 1 power wheelchair exceed the fee schedule amount for the Group 2 version of the same type of power wheelchair. The fee schedule amounts for enteral infusion pumps (code B9000 for the pump without alarm and code B9002 for the pump with alarm) are the same. For hospital beds, power wheelchairs, and enteral infusion pumps, in no case was a fee schedule amount for an item without a feature established so that it exceeded the fee schedule amount for an item with the feature. For this reason, we do not believe that the methods used to establish fee schedules contributed to price inversions. The fee schedule amounts for Group 2 support surfaces (mattresses and overlays) are addressed below. We do not believe that using composite bids to select suppliers for contract award or median bids to establish single payment amounts under the competitive bidding program are underlying causes of the price inversions.

    The practice of establishing single payment amounts based on the median of bids (as opposed to the highest bid) is applied consistently to each item in the product category and reflects the bids of all of the winning suppliers rather than just one. It is also similar to how the DME fee schedule amounts were initially established for each item, either based on average reasonable charges or average supplier prices (as opposed to the highest charge or price). We fail to see how establishing SPAs under the CBP using median bid amounts is an underlying cause of price inversions. We believe that use of composite bids is necessary when a competition under the CBP is for a group of items versus a single item. It is the method used to determine which bids are the most competitive (that is, generate the most savings) for the items in the product category as a whole. Use of a composite bid would not be necessary if the competition under the program were for a single item (for example, for one HCPCS code for oxygen and oxygen equipment used to bill and receive payment for all items and services furnished on a monthly basis related to oxygen and oxygen equipment). Therefore, we do not believe price inversions are caused by use of composite bids and item weights alone. Based on our analysis and the examples we discussed previously, we believe the problem results when there are multiple codes for items that can be substituted for one another because they serve the same general purpose (for example, standard power wheelchairs), but have different item weights that may vary significantly. As discussed in the proposed rule, price inversions result under the CBP when different item weights are assigned to similar items with different features within the product category. To prevent this from occurring under future competitions, we proposed an alternative “lead item” bidding method, discussed in this final rule in the section on submitting bids and determining single payment amounts for certain groupings of similar items with different features under the DMEPOS CBP (81 FR 42862).

    In the interim period before the new bidding method that we are adopting in this final rule can be implemented, we must maintain the contracts and payment amounts currently in effect, as required by section 1847(a) of the Act. We do not believe that other changes are necessary to address price inversions during this interim period. Under the final regulation, we will adjust inverted SPAs prior to adjusting the fee schedule amounts for the items that are specifically listed in the rule.

    Comment: Some commenters suggested that a definition be established for a “grouping of similar items or products” to require that all items included in the grouping be comprised of items with the exact same features or some subset of those features. A few commenters suggested further sub-groupings of items into smaller groups with similar features, such as a separate grouping for heavy duty hospital beds. These commenters also suggested that a definition be established for “product feature(s)” to require that feature(s) differentiating products within the group subject to the rule provide additional functional or clinical necessity.

    Response: We appreciate the comments but do not agree that either definition is necessary because the specific groupings of items, and the specific items within each grouping, that would be subject to the proposed rule were listed in the proposed rule, and the definition of price inversion included in the proposed rule identifies the situations where the SPAs for these items would be considered inverted.

    We do not believe that a definition of product feature(s) is needed because we believe that situations where one item includes a certain feature and another item does not include that feature are clear, and Medicare generally should not pay more for the item without the feature than for the item with the feature. Items without a feature should be paid less than or the same as items with the feature because the addition of a feature adds value to an item. We believe, for example, that the Medicare payment rate for a non-electric hospital bed with side rails and mattress should not be higher than the payment rate for a semi-electric hospital bed with side rails and mattress. Otherwise, the Medicare program would be paying more for the item with fewer features, the non-electric bed. Likewise, we believe the Medicare payment rate for a semi-electric hospital bed without a mattress should not be higher than the Medicare payment rate for a semi-electric hospital bed with a mattress.

    We do not believe that establishing smaller “subgroupings” of items is necessary because the groupings of items relate to items for which existing price inversions have been identified for two or more of the codes in at least one CBA. In some cases, a code in a grouping may not be involved in a price inversion with another code in the grouping, and no adjustment to the SPAs for the two codes is therefore necessary. In the case of heavy duty hospital beds, we have not determined that any price inversions have occurred where the SPA for a standard weight bed exceeds the SPA for a heavy duty version of the same bed. As such, there would be no situation where an SPA for a heavy duty bed would be adjusted using a weighted average of an SPA for a standard weight bed and an SPA for a heavy duty bed. The price inversions that have occurred for heavy duty beds within the grouping of codes for hospital beds have involved situations where the SPA for a heavy duty bed without a mattress is higher than the SPA for the same type of heavy duty bed, with the exact same features, furnished with a mattress. The changes we are finalizing to the regulation for addressing this situation are to adjust the SPAs for both heavy duty beds based on the weighted average of the SPAs for both heavy duty beds. The SPAs for standard weight beds would not be affected by this adjustment. Therefore, we are finalizing as we proposed.

    Comment: Some commenters believe that the grouping for mattresses and overlays (HCPCS codes E0277, E0371, E0372, and E0373) should not be subject to the rule. The commenters believe that there may be valid reasons why the cost of a non-powered mattress or overlay falling under the general category of Group 2 support surfaces may be higher than the cost of a powered mattress or overlay falling under the same general category. For example, a non-powered mattress or overlay product cannot be billed to Medicare until it has been classified under a HCPCS code by the Medicare Pricing, Data Analysis, and Coding (PDAC) contractor; these are costs that a powered mattress or overlay system does not incur. The commenters stated that there is no evidence that the powered systems are more effective or superior to the non-powered mattresses and overlays.

    Response: We appreciate the comments; however, we do not agree. The fee schedule amounts for all four codes for Group 2 support surfaces (E0277, E0371, E0372, and E0373) were established from 1992 to 1996 using the same gap-filling methodology. Manufacturer suggested retail prices from the same general timeframe were used for the various products falling under each code. The fee schedule amounts for the Group 2 overlays (E0371 and E0372), established in 1996 initially as codes K0413 and K0414, respectively, and the non-powered mattress (E0373), established in 1997 initially as code K0464, did not exceed the fee schedule amounts for powered mattress code E0277, but would have been limited to the fees for code E0277 if they had exceeded those amounts. The position of CMS in 1996 and 1997 and today is that the fee schedule amounts for overlays should not exceed the fee schedule amounts for mattresses, and that the fee schedule amounts for a non-powered Group 2 mattress should not exceed the fee schedule amounts for a powered Group 2 mattress. The addition of power, or of a complete mattress rather than an overlay that sits on top of a standard mattress, is recognized as an additional feature. This position is supported by the structure of the fee schedule amounts for Group 1 support surfaces calculated using average reasonable charges from 1986 and 1987: the fee schedule amounts for Group 1 mattresses are higher than the fee schedule amounts for Group 1 overlays, and the fee schedule amounts for powered overlays are higher than the fee schedule amounts for non-powered overlays. We believe our proposal, which we are finalizing as proposed, provides a solution to address price inversions for this grouping of items that is necessary to avoid the risk that beneficiaries receive items with less functionality (for example, a non-powered overlay) and are prevented from accessing items with more functionality (for example, a powered mattress system), only because the payment amounts for the non-powered items are higher than the payment amounts for the powered items or, as has occurred in 128 out of 130 competitive bidding areas, because the payment amounts for a non-powered overlay (a support surface that is neither powered nor mattress size) are higher than the payment amounts for a powered mattress system. The cost incurred to have a product code verified by the PDAC under code E0371 or E0373 is a one-time, insignificant cost and prevents products from being classified as Group 1 products, paid below $200 under the current fee schedule, rather than Group 2 products, paid at fee schedule amounts exceeding $3,000.

    Comment: Four of the eight commenters provided comments regarding the method to be used for adjusting SPAs in situations where price inversions have occurred. Three commenters preferred the proposed method 2, where a weighted average of the SPAs for the items involved in the price inversion is used to establish the payment amount for all of the items. The commenters favored this method because it takes into account the SPAs and supplier bids for all of the items involved in the price inversion rather than simply limiting the SPA for the lower volume item without a certain feature(s) to the SPA for the higher volume item with the feature(s). One commenter preferred alternative method 1, where the SPA for the lower volume item without a certain feature(s) is limited to the SPA for the higher volume item with the feature(s). Method 1 is the method in the regulations that currently addresses price inversions for enteral infusion pumps and standard power wheelchairs. This commenter stated that since method 2 calculates a weighted average single payment amount using the item volume weights for groupings of similar items assigned under competitive bidding, it has the potential to compound unintended consequences with the assumption that current pricing and volume using “total nationwide allowed services” for multiple products will be balanced by a weighted average.

    Response: We agree with the three commenters that method 2 should be used rather than method 1 for the reasons noted above. The weighted average approach takes into account the suppliers' bids for all of the items in the grouping of items and therefore addresses the commenters' concern that the supplier bids for the lower volume items be taken into account in setting the payment amounts for the items. We do not understand what the commenter who favored method 1 means by “compounding unintended consequences,” and so it is not clear why the commenter suggested method 1 over method 2.

    Final Rule Action: After consideration of the comments received on the proposed rule and for the reasons set forth previously, we are finalizing the proposed revisions to § 414.210(g)(6), with two technical changes. As a result of the administrative HCPCS editorial process, code B9000 for enteral infusion pumps without alarm is discontinued effective January 1, 2017. Since only one code (B9002), rather than a group of codes, will remain in the HCPCS for enteral infusion pumps, there will no longer be multiple codes for this category of items; the proposed grouping of enteral infusion pumps is therefore being removed from this section and not finalized. Similarly, a decision was made to discontinue HCPCS code E0628 for electric seat lift mechanisms effective January 1, 2017, and this code is therefore being removed from the grouping of seat lift mechanisms in this section and not finalized in the regulation. We are also finalizing the proposed definitions at § 414.402 of “price inversion” and “total nationwide allowed services.”

    VII. Submitting Bids and Determining Single Payment Amounts for Certain Groupings of Similar Items With Different Features Under the Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS) Competitive Bidding Program (CBP)

    A. Background on the DMEPOS CBP

    Medicare pays for most DMEPOS furnished after January 1, 1989, pursuant to fee schedule methodologies set forth in sections 1834 and 1842 of the Social Security Act (the Act). Specifically, subsections (a) and (h) of section 1834 and subsection (s) of section 1842 of the Act provide that Medicare payment for these items is equal to 80 percent of the lesser of the actual charge for the item or a fee schedule amount for the item. The regulations implementing these provisions are located at 42 CFR part 414, subparts C and D.

    Section 1847(a) of the Act, as amended by section 302(b)(1) of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) (Pub. L. 108-173), requires the Secretary to establish and implement CBPs in competitive bidding areas (CBAs) throughout the United States for contract award purposes for the furnishing of certain competitively priced DMEPOS items and services. Section 1847(b)(5) of the Act directs the Secretary to base the SPA for each item or service in each CBA on the bids submitted and accepted in the CBP. For competitively bid items, the SPAs have replaced the fee schedule payment methodology. Section 1847(b)(5) of the Act provides that Medicare payment for these competitively bid items and services is made on an assignment-related basis and is equal to 80 percent of the applicable SPA, less any unmet Part B deductible described in section 1833(b) of the Act. Section 1847(b)(2)(A)(iii) of the Act prohibits the Secretary from awarding a contract to an entity in a CBA unless the Secretary finds that the total amounts to be paid to contractors in a CBA are expected to be less than the total amounts that would otherwise be paid. This requirement aims to guarantee savings to both the Medicare program and its beneficiaries.

    We implemented CBPs in 9 Round 1 metropolitan statistical areas on January 1, 2011, and in an additional 91 Round 2 metropolitan statistical areas on July 1, 2013. Bids were submitted during a 60-day bidding period, allowing suppliers adequate time to prepare and submit their bids. We then evaluated each submission and awarded contracts to qualified suppliers in accordance with the requirements of section 1847(b)(2) of the Act, § 414.414, which specifies conditions for awarding contracts, and § 414.416, which specifies how single payment amounts are established.

    B. Summary of the Proposed Provisions on Submitting Bids and Determining Single Payment Amounts for Certain Groupings of Similar Items With Different Features Under the DMEPOS Competitive Bidding Program

    The proposed rule, titled “End-Stage Renal Disease Prospective Payment System, Coverage and Payment for Renal Dialysis Services Furnished to Individuals with Acute Kidney Injury, End-Stage Renal Disease Quality Incentive Program, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program Bid Surety Bonds, State Licensure and Appeals Process for Breach of Contract Actions, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program and Fee Schedule Adjustments, Access to Care Issues for Durable Medical Equipment; and the Comprehensive End-Stage Renal Disease Care Model” (81 FR 42802 through 42880), was published in the Federal Register on June 30, 2016, with a comment period that ended on August 23, 2016. Under the heading of Submitting Bids and Determining Single Payment Amounts for Certain Groupings of Similar Items with Different Features under the DMEPOS CBP, we proposed to establish an alternative bidding method in proposed § 414.412(d)(2) that could be used to avoid the price inversions discussed above in section VI of the proposed rule (81 FR 42877). Under this alternative bidding method, one item in the grouping of similar items would be the lead item for the grouping for bidding purposes. The item in the grouping with the highest allowed services during a specified base period, as detailed below, would be considered the lead item of the grouping (81 FR 42858 through 42859). For purposes of this final rule, the lead item bidding method described below only applies to the groupings of similar items with different features identified in this rule, and does not apply to other items not listed in this rule that may be in the same product category as the items listed in this rule.

    For each grouping of similar items, we proposed that the supplier's bid for the lead item would be used as the basis for calculating the SPAs for the other items within that grouping, based on the ratio of the average of the fee schedule amounts for each item for all areas nationwide in 2015 to the average of the fee schedule amounts for the lead item for all areas nationwide in 2015 (81 FR 42859, 42878). In proposed § 414.412(d)(2), we proposed to use the fee schedule amounts for 2015 for the purpose of maintaining the relative difference in fee schedule amounts for the items in each grouping as it existed prior to any adjustments being made to the amounts based on information from the CBPs (81 FR 42877). This prevents the price inversions that have occurred in pricing items under the CBP from affecting the relative difference in fee schedule amounts for the items. Under the CBP, we found price inversions for groupings of similar items within the following categories: Standard power wheelchairs, walkers, hospital beds, enteral infusion pumps, TENS devices, support surface mattresses and overlays, and seat lift mechanisms. These groupings of similar items are a subset of similar items with different features identified in this rule, as opposed to entire product categories.
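
    Expressed as a formula (the notation below is illustrative only and does not appear in the rule), the SPA for a non-lead item in a grouping would be derived from the SPA for the lead item as:

    $$\mathrm{SPA}_{\text{item}} = \mathrm{SPA}_{\text{lead}} \times \frac{\bar{F}^{\,2015}_{\text{item}}}{\bar{F}^{\,2015}_{\text{lead}}}$$

    where $\bar{F}^{\,2015}$ denotes the average of the 2015 fee schedule amounts for all areas nationwide and $\mathrm{SPA}_{\text{lead}}$ is the single payment amount calculated from the bids submitted and accepted for the lead item.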

    Under the proposed lead item bidding method, a supplier submits one bid amount for furnishing all of the items in the grouping (for example, standard power wheelchairs), rather than submitting bid amounts for each individual HCPCS code describing each different item (81 FR 42859). The competitive bidding item in this case (for example, standard power wheelchairs) is a combination of HCPCS codes (for example, K0813 through K0829) for power wheelchairs with different features (Group 1/Group 2, portable/standard weight/heavy duty weight/very heavy duty weight/extra heavy duty weight, sling seat/captains chair). Suppliers submitting bids under the method would understand that if their bid is in the winning range, it would be used to establish the single payment amounts for all of the codes in the grouping. Suppliers would therefore take into account the cost of furnishing all of the items described by the various codes when determining their bid amount for the lead item. Thus, to avoid cases of price inversions, the supplier is submitting a bid for an item (for example, standard power wheelchair), and for lead item bidding purposes, an “item” is a product that is identified by a combination of codes, as described in § 414.402. We also believe that the proposed lead item bidding method would greatly reduce the burden on suppliers of formulating and submitting multiple bids for similar items because it would require less time to enter their bids and would reduce the chances of keying errors when submitting bids. The lead item bidding method is intended to prevent future price inversions for a grouping of similar items, including codes for items (for example, total electric hospital beds) where price inversions have not occurred thus far, but where we believe price inversions would be likely based on information about the fee schedule amounts and the utilization of these items. Applying the lead item bidding method to all hospital beds, including total electric hospital beds, prevents price inversions from occurring for all hospital beds. We also believe it is a more efficient method for implementing CBPs and pricing.

    To identify the lead item, we proposed using allowed services from calendar year 2012 for the first time this bidding method is used for specific items in specific CBAs (81 FR 42859). We did not observe price inversions under the Round 1 competitions and contracts that were in effect from January 2011 through December 2013. The price inversions began with the Round 2 competitions and contracts that began on July 1, 2013; therefore, we proposed using data for allowed services from calendar year 2012 to ensure that the effects of price inversions do not affect the utilization data for the various items that are used to identify the lead item. Once this bidding method has been used in all competitions for an item (for example, standard power wheelchairs), we proposed that the lead item would be identified for future competitions based on allowed services for the items at the time the subsequent competitions take place rather than the allowed services from calendar year 2012. In other words, using allowed services from calendar year 2012 is necessary to identify the lead items initially because utilization of items for years after 2012 could be affected by the price inversions that began with the Round 2 competitions and contracts on July 1, 2013. Once the lead item bidding method is implemented for a grouping of similar items, and the price inversions are eliminated, utilization of items for years after the point at which the price inversions are eliminated can be used for the purpose of identifying the lead item because that utilization would not be affected by price inversions. This will also help to prevent price inversions in adjusted fee schedule amounts using competitive bidding SPAs. We proposed to announce which items would be subject to this bidding method at the start of each competition in each CBA where this bidding method is used (81 FR 42859).

    The following Tables 26, 27, and 28 show how the lead item for three groupings of similar items (standard power wheelchairs, walkers, and hospital beds, respectively) would be identified using 2012 allowed services and how the SPAs would be established based on the method described above. Under the proposal, when bidding for the lead item, a supplier is bidding to furnish the entire grouping of similar items. In the tables below, the lead items identified would be the lead items in initial competitions where the lead item bidding method is used. The first proposed category for lead item bidding is standard power wheelchairs (81 FR 42860).

    Table 26—Lead Item Bidding for Standard Power Wheelchairs and Relative Difference in Fees
    HCPCS | Features | Allowed services for 2012 | Average of 2015 rental fees | Fee relative to lead item
    K0823 (lead item) | Group 2 Captains Chair, Standard Weight | 1,108,971 | $578.51 | 1.00
    K0825 | Group 2 Captains Chair, Heavy Duty | 122,422 | 637.40 | 1.10
    K0822 | Group 2 Sling Seat, Standard Weight | 99,597 | 574.73 | 0.99
    K0824 | Group 2 Sling Seat, Heavy Duty | 10,609 | 696.23 | 1.20
    K0827 | Group 2 Captains Chair, Very Heavy Duty | 6,683 | 766.42 | 1.32
    K0814 | Group 1 Captains Chair, Portable | 6,287 | 443.98 | 0.77
    K0816 | Group 1 Captains Chair, Standard Weight | 2,176 | 484.14 | 0.84
    K0826 | Group 2 Sling Seat, Very Heavy Duty | 1,063 | 901.38 | 1.56
    K0821 | Group 2 Captains Chair, Portable | 1,048 | 475.55 | 0.82
    K0813 | Group 1 Sling Seat, Portable | 771 | 346.83 | 0.60
    K0815 | Group 1 Sling Seat | 545 | 505.52 | 0.87
    K0828 | Group 2 Sling Seat, Extra Heavy Duty | 114 | 993.20 | 1.72
    K0829 | Group 2 Captains Chair, Extra Heavy Duty | 105 | 912.06 | 1.58
    K0820 | Group 2 Sling Seat, Portable | 46 | 370.46 | 0.64

    Rather than submitting individual bids for each of the 14 items, the supplier would submit one bid for the lead item. The SPA for lead item K0823 would be based on the median of the bids for this code, following the rules laid out in § 414.416(b) and the rules for calculating rental amounts in § 414.408(h)(2). The SPAs for the other items would be based on the relative difference in fees for the other items as compared to the lead item. For example, if the SPA for code K0823 is $300.00, the SPA for code K0825 would be equal to $330.00, or $300.00 multiplied by 1.1. Similarly, if the SPA for code K0823 is $300.00, the SPA for code K0816 would be equal to $252.00, or $300.00 multiplied by 0.84. Suppliers submitting bids would be educated in advance that their bid for code K0823 is a bid for all 14 codes and would factor this into their decision on what amount to submit as their bid for the lead item. This would avoid price inversions and would carry over the relative differences in fees that establish Medicare payment amounts for standard power wheelchairs under the fee schedule into the CBPs. The second proposed category for lead item bidding is walkers, as shown in Table 27 below. Under our proposal, when bidding for the lead item, a supplier is bidding to furnish the entire grouping (81 FR 42860).

    Table 27—Lead Item Bidding for Walkers and Relative Difference in Fees
    HCPCS | Features | Allowed services for 2012 | Average of 2015 purchase fees | Fee relative to lead item
    E0143 (lead item) | Folding With Wheels | 958,112 | $111.69 | 1.00
    E0135 | Folding | 56,399 | 78.97 | 0.71
    E0149 | Heavy Duty With Wheels | 23,144 | 214.34 | 1.92
    E0141 | Rigid With Wheels | 6,319 | 107.89 | 0.97
    E0148 | Heavy Duty | 4,366 | 122.02 | 1.09
    E0147 | Heavy Duty With Braking & Variable Wheel Resistance | 4,066 | 551.98 | 4.94
    E0140 | With Trunk Support | 1,483 | 346.38 | 3.10
    E0144 | Enclosed With Wheels & Seat | 1,275 | 305.95 | 2.74
    E0130 | Rigid | 788 | 64.97 | 0.58

    Rather than submitting individual bids for each of the 9 items, the supplier would submit one bid for the lead item. The SPA for lead item E0143 would be based on the median of the bids for this code, following the rules laid out in § 414.416(b) and the rules for calculating rental and purchase amounts in § 414.408(f) and (h)(7). We proposed to add a new § 414.416(b)(3) that would set forth the lead item bidding method (81 FR 42860, 42878). The SPAs for the other items would be based on the relative difference in fees for each item compared to the lead item, following the rules for inexpensive or routinely purchased items at § 414.408(f) and (h)(7), and, for E0144, following the rules for capped rental items at § 414.408(h)(1). For example, if the SPA for purchase for code E0143 is $80.00, Medicare payment for rental of E0143 would be $8.00 per month in accordance with § 414.408(h)(7), and the SPA for purchase of a used E0143 walker would be $60.00. The SPAs for code E0135 would be equal to $56.80 ($80.00 multiplied by 0.71) for purchase of a new E0135 walker, $5.68 per month for rental of E0135, and $42.60 for purchase of a used E0135 walker. The SPAs for rental of code E0144 would be equal to $21.92 ($8.00 multiplied by 2.74) for rental months 1 through 3, and $16.44 for rental months 4 through 13. Suppliers submitting bids would be educated in advance that their bid for code E0143 is a bid for all 9 codes and would factor this into their decision on what amount to submit as their bid for the lead item. This would avoid price inversions and would carry over the relative differences in fees that establish Medicare payment amounts for walkers under the fee schedule into the CBPs.
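
    The following minimal sketch (Python, for illustration only) reproduces the walker example above. The variable names are illustrative, and the 10 percent monthly rental factor, the 75 percent used-purchase factor, and the 75 percent factor for capped rental months 4 through 13 are inferred from the dollar figures in the example rather than restated from the regulation text.

```python
# Illustrative sketch of the walker example above. Fee ratios are taken from Table 27;
# the percentage factors below are inferred from the worked example in the preamble text.
fee_relative_to_lead = {"E0143": 1.00, "E0135": 0.71, "E0144": 2.74}

spa_purchase_lead = 80.00  # example purchase SPA for the lead item E0143

def purchase_rental_used(code):
    """Return (new purchase SPA, monthly rental SPA, used purchase SPA) for a code."""
    purchase = round(spa_purchase_lead * fee_relative_to_lead[code], 2)
    rental = round(purchase * 0.10, 2)  # monthly rental: 10 percent of the purchase SPA
    used = round(purchase * 0.75, 2)    # used equipment: 75 percent of the new purchase SPA
    return purchase, rental, used

print(purchase_rental_used("E0143"))  # (80.0, 8.0, 60.0)
print(purchase_rental_used("E0135"))  # (56.8, 5.68, 42.6)

# E0144 is treated as a capped rental item in the example: the monthly rental SPA applies
# for months 1 through 3, and 75 percent of that amount applies for months 4 through 13.
rental_e0144 = round(8.00 * fee_relative_to_lead["E0144"], 2)   # 21.92
print(rental_e0144, round(rental_e0144 * 0.75, 2))              # 21.92 16.44
```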

    The third proposed category for lead item bidding is hospital beds as shown in Table 28. Under the proposal, when bidding for the lead item, a supplier is bidding to furnish the entire grouping (81 FR 42860 through 42861).

    Table 28—Lead Item Bidding for Hospital Beds and Relative Difference in Fees
    HCPCS | Features | Allowed services for 2012 | Average of 2015 rental fees | Fee relative to lead item
    E0260 (lead item) | Semi-Electric With Mattress & Side Rails | 2,201,430 | $134.38 | 1.00
    E0261 | Semi-Electric With Side Rails | 109,727 | 124.20 | 0.92
    E0303 | Heavy Duty Extra Wide With Mattress & Side Rails | 47,795 | 284.67 | 2.12
    E0265 | Total Electric With Mattress & Side Rails | 37,584 | 185.75 | 1.38
    E0255 | Variable Height With Mattress & Side Rails | 25,003 | 108.10 | 0.80
    E0250 | Fixed Height With Mattress & Side Rails | 15,075 | 88.95 | 0.66
    E0295 | Semi-Electric | 15,056 | 113.78 | 0.85
    E0294 | Semi-Electric With Mattress | 9,446 | 119.93 | 0.89
    E0301 | Heavy Duty Extra Wide With Side Rails | 6,075 | 252.96 | 1.88
    E0256 | Variable Height With Side Rails | 4,135 | 76.53 | 0.57
    E0304 | Extra Heavy Duty Extra Wide With Mattress & Side Rails | 2,448 | 737.98 | 5.49
    E0266 | Total Electric With Side Rails | 1,969 | 166.51 | 1.24
    E0251 | Fixed Height With Side Rails | 1,463 | 68.26 | 0.51
    E0297 | Total Electric | 957 | 129.68 | 0.97
    E0296 | Total Electric With Mattress | 955 | 148.29 | 1.10
    E0302 | Extra Heavy Duty Extra Wide With Side Rails | 732 | 685.28 | 5.10
    E0292 | Variable Height With Mattress | 305 | 76.97 | 0.57
    E0293 | Variable Height | 189 | 65.29 | 0.49
    E0290 | Fixed Height With Mattress | 64 | 67.29 | 0.50
    E0291 | Fixed Height | 7 | 48.85 | 0.36

    Rather than submitting individual bids for each of the 20 items, the supplier would submit one bid for the lead item. The SPA for lead item E0260 would be based on the median of the bids for this code, following the rules laid out in § 414.416(b) and the rules for calculating rental amounts in § 414.408(h)(1). The SPAs for the other items would be based on the relative difference in the average of the 2015 fee schedule amounts for each item compared to the lead item. For example, if the SPA for code E0260 is $75.00, the SPA for code E0261 would be equal to $69.00, or $75.00 multiplied by 0.92. Suppliers submitting bids would be educated in advance that their bid for code E0260 is a bid for all 20 codes and would factor this into their decision on what amount to submit as their bid for the lead item.
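
    The following minimal sketch (Python, for illustration only; the variable names and the two-decimal rounding are assumptions made here for illustration) reproduces the Table 28 ratio for code E0261 and the example SPA arithmetic above:

```python
# Illustrative sketch of the hospital bed example above, using two codes from Table 28.
# The average 2015 rental fees are taken from the table; rounding the ratio to two
# decimal places mirrors the "Fee relative to lead item" column.
avg_2015_rental_fee = {
    "E0260": 134.38,  # lead item (highest 2012 allowed services in the grouping)
    "E0261": 124.20,
}

lead = "E0260"
ratio_e0261 = round(avg_2015_rental_fee["E0261"] / avg_2015_rental_fee[lead], 2)  # 0.92

spa_lead = 75.00                              # example SPA for E0260 from the text
spa_e0261 = round(spa_lead * ratio_e0261, 2)  # 69.0, matching the example
print(ratio_e0261, spa_e0261)                 # 0.92 69.0
```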

    The fourth through seventh proposed categories for lead item bidding are shown in Table 29, Table 30, Table 31, and Table 32. Under our proposal, when bidding for the lead item, a supplier is bidding to furnish the entire grouping (81 FR 42861).

    Table 29—Lead Item Bidding for Enteral Infusion Pumps and Relative Difference in Fees
    HCPCS | Features | Allowed services for 2012 | Average of 2015 rental fees | Fee relative to lead item
    B9002 (lead item) | Pump with alarm | 265,890 | $121.70 | 1.00
    B9000 | Pump without alarm | 935 | 115.47 | 0.95

    Table 30—Lead Item Bidding for TENS Devices and Relative Difference in Fees
    HCPCS | Features | Allowed services for 2012 | Average of 2015 rental fees | Fee relative to lead item
    E0730 (lead item) | 4 lead | 267,428 | $402.70 | 1.00
    E0720 | 2 lead | 46,238 | 388.83 | 0.97

    Table 31—Lead Item Bidding for Support Surface Mattress/Overlay and Relative Difference in Fees
    HCPCS | Features | Allowed services for 2012 | Average of 2015 rental fees | Fee relative to lead item
    E0277 (lead item) | Powered mattress | 139,240 | $663.22 | 1.00
    E0372 | Powered air mattress overlay | 2,076 | 505.82 | 0.76
    E0371 | Nonpowered mattress overlay | 1,444 | 416.85 | 0.63
    E0373 | Nonpowered mattress | 716 | 576.84 | 0.87

    Table 32—Lead Item Bidding for Seat Lift Devices and Relative Difference in Fees
    HCPCS | Features | Allowed services for 2012 | Average of 2015 rental fees | Fee relative to lead item
    E0627 (lead item) | Electric, in chair * | 49,162 | $372.22 | 1.00
    E0629 | Non-electric | 5,901 | 366.70 | 0.99
    E0628 | Electric | 5,091 | 372.22 | 1.00
    * Chair excluded from coverage by section 1861(n) of the Act.

    In summary, we proposed to revise § 414.412(d) to add this bidding method as an alternative to the current method for submitting bid amounts for each item in the seven groupings of similar items identified above (81 FR 42862). Suppliers participating in future CBPs may be required to use this method when submitting bids for these groups of similar items. Also, we proposed to revise § 414.416(b)(3) to add the method for calculating SPAs for items within each grouping of similar items based on the SPAs for lead items within each grouping of similar items (81 FR 42878). We believe that the proposed method would better accomplish the CBP objectives, which include reducing the amount Medicare pays for DMEPOS and limiting the financial burden on beneficiaries by reducing their out-of-pocket expenses for DMEPOS they obtain through the CBP (72 FR 17996).

    We believe this approach to bidding would safeguard beneficiaries from receiving items with fewer features simply because of the price inversions. We also believe that the proposed lead item bidding method would greatly reduce the burden on suppliers of formulating and submitting multiple bids for similar items because it would require less time to enter bids and would reduce the chances of keying errors when submitting bids. Finally, we believe this approach would safeguard beneficiaries and the Medicare Trust Fund from paying higher amounts for items with fewer features.

    C. Response to Comments on Submitting Bids and Determining Single Payment Amounts for Certain Groupings of Similar Items With Different Features Under the DMEPOS Competitive Bidding Program

    We solicited comments on this section. We received 4 public comments on our proposals from medical device manufacturers and suppliers.

    The comments and our responses to the comments for these proposals are set forth below.

    Comment: One commenter believes that the lead item bidding method does not align with Congressional intent for basing payment for items under the competitive bidding program on bids submitted and accepted for a single item.

    Response: We believe that single payment amounts under the program are based on bids submitted and accepted for covered items and services described in section 1847(a)(2) of the Act. DMEPOS items and services are also described by HCPCS codes, which group covered items and services into categories for billing purposes. For the purpose of implementing the DMEPOS competitive bidding program, the definition of “item” at § 414.402 states that an item is a product that is identified by a HCPCS code or a combination of codes and/or modifiers. Therefore, we maintain that under the DMEPOS competitive bidding program, an item can be a group of HCPCS codes, such as a group of codes for similar items with different features under the proposed lead item bidding method. Under the lead item bidding method, suppliers take into account the cost of furnishing all of the covered items and services into their bid for the lead item, just as they would take into account the cost of furnishing a range of covered items and services described by a single HCPCS code, as HCPCS codes rarely describe a single DMEPOS product. One alternative to the lead item bidding method for eliminating price inversions under the DMEPOS competitive bidding program is to eliminate the multiple codes from the HCPCS for similar items with different features and establish a single code that describes all the items and services (for example, one code for “hospital bed, any type, includes all related accessories”). This is a long-term alternative we can consider in the future to address price inversions if we determine that there is no need for multiple codes for similar items.

    Comment: One commenter believes that it is unreasonable to keep constant the relative price difference among items under the fee schedule, as product prices could vary over time due to market factors and other reasons.

    Response: We appreciate these comments, but do not agree that the lead item bidding method would prevent suppliers from accounting for changes in costs for the items over time or that it is unreasonable to keep the relative difference in prices constant for the items and services identified in the proposed rule. If, for example, the costs of Group 1 power wheelchairs increase over time, suppliers can take these costs into account in submitting their bid for the lead item, a Group 2 power wheelchair, as their bid is used to calculate the payment amounts for all of the items in the grouping of similar items. If the costs of Group 1 power wheelchairs increase to the point where they cost more than a Group 2 power wheelchair, the supplier can elect to furnish the lower cost Group 2 power wheelchair instead, since this product would also meet the needs of the beneficiary. Alternatively, as a long-term solution, if we determine that there is not a need for multiple codes, the codes for the similar items with different features can be eliminated from the HCPCS and a single code can be established that describes all the items and services (for example, standard power wheelchair, any type). This would address the issue of price inversions as well, and the supplier would take into account the cost of furnishing the different types of standard power wheelchairs into their bid for the single code, just as they would under the lead item bidding method.

    Comment: Commenters suggested that (1) factors other than allowed services, such as allowed payment amounts for HCPCS codes, should be considered when determining lead items; (2) CMS analyze the features defined in the existing HCPCS codes; and (3) CMS segregate products that exceed the code requirements in clinically or functionally relevant ways to ensure beneficiaries do not lose access to necessary features.

    Response: We appreciate the comments but do not agree. These comments are based on the assumption that the presence or absence of a feature (for example, heavy duty versus non-heavy duty) is not sufficient to determine a pricing order for similar items (for example, hospital beds). As we indicated in the section for the method for adjusting DMEPOS fee schedule amounts for similar items with different features using information from CBPs, we do not believe that a Medicare fee schedule amount for an item without a certain feature(s) should exceed the Medicare fee schedule amount for the item with that feature(s). If products within a HCPCS code exceed the code requirements in clinically or functionally relevant ways, consideration can be made to revise the HCPCS codes to separately identify these products.

    Comment: One commenter wants CMS to make the process of determining the groupings and the lead item transparent and open for industry or stakeholder input.

    Response: We believe that the proposed rule is transparent in identifying the groupings of similar items and the lead item. We included a proposed definition of price inversion, a listing of codes representing groupings of similar items, and a method for determining the lead item in each grouping.

    Comment: One commenter wants CMS to consider the highest Medicare fee schedule amounts for the items when deciding upon a lead item.

    Response: We appreciate the comments but do not agree. We believe the item with the most allowed services in a group is the item used most often, and it therefore should be considered the lead item because it is likely to be the item that suppliers furnish more than any of the other items in the group of similar items. The item with the highest fee schedule amount may not be the item that suppliers furnish most often; however, in many cases the item with the highest fee schedule amount is also the item with the most allowed services in the group of similar items.

    Comment: One commenter specifically suggested that CMS consider heavy-duty items as a separate grouping when determining the lead item because they believed heavy duty items were more costly.

    Response: We believe that the presence or absence of a feature can be used to determine the pricing order for similar items with different features. We believe that all hospital beds are similar items used for the same purpose and that the heavy duty feature (the ability to accommodate heavier patients) is clearly an additional feature. We see no reason to single out this feature (heavy duty) from other features as warranting a separate category of hospital beds. There is no evidence that heavy duty items are so much more costly than the other items in the grouping of hospital beds that a separate grouping is warranted. We believe it is more efficient to include these items in the grouping of hospital beds so that suppliers do not have to enter additional bids for these items, which would increase the chance of keying errors.

    Final Rule Action: After consideration of comments received on the proposed rule and for the reasons we articulated, we are finalizing our policy for submitting bids and determining single payment amounts for certain groupings of similar items with different features under the DMEPOS CBP (alternative bidding methodology), with two technical changes. We are finalizing the provisions of § 414.412 to add the lead item bidding method described above to prevent price inversions under the DMEPOS CBPs. This method replaces the current method of bidding only for the select groups of similar items identified in the final regulation. A decision was made as part of the administrative HCPCS editorial process to discontinue code B9000 for enteral infusion pumps without alarm, effective January 1, 2017. Since only one code (B9002), rather than a group of codes, will remain in the HCPCS for enteral infusion pumps, there will no longer be multiple codes for this category of items, and the proposed grouping of enteral infusion pumps is therefore being removed and not finalized in § 414.412(d). Similarly, a decision was made to discontinue HCPCS code E0628 for electric seat lift mechanisms, effective January 1, 2017, and this code is therefore being removed from the grouping of seat lift mechanisms and not finalized in § 414.412(d).

    VIII. Bid Limits for Individual Items Under the Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS) Competitive Bidding Program (CBP) A. Background

    Under the DMEPOS CBP, Medicare sets payment amounts for selected DMEPOS items and services furnished to beneficiaries in CBAs based on bids submitted and accepted by Medicare. Section 1847(b)(5) of the Act provides that Medicare payment for these competitively bid items and services is made on an assignment-related basis and is equal to 80 percent of the applicable SPA, less any unmet Part B deductible described in section 1833(b) of the Act. Section 1847(b)(2)(A)(iii) of the Act prohibits the Secretary from awarding a contract to an entity unless the Secretary finds that the total amounts to be paid to contractors in a CBA are expected to be less than the total amounts that would otherwise be paid. This requirement guarantees savings to both the Medicare program and its beneficiaries. The CBP also includes provisions to ensure beneficiary access to quality DMEPOS items and services: Section 1847 of the Act directs the Secretary to award contracts to entities only after a finding that the entities meet applicable quality and financial standards and beneficiary access to a choice of multiple suppliers in the area is maintained.

    We implemented Round 1 of the DMEPOS CBP on January 1, 2011, and the Round 1 Recompete on January 1, 2014. Round 2 of the DMEPOS CBP and the national mail order program were implemented on July 1, 2013, and the Round 2 and national mail order Recompetes were implemented on July 1, 2016. The programs phased in under Rounds 1 and 2 are in place in approximately 100 metropolitan statistical areas (MSAs) throughout the nation, including Honolulu, Hawaii. A 60-day bidding window allows bidders adequate time to prepare and submit their bids. Section 414.412 specifies the rules for submission of bids under a CBP. Each bid submission is evaluated and contracts are awarded to qualified suppliers in accordance with the requirements of section 1847(b)(2) of the Act and § 414.414, which specifies conditions for awarding contracts.

    Sections 1847(b)(6)(A)(i) and (b)(6)(A)(ii) of the Act provide that payment will not be made under Medicare Part B for items and services furnished under a CBP unless the supplier has submitted a bid to furnish those items and has been awarded a contract. Therefore, in order for a supplier that furnishes competitively bid items in a CBA to receive payment for those items, the supplier must have submitted a bid to furnish those particular items and must have been awarded a contract to do so.

    The April 10, 2007 final rule titled, “Medicare Program; Competitive Acquisition for Certain Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS) and Other Issues”, finalized requirements for providers to submit bids under the DMEPOS CBP (§ 414.412(b)) (72 FR 17992, 18088). Section 414.412 outlines the requirements associated with submitting bids under the competitive bidding process. Furthermore, § 414.412(b)(2) states that the bids submitted for each item in a product category cannot exceed the payment amount that would otherwise apply to the item under subpart C or subpart D of part 414, which is the fee schedule amount. Therefore, under our current policy, bid amounts that are submitted under the CBP cannot exceed the fee schedule amount. Contracts cannot be awarded in a CBA if total payments under the contracts are expected to be greater than what would otherwise be paid. In the preamble of the CY 2015 final rule that implemented the methodologies to adjust fee schedule amounts using information from CBPs, we indicated that the adjusted fee schedule amounts become the new bid limits (79 FR 66232).

    Sections 1834(a)(1)(F)(ii) and (iii), 1834(h)(2)(H)(ii), and 1842(s)(3)(B) of the Act mandate adjustments to the fee schedule amounts for certain DMEPOS items furnished on or after January 1, 2016, in areas that are not CBAs, based on information from CBPs. Section 1842(s)(3)(B) of the Act also provides authority for making adjustments to the fee schedule amounts for enteral nutrients, equipment, and supplies (enteral nutrition) based on information from the CBPs. In the CY 2015 final rule (79 FR 66223), we finalized the methodologies for adjusting DMEPOS fee schedule amounts using information from CBPs at § 414.210(g).

    B. Summary of the Proposed Provisions, Public Comments, and Responses to Comments on the Bid Limits for Individual Items Under the Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS) Competitive Bidding Program (CBP)

    The proposed rule, titled “End-Stage Renal Disease Prospective Payment System, Coverage and Payment for Renal Dialysis Services Furnished to Individuals with Acute Kidney Injury, End-Stage Renal Disease Quality Incentive Program, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program Bid Surety Bonds, State Licensure and Appeals Process for Breach of Contract Actions, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program and Fee Schedule Adjustments, Access to Care Issues for Durable Medical Equipment; and the Comprehensive End-Stage Renal Disease Care Model” (81 FR 42802 through 42880), was published in the Federal Register on June 30, 2016, with a comment period that ended on August 23, 2016. In that proposed rule, we noted that if the fee schedule amounts are adjusted as new SPAs are implemented under the CBPs, and these fee schedule amounts and subsequent adjusted fee schedule amounts continue to serve as the bid limits under the programs, the SPAs under the programs can only be lower under future competitions because the bidders cannot exceed the bid limits in the CBP (81 FR 42863). To continue using the adjusted fee schedule amounts as the bid limits for future competitions does not allow SPAs to fluctuate up or down as the cost of furnishing items and services goes up or down over time.

    Section 1847(b)(2)(A)(iii) of the Act prohibits the awarding of contracts under the program if total payments to contract suppliers in an area are expected to be more than would otherwise be paid. For the purpose of implementing section 1847(b)(2)(A)(iii) of the Act, we proposed to revise § 414.412(b) to use the unadjusted fee schedule amounts (the fee schedule amounts that would otherwise apply if no adjustments to the fee schedule amounts based on information from CBPs had been made) for the purpose of establishing limits on bids for individual items for future competitions (including re-competes) (81 FR 42863). We proposed this change because we believe the general purpose of the DMEPOS CBP is to establish reasonable payment amounts for DMEPOS items and services based on competitions among suppliers for furnishing these items and services, with bids from suppliers being based in part on the suppliers' costs of furnishing the items and services at that point in time. We believe the intent of the program is to replace unreasonably high fee schedule amounts for DMEPOS items and services with lower, more reasonable amounts as a result of the competitive bidding. We believe that as long as the amounts established under CBPs are lower than the fee schedule amounts that would otherwise apply had the DMEPOS CBP not been implemented, savings will continue to be generated by the programs.

    For competitions held thus far for contract periods starting on January 1, 2011, July 1, 2013, January 1, 2014, and July 1, 2016, the unadjusted fee schedule amounts were used as the bid limits for all items in all CBAs, and the SPAs for each subsequent competition were generally lower than the SPAs for the preceding competitions. We believe that competition for contracts under the programs will continue to keep bid amounts low and, together with utilizing unadjusted fee schedule amounts as bid limits, ensure that total payments under the program will be less than what would otherwise be paid. We believe that prices established through the competitions should be allowed to fluctuate both up and down over time as long as they do not exceed the previous fee schedule amounts that would otherwise have been paid if the CBP had not been implemented, and savings below the previous fee schedule amounts are achieved. This would not apply to drugs included in a CBP which would otherwise be paid under subpart I of part 414 of 42 CFR based on 95 percent of the average wholesale price in effect on October 1, 2003.

    In addition, the amount of the SPAs established under the program is only one factor affecting total payments made to suppliers for furnishing DMEPOS items and services. Although the bid limits were created and are used for implementation of section 1847(b)(2)(A)(iii) of the Act, they are not the only factor that affects total payments to suppliers. The DMEPOS CBP is effective in reducing fraud and abuse by limiting the number of entities that can submit claims for payment, while ensuring beneficiary access to necessary items and services in CBAs. Section 1847(b)(5) of the Act requires that payment to contract suppliers be made on an assignment-related basis and limits beneficiary cost sharing to 20 percent of the SPA. We will continue to take all of these factors into account before awarding contracts for subsequent competitions in order to determine if total payments to contract suppliers in an area are expected to be less than would otherwise be paid.

    In summary, we proposed to revise § 414.412(b) to specify that the bids submitted for each individual item of DMEPOS other than drugs cannot exceed the fee schedule amounts established in accordance with sections 1834(a), 1834(h), or 1842(s) of the Act for DME, off-the-shelf (OTS) orthotics, and enteral nutrition, respectively, as if adjustments to these amounts based on information from CBPs had not been made (81 FR 42863). Specifically, the bid limits for DME would be based on the 2015 fee schedule amounts established in accordance with section 1834(a)(1)(B)(ii) of the Act, prior to application of section 1834(a)(1)(F)(ii) and (iii) of the Act, but updated for subsequent years based on the factors provided at section 1834(a)(14) of the Act. In other words, the bid limits would be based on fee schedule amounts established in accordance with section 1834(a), without applying the adjustments mandated by section 1834(a)(1)(F)(ii) of the Act. The bid limits for OTS orthotics would also be based on the 2015 fee schedule amounts established in accordance with section 1834(h)(1)(B)(ii) of the Act, prior to application of section 1834(h)(1)(H), but updated for subsequent years based on the factors provided at section 1834(h)(4) of the Act. In other words, the bid limits would be based on fee schedule amounts established in accordance with section 1834(h), without applying the adjustments authorized by section 1834(h)(1)(H) of the Act. The bid limits for enteral nutrients, equipment, and supplies (enteral nutrition) would be based on the 2015 fee schedule amounts established in accordance with section 1842(s)(1) of the Act, prior to application of section 1842(s)(3), but updated for subsequent years based on the factors provided at section 1842(s)(1)(B)(ii) of the Act. In other words, the bid limits would be based on fee schedule amounts established in accordance with section 1842(s)(1), without applying the adjustments authorized by section 1842(s)(3)(B) of the Act (81 FR 42863).

    Finally, with respect to the alternative bidding rules proposed in section VII. above, when evaluating bids for a grouping of similar items in a product category submitted in the form of a single bid for the highest volume item in the grouping, or lead item, we proposed to use the weighted average fee schedule amounts for the grouping of similar items in order to establish the bid limit for the purpose of implementing this proposed provision (81 FR 42863). We proposed to revise § 414.412(b)(2) to use total nationwide allowed services for all areas for the individual items, initially from calendar year 2012, to weight the fee schedule amount for each item for the purpose of determining a bid limit for the lead item based on the weighted average fee schedule amounts for the entire grouping of similar items. This would ensure that the payment amounts established under the CBPs do not exceed the fee schedule amounts that would otherwise apply to the grouping of similar items as a whole. As discussed in the proposed rule, Table 33 below illustrates the data that would be used to calculate the bid limit for the lead item (code E0143) in the grouping of walkers for a CBA located in the state of Maryland using 2015 fee schedule amounts for illustration purposes. The item weight for each code is based on 2012 total nationwide allowed services for the code divided by total nationwide allowed services for 2012 for all of the codes in the grouping (81 FR 42864).

    Table 33—Data Used To Calculate Bid Limit for Lead Item for Walkers for Maryland
    HCPCS | Features | Total nationwide allowed services for 2012 | 2015 purchase fees (MD) | Item weight
    E0143 (lead item) | Folding With Wheels | 958,112 | $115.02 | 0.90734
    E0135 | Folding | 56,399 | 77.51 | 0.05341
    E0149 | Heavy Duty With Wheels | 23,144 | 213.53 | 0.02192
    E0141 | Rigid With Wheels | 6,319 | 110.30 | 0.00598
    E0148 | Heavy Duty | 4,366 | 121.56 | 0.00413
    E0147 | Heavy Duty With Braking & Variable Wheel Resistance | 4,066 | 549.90 | 0.00385
    E0140 | With Trunk Support | 1,483 | 345.08 | 0.00140
    E0144 | Enclosed With Wheels & Seat | 1,275 | 304.80 | 0.00121
    E0130 | Rigid | 788 | 67.19 | 0.00075
    Total | | 1,055,952 | |

    Summing the 2015 fee schedule amounts multiplied by the weights for each item results in a bid limit of $117.37 for lead item E0143. Bids submitted for the lead item E0143 for walkers for a CBA located in the state of Maryland could not exceed $117.37 in this example. We therefore proposed to amend § 414.412(b) to establish this method for determining bid limits for lead items identified in accordance with § 414.412(d)(2), as described in section VII.B above and in the proposed rule (81 FR 42864, 42877), and we are now finalizing this method.
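
    The following minimal sketch (Python, for illustration only; the names and structure are assumptions made here) recomputes the weighted average bid limit for lead item E0143 from the Table 33 figures:

```python
# Illustrative recomputation of the Table 33 bid limit for lead item E0143.
# Each row holds (HCPCS, total nationwide allowed services for 2012, 2015 MD purchase fee).
table_33 = [
    ("E0143", 958_112, 115.02),
    ("E0135", 56_399, 77.51),
    ("E0149", 23_144, 213.53),
    ("E0141", 6_319, 110.30),
    ("E0148", 4_366, 121.56),
    ("E0147", 4_066, 549.90),
    ("E0140", 1_483, 345.08),
    ("E0144", 1_275, 304.80),
    ("E0130", 788, 67.19),
]

total_allowed = sum(allowed for _, allowed, _ in table_33)  # 1,055,952

# Bid limit = sum over items of (item weight x 2015 fee), where each item weight is the
# item's share of total 2012 nationwide allowed services for the grouping.
bid_limit = sum((allowed / total_allowed) * fee for _, allowed, fee in table_33)
print(round(bid_limit, 2))  # 117.37
```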

    C. Response to Comments on Bid Limits for Individual Items Under the Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS) Competitive Bidding Program (CBP)

    We solicited comments and received approximately 13 public comments on our proposals, including comments from medical device manufacturers, suppliers, advocacy groups and coalitions, and the Medicare Payment Advisory Commission (MedPAC).

    The comments and our responses to the comments for these proposals are set forth below.

    Comment: Most commenters supported the proposed bid limit provision. MedPAC suggested that some adjustment to reflect competitive bid results should be factored into the bid limit rather than using the unadjusted 2015 fee schedule amounts, but did not suggest what adjustment should be factored into the bid limits. In addition, commenters stated that the fee schedule amounts should continue to be adjusted in all parts of the country to take into account the information from the CBP.

    Response: We agree with the commenters who supported the proposed provision to use the unadjusted 2015 fee schedule amounts as the bid limits. This will allow suppliers to factor both increases and decreases in costs into their bids, so that SPAs can move up or down over time. We believe the comment from MedPAC is reasonable; however, MedPAC did not provide a specific recommendation for how the bid limits should be adjusted, and therefore we do not have a specific recommendation from the comments that we can act upon in establishing the final rule.

    Final Rule Action: After consideration of comments received on the proposed rule and for the reasons we discussed previously, we are finalizing the proposed § 414.412(b), without changes. This would allow suppliers to take into account both decreases and increases in costs in determining their bids, while ensuring that payments under the CBPs do not exceed the amounts that would otherwise be paid had the DMEPOS CBP not been implemented.

    IX. Access to Care Issues for DME A. Background

    The Medicare and Medicaid programs generally serve distinct populations, but more than ten million individuals (“dual eligible beneficiaries”) were enrolled in both programs in 2014.12 As a group, dual eligible beneficiaries comprise a population with complex chronic care needs and functional impairments.13 Compared to Medicare-only or Medicaid-only beneficiaries, dual eligible beneficiaries are more likely to experience multiple chronic health conditions, mental illness, functional limitations, and cognitive impairments.

    12 Data Analysis Brief: Medicare-Medicaid Dual Enrollment from 2006 through 2013, Medicare-Medicaid Coordination Office (MMCO), Centers for Medicare and Medicaid Services, December 2014 at https://www.cms.gov/Medicare-Medicaid-Coordination/Medicare-and-Medicaid-Coordination/Medicare-Medicaid-Coordination-Office/Downloads/DualEnrollment20062013.pdf.

    13 Overall, these individuals have a higher prevalence of many conditions (including, but not limited to, diabetes, pulmonary disease, stroke, Alzheimer's disease, and mental illness) than their Medicare-only and Medicaid-only peers. Medicare-Medicaid enrollees' health costs are four times greater than those of all other people with Medicare. Medicare Medicaid Enrollee State Profile: The National Summary—2008, Centers for Medicare and Medicaid Services at https://www.cms.gov/Medicare-Medicaid-Coordination/Medicare-and-Medicaid-Coordination/Medicare-Medicaid-Coordination-Office/Downloads/2008NationalSummary.pdf.

    Both Medicare and Medicaid cover Durable Medical Equipment (DME), which can be essential to dual eligible beneficiaries' mobility, respiratory function, and activities of daily living. However, the programs' different eligibility, coverage, and supplier rules can impact access to medically-appropriate DME and repairs of existing equipment for the population enrolled in both benefits.

    B. Summary of Public Comments, and Responses to Comments on Access to Care Issues for DME

    The proposed rule, titled “End-Stage Renal Disease Prospective Payment System, Coverage and Payment for Renal Dialysis Services Furnished to Individuals with Acute Kidney Injury, End-Stage Renal Disease Quality Incentive Program, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program Bid Surety Bonds, State Licensure and Appeals Process for Breach of Contract Actions, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program and Fee Schedule Adjustments, Access to Care Issues for Durable Medical Equipment; and the Comprehensive End-Stage Renal Disease Care Model” (81 FR 42802 through 42880), was published in the Federal Register on June 30, 2016, with a comment period that ended on August 23, 2016. In that proposed rule, for Access to Care Issues for DME, we solicited public comment on the impacts of coordinating Medicare and Medicaid Durable Medical Equipment for dually eligible beneficiaries. We received approximately 36 public comments, including comments from individual beneficiaries, beneficiary advocates, providers, suppliers, and state organizations.

    In this final rule, we provide a summary of the public comments received and our response to them.

    C. Provisions of Request for Information

    CMS sought to examine how overlapping but differing coverage standards for DME under Medicare and Medicaid may affect access to care for beneficiaries and administrative processes for providers and suppliers. In response to a May 2011 Request for Information, CMS received over one hundred comments from a range of stakeholders regarding 29 areas of program alignment opportunities, including DME.14 In the intervening years, CMS has continued to engage stakeholders—including beneficiaries, payers, suppliers, and states—to understand opportunities and challenges caused by differing program requirements.

    14https://www.cms.gov/Medicare-Medicaid-Coordination/Medicare-and-Medicaid-Coordination/Medicare-Medicaid-Coordination-Office/Downloads/FederalRegisterNoticeforComment052011.pdf.

    According to stakeholders, a common barrier to DME access stems from conflicting approval processes among Medicare and Medicaid that can leave suppliers uncertain about whether and how either program will cover items. Medicare is the primary payer for DME and other medical benefits covered by both programs. Medicaid typically pays Medicare cost-sharing amounts and may cover DME that Medicare does not, including certain specialized equipment that promotes independent living. Medicaid pays secondary to most other legally liable payers, including Medicare, and requires those payers to pay to the limit of their legal liability before any Medicaid payment is available. Many of the Medicare requirements related to DME, including the definition and scope of the benefit, are mandated by the statute; therefore, we do not have the authority to bypass or alter these requirements. Medicare generally only processes claims after the equipment is delivered. Because suppliers lack assurance regarding how Medicare or Medicaid will cover DME at the point of sale—and dual eligible beneficiaries cannot pay out-of-pocket up front—suppliers may refuse to provide needed DME.

    Other barriers may emerge for beneficiaries who have Medicaid first and get DME prior to enrolling in Medicare. Stakeholders report that many individuals may have difficulty getting coverage for repairs on equipment obtained through Medicaid coverage, since Medicare will only pay for repairs after making a new medical necessity determination. Additionally, not all Medicaid-approved DME suppliers are Medicare-approved suppliers, meaning beneficiaries may need to change suppliers after enrolling in Medicare.

    CMS requested additional information to help target efforts to promote timely access to DME benefits for people dually eligible for Medicare and Medicaid.

    We requested public input on the following issues related to DME access for dual eligible beneficiaries:

    • Obstacles to timely receipt of needed DME and repairs due to conflicting program requirements.

    • Challenges or opportunities faced by Medicaid beneficiaries who newly qualify for Medicare, including challenges related to new and preexisting items, repairs, and providers.

    • The percentage of Medicare competitive bidding contractors in the state that accept Medicaid.

    • The role of prior authorization policies under either program and whether these policies offer suppliers sufficient advance notice regarding coverage.

    • Impacts on beneficiaries from delayed access to needed equipment and repairs.

    • Whether access problems are more pronounced for certain categories of equipment, and the categories of DME for which access problems arise most frequently or are most difficult to resolve.

    • Challenges faced by suppliers in meeting different supporting documentation and submission requirements.

    • Other prevalent access challenges due to DME program misalignments.

    We also invited feedback regarding potential regulatory or legislative reforms to address DME program misalignments including:

    • State Medicaid program policies that promote coordination of benefits and afford beneficiaries full access to benefits.

    • Strategies to promote access to timely, effective repairs, including from suppliers that did not originally furnish the equipment.

    • Policies to address challenges faced when beneficiaries transition from Medicaid-only to dual eligible status.

    • Other ways to promote timely DME access for dual eligible beneficiaries, without introducing new program integrity risks or increasing total expenditures in either Medicare or Medicaid.

    We requested that commenters include specific examples when possible, while avoiding the transmission of protected information, and that they include a point of contact who can provide additional information upon request.

    The comments and our response to the comments for issues related to DME access for dual eligible beneficiaries are set forth below.

    Comments: Overall the comments reinforced that dual eligible beneficiaries face numerous challenges navigating the two programs to obtain new DME and repairs of existing equipment. Several commenters stated that the general lack of Medicaid reimbursement for the Medicare deductibles and coinsurances for Qualified Medicare Beneficiaries (that is, due to states opting for the “lesser of” policy, in which they may opt to only cover those costs to the extent that the Medicaid payment rate exceeds what Medicare pays for the same item) results in supplier reluctance to serve dual eligible beneficiaries generally. Several commenters pointed out that beneficiaries with complex needs often need to use multiple suppliers to obtain all needed items and often face long wait times to receive items. Some commenters gave examples of beneficiaries unable to access needed DME due to limited supplier options with limited inventory, especially in rural and small communities. A few commenters offered examples of how beneficiaries face difficulties obtaining and repairing equipment while in a skilled nursing facility, which may delay discharge to the community. A few commenters reported problems obtaining repairs and backup equipment when necessary. Some commenters raised concerns about challenges that arise when suppliers selected through Medicare's competitive bidding program do not accept Medicaid.

    In addition to elaborating on the challenges faced, a number of commenters suggested potential changes to the administration of Medicare and Medicaid DME benefits. With respect to Medicare, some commenters suggested that CMS require that DME suppliers accept Medicaid as a condition of being selected in Medicare's competitive bidding program. One commenter suggested expansions to the Advance Determination of Medicare Coverage (ADMC) policy related to certain replacement parts. Many commenters supported certain Medicare payment changes to promote easier access to needed repairs. Some commenters suggested establishing a Medicare transition policy for DME similar to the Part D transition policy that would cover suppliers and certain DME.

    Commenters also suggested changes to Medicaid administrative processes. Many commenters suggested a Medicaid prior authorization process that assures suppliers of Medicaid coverage if Medicare were to deny coverage. A few commenters suggested clarifying that Medicare denial should not be required for items Medicare never covers. Finally, some commenters suggested that any such changes apply as well to Medicaid managed care organizations that enroll dual eligible beneficiaries and are contracted to provide Medicaid DME coverage.

    Response: We appreciate the range and depth of comments and suggestions we received. We will consider these comments carefully as we contemplate future policies. We are also exploring ways to share best practices with the State Medicaid Agencies to promote more efficient and effective “wrap around” coverage at the state level.

    X. Comprehensive End-Stage Renal Disease Care Model and Future Payment Models A. Background

    The Comprehensive ESRD Care (CEC) Model is a CMS test of a dialysis-specific Accountable Care Organization (ACO) model. In the model, dialysis clinics, nephrologists and other providers join together to create an End-Stage Renal Disease (ESRD) Seamless Care Organization (ESCO) to coordinate care for aligned beneficiaries. ESCOs are accountable for clinical quality outcomes and financial outcomes measured by Medicare Part A and B spending, including all spending on dialysis services for their aligned ESRD beneficiaries. This model encourages dialysis providers to think beyond their traditional roles in care delivery and supports them as they provide patient-centered care that will address beneficiaries' health needs, both in and outside of the dialysis clinic.

    CMS sought input on innovative approaches to care delivery and financing for beneficiaries with ESRD. We explained that this input could include ideas related to innovations that would go above and beyond the CEC Model with regard to financial incentives, populations or providers engaged, or the scale of change, among other topics. We stated that we would consider information received as we developed future payment models in this area, and as we launched the solicitation for a second round of entry into the CEC Model to begin on January 1, 2017.

    B. Summary of the Proposed Provisions, Public Comments, and Responses to Comments on the Comprehensive End-Stage Renal Disease Care Model and Future Payment Models

    The proposed rule, titled “End-Stage Renal Disease Prospective Payment System, Coverage and Payment for Renal Dialysis Services Furnished to Individuals with Acute Kidney Injury, End-Stage Renal Disease Quality Incentive Program, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program Bid Surety Bonds, State Licensure and Appeals Process for Breach of Contract Actions, Durable Medical Equipment, Prosthetics, Orthotics and Supplies Competitive Bidding Program and Fee Schedule Adjustments, Access to Care Issues for Durable Medical Equipment; and the Comprehensive End-Stage Renal Disease Care Model” (81 FR 42802 through 42880), was published in the Federal Register on June 30, 2016, with a comment period that ended on August 23, 2016. In that proposed rule, for the Comprehensive End-Stage Renal Disease Care Model and Future Payment Models, we sought comments on a range of issues affecting the development of alternative payment models (APMs) and advanced APMs related to the care of beneficiaries with kidney disease. We received approximately 21 public comments, including comments from ESRD facilities; national renal groups, nephrologists and patient organizations; patients and care partners; manufacturers; and nurses.

    We also noted a solicitation for new entrants to the CEC model, which has since closed. New ESCOs will be announced on or before January 1, 2017, when they begin participation in the model.

    C. Provisions of the Notice

    Section 1115A of the Social Security Act (the Act), as added by section 3021 of the Affordable Care Act, authorizes the Innovation Center to test innovative payment and service delivery models that reduce spending under Medicare, Medicaid or The Children's Health Insurance Program (CHIP), while preserving or enhancing the quality of care. We sought public input to gather responses to the following questions that will help us to develop and refine innovative payment models related to kidney care.

    Questions:

    1. How could participants in alternative payment models (APMs) and advanced APMs coordinate care for beneficiaries with chronic kidney disease and improve their transition into dialysis?

    2. How could participants in APMs and advanced APMs target key interventions for beneficiaries at different stages of chronic kidney disease?

    3. How could participants in APMs and advanced APMs better promote increased rates of renal transplantation?

    4. How could CMS build on the CEC Model or develop alternative approaches for improving the quality of care and reducing costs for ESRD beneficiaries?

    5. Are there specific innovations that are most appropriate for smaller dialysis organizations?

    6. How could primary-care based models better integrate with APMs or advanced APMs focused on kidney care to help prevent development of chronic kidney disease in patients and progression to ESRD? Primary-care based models may include patient-centered medical homes or other APMs.

    7. How could APMs and advanced APMs help reduce disparities in rates of chronic kidney disease (CKD)/ESRD and adverse outcomes among racial/ethnic minorities?

    8. Are there innovative ways APMs and advanced APMs can facilitate changes in care delivery to improve the quality of life for CKD and ESRD patients?

    9. Are there specific innovations that are most appropriate for evaluating patients for suitability for home dialysis and promoting its use in appropriate populations?

    10. Are there specific innovations that could most effectively be tested in a potential mandatory model?

    Additional information on the Comprehensive ESRD Care Model is located at: innovation.cms.gov/initiatives/comprehensive-ESRD-care.

    The comments and our responses to the comments are set forth below.

    Comment: Several commenters recognized the potential value of APMs and advanced APMs in the care of beneficiaries with CKD, ESRD and renal transplant. Commenters discussed the structures that might be most effective for such models, as well as the role of payment incentives, quality measures, and waivers of existing regulations. Several commenters identified attributes of existing models and programs that would be helpful in such models. In addition, several commenters described optimal care patterns around the beneficiaries' transition from CKD to ESRD and renal replacement therapy or transplant.

    Response: We thank commenters for their suggestions and input. We agree that there are a number of opportunities to improve the care of, and reduce the costs associated with, beneficiaries with kidney disease, and we appreciate the detailed suggestions offered for such improvement; however, we are not finalizing any policies at this time. We intend to further develop these ideas and address the comments in future rulemaking.

    XI. Technical Correction for 42 CFR 413.194 and 413.215

    In the CY 2013 ESRD PPS final rule (77 FR 67520), we revised § 413.89(h)(3) to set forth the percentage reduction in allowable bad debt payment required by section 1861(v)(1)(W) of the Act for ESRD facilities for cost reporting periods beginning during fiscal year 2013, fiscal year 2014 and subsequent fiscal years. We also revised § 413.89(h)(3) to set forth the applicability of the cap on bad debt reimbursement to ESRD facilities for cost reporting periods beginning between October 1, 2012 and December 31, 2012. In addition, in that rule, we removed and reserved § 413.178, since there were revised provisions set out at § 413.89.

    As a part of these revisions, we intended to correct the cross-reference in §§ 413.194 and 413.215 so that § 413.89(h)(3) was referenced instead of § 413.178. We inadvertently omitted the regulations text that would have made those changes. Therefore, we proposed a technical correction to revise the regulations text at §§ 413.194 and 413.215 to correct the cross-reference to the Medicare bad debt reimbursement regulation, so that §§ 413.194 and 413.215 would reference 42 CFR 413.89(h)(3) instead of the current outdated reference to § 413.178.

    We did not receive any comments on our proposed technical correction to revise the regulations text at §§ 413.194 and 413.215; therefore, we are finalizing this revision as proposed.

    XII. Waiver of Proposed Rulemaking

    We ordinarily publish a notice of proposed rulemaking in the Federal Register and invite public comment prior to a rule taking effect in accordance with section 553(b) of the Administrative Procedure Act (APA) (5 U.S.C. 553(b)) and section 1871(b)(1) of the Act. We can waive this procedure, however, if the agency finds that the notice and comment procedure is impracticable, unnecessary, or contrary to the public interest and incorporates a statement of the finding and reasons in the rule. See section 553(b)(B) of the APA and section 1871(b)(2)(C) of the Act.

    We find it unnecessary to undertake notice and comment rulemaking in this instance for the additional changes we are making to the definition of “hearing officer” in § 414.402, because these are merely technical edits to conform the definition to the revised regulation we are finalizing at § 414.423, which was promulgated under notice and comment rulemaking procedures. Removing the reference to “contract terminations” and the abbreviation “(HO)” from the existing definition of “hearing officer” will reconcile the definition with the terminology and appeals process we are adopting in this final rule and thus makes additional notice and comment unnecessary. Therefore, under section 553(b)(B) of the APA and section 1871(b)(1) of the Act, we find good cause to waive notice and comment procedures.

    XIII. Advancing Health Information Exchange

    HHS has a number of initiatives designed to improve health and health care quality through the adoption of health information technology (health IT) and nationwide health information exchange. As discussed in the August 2013 Statement “Principles and Strategies for Accelerating Health Information Exchange” (available at http://www.healthit.gov/sites/default/files/acceleratinghieprinciples_strategy.pdf), HHS believes that all individuals, their families, their healthcare and social service providers, and payers should have consistent and timely access to health information in a standardized format that can be securely exchanged between the patient, providers, and others involved in the individual's care. Health IT that facilitates the secure, efficient, and effective sharing and use of health-related information when and where it is needed is an important tool for settings across the continuum of care, including ESRD facilities.

    The Office of the National Coordinator for Health Information Technology (ONC) has released a document entitled “Connecting Health and Care for the Nation: A Shared Nationwide Interoperability Roadmap Version 1.0” (Roadmap) (available at https://www.healthit.gov/sites/default/files/hie-interoperability/nationwide-interoperability-roadmap-final-version-1.0.pdf), which describes barriers to interoperability across the current health IT landscape, the desired future state that the industry believes will be necessary to enable a learning health system, and a suggested path for moving from the current state to the desired future state. In the near term, the Roadmap focuses on actions that will enable a majority of individuals and providers across the care continuum to send, receive, find and use a common set of electronic clinical information at the nationwide level by the end of 2017. Moreover, the vision described in the Roadmap significantly expands the types of electronic health information, information sources, and information users well beyond clinical information derived from electronic health records (EHRs). This shared strategy is intended to reflect important actions that both public and private sector stakeholders can take to enable nationwide interoperability of electronic health information such as: (1) Establishing a coordinated governance framework and process for nationwide health IT interoperability; (2) improving technical standards and implementation guidance for sharing and using a common clinical data set; (3) enhancing incentives for sharing electronic health information according to common technical standards, starting with a common clinical data set; and (4) clarifying privacy and security requirements that enable interoperability.

    In addition, ONC has released the 2016 Interoperability Standards Advisory (available at https://www.healthit.gov/sites/default/files/2016-interoperability-standards-advisory-final-508.pdf), which provides a list of the best available standards and implementation specifications to enable priority health information exchange functions. Providers, payers, and vendors are encouraged to take these “best available standards” into account as they implement interoperable health information exchange across the continuum of care.

    We encourage stakeholders to utilize health information exchange and certified health IT to effectively and efficiently help providers improve internal care delivery practices, support management of care across the continuum, enable the reporting of electronically specified clinical quality measures, and improve efficiencies and reduce unnecessary costs. As adoption of certified health IT increases and interoperability standards continue to mature, HHS will seek to reinforce standards through relevant policies and programs.

    XV. Collection of Information Requirements

    A. Legislative Requirement for Solicitation of Comments

    Under the Paperwork Reduction Act of 1995, we are required to provide 30-day notice in the Federal Register and solicit public comment before a collection of information requirement is submitted to the Office of Management and Budget (OMB) for review and approval. In order to fairly evaluate whether an information collection should be approved by OMB, section 3506(c)(2)(A) of the Paperwork Reduction Act of 1995 requires that we solicit comment on the following issues:

    • The need for the information collection and its usefulness in carrying out the proper functions of our agency.

    • The accuracy of our estimate of the information collection burden.

    • The quality, utility, and clarity of the information to be collected.

    • Recommendations to minimize the information collection burden on the affected public, including automated collection techniques.

    B. Requirements in Regulation Text

    In sections II and III of this final rule, we include changes to the regulatory text for the ESRD PPS in CY 2017 as well as the inclusion of subpart K to part 494 for AKI. However, we note that those changes do not impose any new information collection requirements.

    In section V of this final rule, we discussed changes to the DMEPOS Competitive Bidding Program. Section V.B.1 discusses the changes to the program relative to the bid surety bond requirements imposed at § 414.412. As a result of the new bid surety bond requirements, we have revised the information collection request (ICR) associated with the DMEPOS Competitive Bidding Program. The ICR is currently approved under OMB control number 0938-1016 (CMS-10169). Specifically, we have revised Form A (Application for DMEPOS Competitive Bidding Program) in the ICR to account for the new bid surety bond requirements. The revised form was under development and not available for public review and comment when the DMEPOS Competitive Bidding Program proposed rule was published. Therefore, we have published a separate 60-day Federal Register notice to announce the changes to the ICR. The notice was published on October 14, 2016 (81 FR 71100). The notice contains instructions on how to both obtain copies of and submit comments on the revised ICR. Copies of the revised ICR can be obtained at https://www.cms.gov/Regulations-and-Guidance/Legislation/PaperworkReductionActof1995/PRA-Listing-Items/CMS-10169.html?DLPage=1&DLEntries=10&DLSort=1&DLSortDir=descending. At the conclusion of the 60-day public comment period, we will review all public comments (if applicable) and then publish a 30-day Federal Register notice to announce the submission to OMB as well as another public comment period.

    C. Additional Information Collection Requirements

    This final rule does not impose any new information collection requirements in the regulation text, as specified above. However, this final rule does make reference to several associated information collections that are not discussed in the regulation text contained in this document. The following is a discussion of these information collections.

    1. ESRD QIP

    a. Wage Estimates

    In the CY 2016 ESRD PPS Final Rule (80 FR 69069), we stated that it was reasonable to assume that Medical Records and Health Information Technicians, who are responsible for organizing and managing health information data,15 are the individuals tasked with submitting measure data to CROWNWeb and NHSN for purposes of the Data Validation Studies rather than a Registered Nurse, whose duties are centered on providing and coordinating care for patients.16 The mean hourly wage of a Medical Records and Health Information Technician is $18.68 per hour. Under OMB Circular A-76, in calculating direct labor, agencies should not only include salaries and wages, but also “other entitlements” such as fringe benefits.17 This Circular provides that the civilian position full fringe benefit cost factor is 36.25 percent. Therefore, using these assumptions, we estimate an hourly labor cost of $25.45 as the basis of the wage estimates for all collection of information calculations in the ESRD QIP.

    15http://www.bls.gov/ooh/healthcare/medical-records-and-health-information-technicians.htm.

    16http://www.bls.gov/ooh/healthcare/registered-nurses.htm.

    17http://www.whitehouse.gov/omb/circulars_a076_a76_incl_tech_correction.
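    For readers who want to trace the arithmetic, the following minimal Python sketch reproduces the loaded hourly labor cost used throughout the collection of information estimates in this section; the wage and fringe benefit figures are taken directly from the discussion above.

        # Loaded hourly labor cost used for the ESRD QIP burden estimates.
        # Inputs are the figures cited above: the mean hourly wage for a
        # Medical Records and Health Information Technician and the civilian
        # position full fringe benefit cost factor from OMB Circular A-76.
        mean_hourly_wage = 18.68
        fringe_benefit_factor = 0.3625

        loaded_hourly_wage = mean_hourly_wage * (1 + fringe_benefit_factor)
        print(round(loaded_hourly_wage, 2))  # 25.45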

    b. Time Required To Submit Data Based on Reporting Requirements

    In the CY 2016 ESRD PPS Final Rule (80 FR 69070), we estimated that the time required to submit measure data using CROWNWeb is 2.5 minutes per data element submitted, which takes into account the small percentage of data that is manually reported, as well as the human interventions required to modify batch submission files such that they meet CROWNWeb's internal data validation requirements.

    c. Data Validation Requirements for the PY 2019 ESRD QIP

    In our proposed rule (81 FR 42867), we outlined our data validation proposal for PY 2019. Specifically, for the CROWNWeb validation, we proposed to randomly sample records from 300 facilities as part of our continuing pilot data-validation program. Each sampled facility would be required to produce approximately 10 records, and the sampled facilities will be reimbursed by our validation contractor for the costs associated with copying and mailing the requested records. The burden associated with these validation requirements is the time and effort necessary to submit the requested records to a CMS contractor. We estimate that it will take each facility approximately 2.5 hours to comply with this requirement. If 300 facilities are asked to submit records, we estimate that the total combined annual burden for these facilities will be 750 hours (300 facilities × 2.5 hours). Since we anticipate that Medical Records and Health Information Technicians or similar administrative staff would submit this data, we estimate that the aggregate cost of the CROWNWeb data validation would be approximately $19,088 (750 hours × $25.45/hour) in total, or approximately $64 ($19,088/300 facilities) per facility in the sample. The burden associated with these requirements is captured in an information collection request (OMB control number 0938-1289).

    Under the proposed data validation study for validating data reported to the NHSN Dialysis Event Module, we proposed to randomly select 35 facilities. A CMS contractor will send these facilities requests for medical records for all patients with “candidate events” during the evaluation period. Overall, we estimate that, on average, quarterly lists will include two positive blood cultures per facility, but we recognize these estimates may vary considerably from facility to facility. We estimate that it will take each facility approximately 60 minutes to comply with this requirement (30 minutes for each of the two quarters in the evaluation period). If 35 facilities are asked to submit records, we estimate that the total combined annual burden for these facilities will be 35 hours (35 facilities × 1 hour). Since we anticipate that Medical Records and Health Information Technicians or similar administrative staff would submit this data, we estimate that the aggregate cost of the NHSN data validation would be $890.75 (35 hours × $25.45/hour) in total, or $25.45 ($890.75/35 facilities) per facility in the sample. The burden associated with these requirements is captured in an information collection request (OMB control number 0938-NEW).
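    A brief sketch of the two validation-study burden calculations above follows; all inputs come from the estimates stated in this section, and the results are rounded as in the rule.

        # Estimated burden of the PY 2019 data validation studies.
        loaded_hourly_wage = 25.45  # hourly labor cost derived earlier in this section

        # CROWNWeb pilot data-validation study
        crownweb_facilities = 300
        crownweb_hours_each = 2.5
        crownweb_total_hours = crownweb_facilities * crownweb_hours_each        # 750 hours
        crownweb_total_cost = crownweb_total_hours * loaded_hourly_wage         # $19,087.50, about $19,088
        crownweb_cost_per_facility = crownweb_total_cost / crownweb_facilities  # about $64

        # NHSN Dialysis Event Module validation study
        nhsn_facilities = 35
        nhsn_hours_each = 1.0  # 30 minutes in each of two quarters
        nhsn_total_hours = nhsn_facilities * nhsn_hours_each                    # 35 hours
        nhsn_total_cost = nhsn_total_hours * loaded_hourly_wage                 # $890.75
        nhsn_cost_per_facility = nhsn_total_cost / nhsn_facilities              # $25.45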

    d. Ultrafiltration Rate Reporting Measure

    We proposed to include, beginning with the PY 2020 ESRD QIP, a reporting measure requiring facilities to report in CROWNWeb an ultrafiltration rate at least once per month for each qualifying patient. We estimate the burden associated with this measure to be the time and effort necessary for facilities to collect and submit the information required for the Ultrafiltration Rate Reporting Measure. We estimated that approximately 6,454 facilities will treat 548,430 ESRD patients nationwide in PY 2020. The Ultrafiltration Rate Reporting Measure requires facilities to report 13 elements per patient per month (156 elements per patient per year) and we estimate it will take facilities approximately 0.042 hours (2.5 minutes) to submit data for each data element. Therefore, the estimated total annual burden associated with reporting this measure in PY 2020 is approximately 3,593,313 hours (548,430 ESRD patients nationwide × 156 data elements/year × 0.042 hours per element), or approximately 553 hours per facility. We anticipate that Medical Records and Health Information Technicians or similar administrative staff will be responsible for this reporting. We therefore believe the cost for all ESRD facilities to comply with the reporting requirements associated with the ultrafiltration rate reporting measure would be approximately $91,449,815.80 (3,593,313 × $25.45/hour), or $14,082.20 per facility. The burden associated with these requirements is captured in an information collection request (OMB control number 0938-NEW).
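    The aggregate burden estimate for the ultrafiltration rate reporting measure can be reproduced with the sketch below, which uses the rounded 0.042 hours-per-element figure from the text; the per-facility amounts stated in the rule reflect the agency's own rounding and differ slightly from a straight division of the aggregate figures.

        # Estimated PY 2020 burden of the Ultrafiltration Rate Reporting Measure.
        loaded_hourly_wage = 25.45
        facilities = 6454
        patients = 548430
        elements_per_patient_per_year = 13 * 12   # 13 elements per patient per month
        hours_per_element = 0.042                 # 2.5 minutes, rounded as in the text

        total_hours = patients * elements_per_patient_per_year * hours_per_element
        total_cost = total_hours * loaded_hourly_wage
        print(round(total_hours))                # roughly 3,593,313 hours
        print(round(total_cost, 2))              # roughly $91.4 million
        print(round(total_hours / facilities))   # roughly 557 hours per facility (the rule states approximately 553)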

    We sought comments on the Collection of Information proposals and did not receive any comments. Therefore, we are finalizing as proposed.

    XVI. Economic Analyses

    A. Regulatory Impact Analysis

    1. Introduction

    We have examined the impacts of this rule as required by Executive Order 12866 on Regulatory Planning and Review (September 30, 1993), Executive Order 13563 on Improving Regulation and Regulatory Review (January 18, 2011), the Regulatory Flexibility Act (RFA) (September 19, 1980, Pub. L. 96-354), section 1102(b) of the Social Security Act, section 202 of the Unfunded Mandates Reform Act of 1995 (March 22, 1995; Pub. L. 104-4), Executive Order 13132 on Federalism (August 4, 1999) and the Congressional Review Act (5 U.S.C. 804(2)).

    Executive Orders 12866 and 13563 direct agencies to assess all costs and benefits of available regulatory alternatives and, if regulation is necessary, to select regulatory approaches that maximize net benefits (including potential economic, environmental, public health and safety effects, distributive impacts, and equity). Section 3(f) of Executive Order 12866 defines a “significant regulatory action” as an action that is likely to result in a rule: (1) Having an annual effect on the economy of $100 million or more in any 1 year, or adversely and materially affecting a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or state, local or tribal governments or communities (also referred to as economically significant); (2) creating a serious inconsistency or otherwise interfering with an action taken or planned by another agency; (3) materially altering the budgetary impacts of entitlement grants, user fees, or loan programs or the rights and obligations of recipients thereof; or (4) raising novel legal or policy issues arising out of legal mandates, the President's priorities, or the principles set forth in the Executive Order.

    A regulatory impact analysis (RIA) must be prepared for major rules with economically significant effects ($100 million or more in any 1 year). This rule is not economically significant within the meaning of section 3(f)(1) of the Executive Order, since it does not meet the $100 million threshold. However, OMB has determined that the actions are significant within the meaning of section 3(f)(4) of the Executive Order. Therefore, OMB has reviewed these final regulations, and the Departments have provided the following assessment of their impact.

    We sought comments on the Regulatory Impact Analysis but did not receive any comments. Therefore, we are not making any changes at this time and are finalizing as proposed.

    2. Statement of Need

    This rule finalizes a number of annual updates and several policy changes to the ESRD PPS in CY 2017. The annual updates include the CY 2017 wage index values, the wage index budget-neutrality adjustment factor, and outlier payment threshold amounts. In addition to these annual updates, we are changing the home dialysis training policy. Failure to publish this final rule by November 1, 2016, would result in ESRD facilities not receiving appropriate payments in CY 2017 for renal dialysis services furnished to ESRD patients in accordance with section 1861(s)(2)(F) of the Act.

    This rule finalizes the provisions in TPEA which provide for coverage and payment for renal dialysis services furnished by ESRD facilities to individuals with AKI. Failure to publish this final rule by November 1, 2016 would result in a failure to comply with the requirements of the Act, as added by the TPEA, including ESRD facilities not receiving payment for furnishing renal dialysis services to patients with AKI.

    This rule finalizes requirements for the ESRD QIP, including adopting a measure set for the PY 2020 program, as directed by section 1881(h) of the Act. Failure to finalize requirements for the PY 2020 ESRD QIP would prevent continuation of the ESRD QIP beyond PY 2019. In addition, finalizing requirements for the PY 2020 ESRD QIP provides facilities with more time to review and fully understand new measures before their implementation in the ESRD QIP.

    This rule finalizes a requirement for the DMEPOS CBP for bid surety bonds and state licensure in accordance with section 1847 of the Act, as amended by section 522(a) of MACRA. The rule also finalizes an appeals process for all breach of contract actions CMS may take.

    This rule also finalizes a method for adjusting DMEPOS fee schedule amounts for similar items with different features using information from the DMEPOS CBPs, a method for determining single payment amounts for similar items with different features under the DMEPOS CBPs, and a revision to the bid limits for individual items under the DMEPOS CBP.

    3. Overall Impact

    We estimate that the finalized revisions to the ESRD PPS will result in an increase of approximately $80 million in payments to ESRD facilities in CY 2017, which includes the amount associated with updates to the outlier thresholds, home dialysis training policy, and updates to the wage index. We estimate that approximately $2.0 million would now be paid to ESRD facilities for dialysis treatments provided to AKI beneficiaries.

    For PY 2019, we anticipate that the new burdens associated with the collection of information requirements will be approximately $21 thousand, for an overall impact of approximately $15.5 million as a result of the PY 2019 ESRD QIP.18 For PY 2020, we estimate that the final requirements related to the ESRD QIP will cost approximately $91 million, and the payment reductions will result in a total impact of approximately $32 million across all facilities, resulting in a total impact from the PY 2020 ESRD QIP of approximately $123 million.

    18 We note that the aggregate impact of the PY 2018 ESRD QIP was included in the CY 2015 ESRD PPS final rule (79 FR 66256 through 66258). The previously finalized aggregate impact of $15.5 million reflects the PY 2019 estimated payment reductions and the collection of information requirements for the NHSN Healthcare Personnel Influenza Vaccination reporting measure.

    As explained previously in this final rule, we anticipate that DMEPOS CBP bidding entities will be impacted by the bid surety bond requirement. Bidding entities will be required to purchase and provide proof of a bid surety bond for each CBA in which they bid. We estimate that the total cost for all bidding suppliers in Round 2019 will be $13,000,000. The state licensure requirement will have no new impact on the supplier community because this is already a basic supplier eligibility requirement at § 414.414(b)(3), and the appeals process for breach of contract actions may have a beneficial, positive impact on suppliers.

    Overall, the bid surety bond requirement may have a positive financial impact on the CBP as we anticipate that the requirement will provide an additional incentive for bidding entities to submit substantiated bids. However, there will be an administrative burden for CMS to implement the bid surety bond requirement. We expect minimal administrative costs associated with the state licensure and appeals process for breach of DMEPOS CBP contract provisions.

    We do not anticipate that the DMEPOS Competitive Bidding regulations we are finalizing will have an impact on Medicare beneficiaries.

    We estimate that our final methodology for adjusting DMEPOS fee schedule amounts for similar items with different features using information from the DMEPOS CBPs, changes for determining single payment amounts for similar items with different features under the DMEPOS CBPs, and revisions to the bid limits for items under the DMEPOS CBP will have no significant impact on the suppliers, beneficiaries, Part B trust fund and economy as a whole.

    B. Detailed Economic Analysis

    1. CY 2017 End-Stage Renal Disease Prospective Payment System

    a. Effects on ESRD Facilities

    To understand the impact of the changes affecting payments to different categories of ESRD facilities, it is necessary to compare estimated payments in CY 2016 to estimated payments in CY 2017. To estimate the impact among various types of ESRD facilities, it is imperative that the estimates of payments in CY 2016 and CY 2017 contain similar inputs. Therefore, we simulated payments only for those ESRD facilities for which we are able to calculate both current payments and new payments.

    For this final rule, we used the June 2016 update of CY 2015 National Claims History file as a basis for Medicare dialysis treatments and payments under the ESRD PPS. We updated the 2015 claims to 2016 and 2017 using various updates. The updates to the ESRD PPS base rate are described in section II.B.3 of this final rule. Table 34 shows the impact of the estimated CY 2017 ESRD payments compared to estimated payments to ESRD facilities in CY 2016.

    Table 34—Impact of Changes in Payment to ESRD Facilities for CY 2017 Final Rule
    [Percent change in total payments to ESRD facilities (both program and beneficiaries)]

    Columns: A = Number of facilities; B = Number of treatments (in millions); C = Effect of 2017 changes in outlier policy (%); D = Effect of 2017 changes in wage indexes (%); E = Effect of 2017 changes in payment rate update (%); F = Effect of total 2017 proposed changes (outlier, wage indexes, training adjustment and routine updates to the payment rate) (%).

    Facility type | A | B | C | D | E | F
    All Facilities | 6,542 | 44.5 | 0.2 | 0.0 | 0.55 | 0.73
    Type:
      Freestanding | 6,106 | 42.0 | 0.2 | 0.0 | 0.55 | 0.7
      Hospital based | 436 | 2.5 | 0.3 | 0.1 | 0.55 | 0.9
    Ownership Type:
      Large dialysis organization | 4,606 | 31.7 | 0.2 | 0.0 | 0.55 | 0.7
      Regional chain | 999 | 6.9 | 0.2 | 0.0 | 0.54 | 0.7
      Independent | 578 | 3.9 | 0.1 | 0.0 | 0.54 | 0.7
      Hospital based 1 | 358 | 2.1 | 0.3 | 0.1 | 0.55 | 0.9
    Geographic Location:
      Rural | 1,225 | 6.4 | 0.2 | 0.1 | 0.54 | 0.9
      Urban | 5,317 | 38.2 | 0.2 | 0.0 | 0.55 | 0.7
    Census Region:
      East North Central | 1,056 | 6.2 | 0.2 | −0.1 | 0.55 | 0.7
      East South Central | 528 | 3.3 | 0.2 | −0.1 | 0.54 | 0.7
      Middle Atlantic | 713 | 5.5 | 0.2 | −0.1 | 0.54 | 0.7
      Mountain | 375 | 2.2 | 0.1 | −0.1 | 0.55 | 0.5
      New England | 183 | 1.4 | 0.2 | −0.5 | 0.56 | 0.2
      Pacific 2 | 790 | 6.3 | 0.1 | 0.5 | 0.55 | 1.2
      Puerto Rico and Virgin Islands | 51 | 0.3 | 0.2 | −0.3 | 0.54 | 0.5
      South Atlantic | 1,485 | 10.5 | 0.2 | −0.2 | 0.56 | 0.6
      West North Central | 473 | 2.3 | 0.2 | 0.0 | 0.56 | 0.7
      West South Central | 888 | 6.4 | 0.2 | 0.1 | 0.54 | 0.8
    Facility Size:
      Less than 4,000 treatments 3 | 1,414 | 3.2 | 0.2 | 0.1 | 0.57 | 0.8
      4,000 to 9,999 treatments | 2,424 | 12.3 | 0.2 | 0.0 | 0.54 | 0.7
      10,000 or more treatments | 2,683 | 29.0 | 0.2 | 0.0 | 0.55 | 0.7
      Unknown | 21 | 0.0 | 0.3 | 0.2 | 0.59 | 1.0
    Percentage of Pediatric Patients:
      Less than 2% | 6,435 | 44.2 | 0.2 | 0.0 | 0.55 | 0.7
      Between 2% and 19% | 41 | 0.3 | 0.2 | 0.0 | 0.59 | 0.7
      Between 20% and 49% | 9 | 0.0 | 0.0 | 0.2 | 0.52 | 0.7
      More than 50% | 57 | 0.1 | 0.0 | −0.1 | 0.52 | 0.4

    1 Includes hospital-based ESRD facilities not reported to have large dialysis organization or regional chain ownership.
    2 Includes ESRD facilities located in Guam, American Samoa and the Northern Mariana Islands.
    3 Of the 1,414 ESRD facilities with less than 4,000 treatments, only 352 qualify for the low-volume adjustment. The low-volume adjustment is mandated by Congress, and is not applied to pediatric patients. The impact to these low-volume facilities is a 0.8 percent increase in payments.
    Note: Totals do not necessarily equal the sum of rounded parts, as percentages are multiplicative, not additive.

    Column A of the impact table indicates the number of ESRD facilities for each impact category and column B indicates the number of dialysis treatments (in millions). The overall effect of the final changes to the outlier payment policy described in section II.B.3.c of this final rule is shown in column C. For CY 2017, the impact on all ESRD facilities as a result of the changes to the outlier payment policy would be a 0.2 percent increase in estimated payments. Nearly all ESRD facilities are anticipated to experience a positive effect in their estimated CY 2017 payments as a result of the outlier policy changes.

    Column D shows the effect of the final CY 2017 wage indices. The categories of types of facilities in the impact table show changes in estimated payments ranging from no change (0.0 percent) to a 0.1 percent increase due to these updates.

    Column E shows the effect of the final ESRD PPS payment rate update of 0.55 percent. This update reflects the final ESRDB market basket percentage increase factor for CY 2017 of 2.1 percent, the 1.25 percent reduction required by section 1881(b)(14)(F)(i)(I) of the Act, and the MFP adjustment of 0.3 percent.
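    As a simple check on the update factor described above, the three components combine additively; the sketch below uses only the percentages stated in the text and reproduces the 0.55 percent figure.

        # CY 2017 ESRD PPS payment rate update components (percentage points).
        market_basket_increase = 2.1   # final CY 2017 ESRDB market basket increase
        statutory_reduction = 1.25     # reduction required by section 1881(b)(14)(F)(i)(I) of the Act
        mfp_adjustment = 0.3           # multifactor productivity adjustment

        payment_rate_update = market_basket_increase - statutory_reduction - mfp_adjustment
        print(round(payment_rate_update, 2))  # 0.55 percent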

    Column F reflects the overall impact, that is, the effects of the outlier policy changes, the wage index, the effect of the change in the home dialysis training add-on from $50.16 to $95.60 and the effect of the payment rate update. We expect that overall ESRD facilities will experience a 0.73 percent increase in estimated payments in 2017. The categories of types of facilities in the impact table show impacts ranging from an increase of 0.7 percent to an increase of 0.9 percent in their 2017 estimated payments.

    b. Effects on Other Providers

    Under the ESRD PPS, Medicare pays ESRD facilities a single bundled payment for renal dialysis services, which may have been separately paid to other providers (for example, laboratories, durable medical equipment suppliers, and pharmacies) by Medicare prior to the implementation of the ESRD PPS. Therefore, in CY 2017, we estimate that the ESRD PPS would have zero impact on these other providers.

    c. Effects on the Medicare Program

    We estimate that Medicare spending (total Medicare program payments) for ESRD facilities in CY 2017 would be approximately $9.6 billion. This estimate takes into account a projected increase in fee-for-service Medicare dialysis beneficiary enrollment of 1.4 percent in CY 2017.

    d. Effects on Medicare Beneficiaries

    Under the ESRD PPS, beneficiaries are responsible for paying 20 percent of the ESRD PPS payment amount. As a result of the projected 0.73 percent overall increase in the ESRD PPS payment amounts in CY 2017, we estimate that there will be an increase in beneficiary co-insurance payments of 4.2 percent in CY 2017, which translates to approximately $10 million.

    e. Alternatives Considered

    In section II.B.2, we finalized a change to the home dialysis training add-on based on the average number of hours for PD and HD and weighted by the percentage of total treatments for each modality. We considered an approach to update the current training add-on amount annually using the market basket increase or the wage and price proxy in the market basket. However, under either approach, the increase to the training add-on payment was small and would not incentivize home dialysis training.

    2. Coverage and Payment for Renal Dialysis Services Furnished to Individuals with AKI

    a. Effects on ESRD Facilities

    We analyzed CY 2015 hospital outpatient claims to identify the number of treatments furnished historically for AKI patients. We identified 8,047 outpatient dialysis treatments for beneficiaries with AKI that were furnished in CY 2015. We then inflated the 8,047 treatments to 2017 values using estimated population growth for fee-for-service non-ESRD beneficiaries. This results in an estimated 8,234 treatments that would now be paid to ESRD facilities for furnishing dialysis to beneficiaries with AKI. Using the CY 2017 ESRD base rate of $231.55 and an average wage index multiplier, we estimate that approximately $2.0 million would now be paid to ESRD facilities for dialysis treatments provided to AKI beneficiaries.
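    The AKI payment estimate above can be approximated as follows; the sketch uses only the treatment count and CY 2017 base rate given in the text and omits the average wage index multiplier, which accounts for the remaining difference from the stated $2.0 million.

        # Rough estimate of CY 2017 payments to ESRD facilities for AKI dialysis.
        estimated_aki_treatments_2017 = 8234
        cy2017_esrd_base_rate = 231.55   # dollars per treatment

        estimated_payments = estimated_aki_treatments_2017 * cy2017_esrd_base_rate
        print(f"${estimated_payments:,.0f}")  # about $1.9 million before the wage index adjustment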

    Ordinarily, we would provide a table showing the impact of this provision on various categories of ESRD facilities. Because we have no way to project how many patients with AKI requiring dialysis will choose to have dialysis treatments at an ESRD facility, we are unable to provide a table at this time.

    We note that in the CY 2017 ESRD PPS proposed rule (81 FR 42870), we stated that we identified 7,155 outpatient claims with AKI that also had dialysis treatments that were furnished in CY 2015. This is an incorrect statement. We should have stated that we identified 7,155 outpatient dialysis treatments for beneficiaries with AKI.

    b. Effects on Other Providers

    Under section 1834(r) of the Act, as added by section 808(b) of TPEA, we are finalizing a payment rate for renal dialysis services furnished by ESRD facilities to beneficiaries with AKI. The only two Medicare providers authorized to provide these outpatient renal dialysis services are hospital outpatient departments and ESRD facilities. The decision about where the renal dialysis services are furnished is made by the patient and their physician. Therefore, this proposal will have zero impact on other Medicare providers.

    c. Effects on the Medicare Program

    We anticipate an estimated $2.0 million being redirected from hospital outpatient departments to ESRD facilities in CY 2017 as a result of some AKI patients receiving renal dialysis services in the ESRD facility at the lower ESRD PPS base rate versus continuing to receive those services in the hospital outpatient setting.

    d. Effects on Medicare Beneficiaries

    Currently, beneficiaries have a 20 percent co-insurance obligation when they receive AKI dialysis in the hospital outpatient setting. When these services are furnished in an ESRD facility, the patients would continue to be responsible for a 20 percent co-insurance. Because the AKI dialysis payment rate paid to ESRD facilities is lower than the Outpatient Prospective Payment System's payment amount, we would expect beneficiaries to pay less co-insurance when AKI dialysis is furnished by ESRD facilities.

    e. Alternatives Considered

    In section III.B.2 of this final rule, we finalize policy related to the implementation of section 808(b) of TPEA, which amended section 1834 of the Act by adding a new paragraph (r) that provides payment for renal dialysis services furnished by ESRD facilities to beneficiaries with AKI. We considered adjusting the AKI payment rate by including the ESRD PPS case-mix adjustments and the other adjustments at section 1881(b)(14)(D) of the Act, as well as not paying separately for AKI-specific drugs and labs. We ultimately determined that treatment for AKI is substantially different from treatment for ESRD and that the case-mix adjustments applied to ESRD patients may not be applicable to AKI patients; as such, including those policies and adjustments would be inappropriate at this time.

    3. End-Stage Renal Disease Quality Incentive Program

    a. Effects of the PY 2020 ESRD QIP

    The ESRD QIP provisions are intended to prevent possible reductions in the quality of ESRD dialysis facility services provided to beneficiaries as a result of payment changes under the ESRD PPS.

    The methodology that we proposed using to determine a facility's TPS for the PY 2020 ESRD QIP is described in sections III.F.6 and III.F.7 of this final rule. Any reductions in ESRD PPS payments as a result of a facility's performance under the PY 2020 ESRD QIP would apply to ESRD PPS payments made to the facility in CY 2020.

    We estimate that, of the total number of dialysis facilities (including those not receiving a TPS), approximately 42 percent or 2,710 of the facilities would likely receive a payment reduction in PY 2020. Facilities that do not receive a TPS are not eligible for a payment reduction.

    In conducting our impact assessment, we have assumed that there will be 6,453 dialysis facilities paid through the PPS. Table 35 shows the overall estimated distribution of payment reductions resulting from the PY 2020 ESRD QIP.

    Table 35—Estimated Distribution of PY 2020 ESRD QIP Payment Reductions

    Payment reduction | Number of facilities | Percent of facilities (%)
    0.0% | 3,311 | 55.0
    0.5% | 1,538 | 25.5
    1.0% | 832 | 13.8
    1.5% | 269 | 4.5
    2.0% | 71 | 1.2

    Note: This table excludes 432 facilities that we estimate will not receive a payment reduction because they will not report enough data to receive a Total Performance Score.

    To estimate whether or not a facility would receive a payment reduction in PY 2020, we scored each facility on achievement and improvement on several measures we have previously finalized and for which there were available data from CROWNWeb and Medicare claims. Measures used for the simulation are shown in Table 36.

    Table 36—Data Used To Estimate PY 2020 ESRD QIP Payment Reductions

    Measure | Period of time used to calculate achievement thresholds, performance standards, benchmarks, and improvement thresholds | Performance period
    Vascular Access Type: %Fistula | Jan 2014-Dec 2014 | Jan 2015-Dec 2015
    Vascular Access Type: %Catheter | Jan 2014-Dec 2014 | Jan 2015-Dec 2015
    Kt/V Composite | Jan 2014-Dec 2014 | Jan 2015-Dec 2015
    Hypercalcemia | Jan 2014-Dec 2014 | Jan 2015-Dec 2015
    Standardized Transfusion Ratio | Jan 2014-Dec 2014 | Jan 2015-Dec 2015
    ICH CAHPS Survey | Jan 2015-Dec 2015 | Jan 2015-Dec 2015
    Standardized Readmission Ratio | Jan 2014-Dec 2014 | Jan 2015-Dec 2015
    NHSN Bloodstream Infection | Jan 2014-Dec 2014 | Jan 2015-Dec 2015
    SHR | Jan 2014-Dec 2014 | Jan 2015-Dec 2015

    Clinical measure topic areas with less than 11 cases for a facility were not included in that facility's Total Performance Score. Each facility's Total Performance Score was compared to an estimated minimum Total Performance Score and an estimated payment reduction table that were consistent with the proposals outlined in section III.G.9 of this final rule. Facility reporting measure scores were estimated using available data from CY 2015. Facilities were required to have a score on at least one clinical and one reporting measure in order to receive a Total Performance Score.

    To estimate the total payment reductions in PY 2020 for each facility resulting from this final rule, we multiplied the total Medicare payments to the facility during the 1-year period between January 2015 and December 2015 by the facility's estimated payment reduction percentage expected under the ESRD QIP, yielding a total payment reduction amount for each facility: (Total ESRD payment in January 2015 through December 2015 times the estimated payment reduction percentage). For PY 2020, the total payment reduction for all of the 2,710 facilities expected to receive a reduction is approximately $32 million ($31,581,441). Further, we estimate that the total costs associated with the collection of information requirements for PY 2020 described in section VIII.1.b of this final rule would be approximately $91 million for all ESRD facilities. As a result, we estimate that ESRD facilities will experience an aggregate impact of approximately $123 million ($91,449,815 + $31,581,441 = $123,031,256) in PY 2020, as a result of the PY 2020 ESRD QIP.
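    The aggregate PY 2020 impact figures above follow from straightforward addition; the sketch below also shows the average reduction per penalized facility that appears later in the regulatory flexibility analysis.

        # Estimated aggregate PY 2020 ESRD QIP impact.
        collection_of_information_cost = 91_449_815   # estimated reporting burden, all facilities
        total_payment_reductions = 31_581_441         # estimated payment reductions, all penalized facilities
        facilities_with_reduction = 2710

        aggregate_impact = collection_of_information_cost + total_payment_reductions   # $123,031,256
        average_reduction = total_payment_reductions / facilities_with_reduction       # about $11,653 per facility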

    Table 37 below shows the estimated impact of the finalized ESRD QIP payment reductions to all ESRD facilities for PY 2020. The table details the distribution of ESRD facilities by facility size (both among facilities considered to be small entities and by number of treatments per facility), geography (both urban/rural and by region), and by facility type (hospital based/freestanding facilities). Given that the time periods used for these calculations will differ from those we proposed to use for the PY 2020 ESRD QIP, the actual impact of the PY 2020 ESRD QIP may vary significantly from the values provided here.

    Lastly, we note that the facilities located in the US Territories that are expected to receive a payment reduction are primarily urban facilities owned by Large Dialysis Organizations, and we confirm that we will work through the ESRD Networks to address quality of care issues at these locations.

    Table 37—Impact of QIP Payment Reductions to ESRD Facilities for PY 2020

    Columns: A = Number of facilities; B = Number of treatments in 2014 (in millions); C = Number of facilities with QIP score; D = Number of facilities expected to receive a payment reduction; E = Payment reduction (percent change in total ESRD payments).

    Facility category | A | B | C | D | E
    All Facilities | 6,453 | 40.0 | 6,021 | 2,710 | −0.35
    Facility Type:
      Freestanding | 6,022 | 37.8 | 5,853 | 2,661 | −0.36
      Hospital-based | 431 | 2.2 | 168 | 49 | −0.22
    Ownership Type:
      Large Dialysis | 4,541 | 28.6 | 4,433 | 2,025 | −0.35
      Regional Chain | 989 | 6.2 | 929 | 344 | −0.27
      Independent | 568 | 3.5 | 536 | 300 | −0.53
      Hospital-based (non-chain) | 354 | 1.8 | 123 | 41 | −0.26
      Unknown | 1 | 0.0 | 0 | 0 |
    Facility Size:
      Large Entities | 5,530 | 34.8 | 5,362 | 2,369 | −0.34
      Small Entities 1 | 922 | 5.2 | 659 | 341 | −0.48
      Unknown | 1 | 0.0 | 0 | 0 |
    Rural Status:
      (1) Yes | 1,260 | 6.0 | 1,146 | 355 | −0.22
      (2) No | 5,193 | 34.0 | 4,875 | 2,355 | −0.38
    Census Region:
      Northeast | 881 | 6.2 | 785 | 362 | −0.35
      Midwest | 1,511 | 7.6 | 1,356 | 593 | −0.34
      South | 2,853 | 18.2 | 2,744 | 1,356 | −0.39
      West | 1,143 | 7.6 | 1,084 | 362 | −0.25
      US Territories 2 | 65 | 0.4 | 52 | 37 | −0.52
    Census Division:
      Unknown | 1 | 0.0 | 0 | 0 |
      East North Central | 1,045 | 5.5 | 951 | 471 | −0.40
      East South Central | 522 | 3.0 | 515 | 209 | −0.32
      Middle Atlantic | 702 | 4.9 | 623 | 317 | −0.40
      Mountain | 368 | 2.0 | 336 | 83 | −0.17
      New England | 182 | 1.3 | 164 | 47 | −0.17
      Pacific | 782 | 5.7 | 753 | 282 | −0.28
      South Atlantic | 1,458 | 9.4 | 1,389 | 771 | −0.44
      West North Central | 469 | 2.1 | 406 | 123 | −0.21
      West South Central | 875 | 5.8 | 841 | 376 | −0.36
      US Territories 2 | 49 | 0.3 | 43 | 31 | −0.53
    Facility Size (number of total treatments):
      Less than 4,000 treatments | 1,211 | 2.7 | 1,006 | 376 | −0.33
      4,000-9,999 treatments | 2,401 | 11.0 | 2,324 | 938 | −0.32
      Over 10,000 treatments | 2,680 | 26.1 | 2,603 | 1,342 | −0.38
      Unknown | 161 | 0.2 | 88 | 54 | −0.60

    1 Small Entities include hospital-based and satellite facilities and non-chain facilities based on DFC self-reported status.
    2 Includes Puerto Rico and Virgin Islands.
    4. DMEPOS Competitive Bidding Bid Surety Bond, State Licensure and Appeals Process for Breach of DMEPOS Competitive Bidding Program Contract Actions

    a. Effects on Competitive Bidding Program Suppliers

    Bid Surety Bonds. It is difficult to estimate the precise financial impact the bid surety bond requirement will have on competitive bidding entities because this type of bond is not currently available. Based on our research of the bond industry, as well as the structure of the existing CMS DMEPOS surety bond requirement for all DMEPOS suppliers, we anticipate that the cost to obtain a bid surety bond will be based on a percentage of the total bond amount. This percentage may be adjusted by the authorized surety based upon certain criteria such as: (1) The number of bid surety bonds purchased by a bidding entity, (2) the credit score of the bidding entity, and (3) the prior contracting experience the bidding entity has had with the DMEPOS CBP, that is, its history of accepting or rejecting contracts.

    For instance, an authorized surety may establish a preliminary charge amount of 2 percent of the total bond amount to obtain a $50,000 bid surety bond. We anticipate that the authorized surety may adjust its charge percentage based on the number of CBAs in which a bidding entity bids, that is, a bulk discount. Bidding entities that purchase multiple bid surety bonds from the authorized surety would likely receive a reduced charge per bid surety bond as compared to a bidding entity that only purchases a single bid surety bond. We also expect that authorized sureties will evaluate each bidding entity's credit score(s) to either establish an appropriate charge percentage or decide not to issue a bond if the bidding entity's credit score is too low. Lastly, we anticipate that an authorized surety may also request documentation from prior rounds of bidding to understand the bidding entity's experience with contract acceptance. Bidding entities that have accepted more contract offers in the prior round without any contract rejections may be viewed by an authorized surety as less risky than a bidding entity that has rejected numerous contract offers with few or no contract acceptances.

    On January 1, 2019, CMS will be combining all CBAs into a consolidated round of competition. As a result, we estimate the aggregate total out-of-pocket cost for bidding entities to bid in this competition to be $13,000,000. This estimate is based upon the approximately 13,000 distinct bidders for CBAs included in both the Round 2 Recompete and Round 1 2017, multiplied by a $1,000 per bid surety bond price. Given the unknown variables with this new type of bond, we sought comments on how the authorized sureties will set the purchase amount for bidding entities in order to finalize a more accurate estimate. We received one comment, which stated that a “surety will review the capabilities and financial strength of the bid surety bond applicants and provide bid surety bonds only to those entities that the surety has determined are capable of performing the underlying obligation”. Overall, in response to the comments, we revised the bid bond amount from $100,000 in the proposed rule to $50,000 in this final rule and assume that the purchase price for a bid surety bond will be approximately $1,000 per CBA. We believe that many variables will affect a bidder's out-of-pocket cost to purchase a bid surety bond(s), and we believe that lowering the bid surety bond amount will in turn lower the overall impact and lessen the burden for bidders.
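    The $13,000,000 estimate above rests on two assumptions stated in the text: roughly 13,000 distinct bidders and an assumed purchase price of about 2 percent of the $50,000 bond amount. A minimal sketch:

        # Assumed aggregate out-of-pocket cost of the bid surety bond requirement.
        bond_amount = 50_000
        assumed_charge_rate = 0.02                 # illustrative surety charge percentage
        price_per_bond = bond_amount * assumed_charge_rate          # about $1,000 per CBA

        distinct_bidders = 13_000                  # Round 2 Recompete plus Round 1 2017 bidders
        aggregate_bidder_cost = distinct_bidders * price_per_bond   # $13,000,000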

    We do anticipate that there will be an impact on small suppliers. We sought comments on whether we should have a reduced bid surety bond amount for a particular subset of suppliers, for example, small suppliers as defined by the CBP. In terms of a small supplier obtaining a bond, the Small Business Administration (SBA) has a statement on its Web site stating that its guarantee “encourages surety companies to bond small businesses,” and as such we anticipate that small suppliers will be able to reach out to the SBA if they encounter difficulty in obtaining a bond. As a result of the implementation of the final rule, we anticipate that this requirement may deter some suppliers from bidding, which would result in a lower number of bids submitted to the DMEPOS CBP.

    State Licensure. Contract suppliers in the CBP are already required to have the proper state licensure in order to be eligible for a contract award. We do not anticipate that conforming the language of the regulation to the language in section 1847(b)(2)(A), as added by section 522(a) of MACRA, will have any additional impact beyond what is already being imposed on suppliers.

    Appeals Process for Breach of DMEPOS Competitive Bidding Program Contract Actions. We believe the expansion of the appeal rights for breach of contract may have a positive impact on contract suppliers by providing the formal opportunity to appeal any of the actions that CMS may take as a result of a breach of contract.

    b. Effects on the Medicare Program

    Bid Surety Bonds. We anticipate that the bid surety bond requirement will result in bidding entities being more conscientious when formulating their bid amounts. In addition, given the already high historic contract acceptance rate exceeding 90 percent per round, we anticipate that the bid surety bond provision will result in an even higher rate of contract acceptance.

    We anticipate that this regulation may deter some bidding entities from bidding, which would result in a lower number of bids submitted to the DMEPOS CBP. This reduction could reduce competition and lead to a decreased number of contract suppliers and, as a result, less savings from the program.

    Additionally, we expect that there will be an administrative burden for implementing the bid surety bond requirement, which includes educating bidding entities, updating CMS bidding and contracting systems, and verifying that the bonds are valid.

    State Licensure. We do not anticipate that conforming the language of the regulation to the language in section 1847(b)(2)(A), as added by section 522(a) of MACRA, will have any additional impact beyond what is already being imposed on suppliers. Therefore, the burden of meeting this statutory requirement has already been estimated in previous regulations and this revision to the regulation does not add to the burden.

    Appeals Process for Breach of DMEPOS Competitive Bidding Program Contract Actions. We expect that there may be some de minimis costs to expand the appeals process. We anticipate that overall this final rule will have a positive impact on the program by allowing suppliers a full appeals process for any breach of contract action that CMS may take pursuant to § 414.422(g)(2).

    c. Effects on Medicare Beneficiaries

    The final CBP requirements for bid surety bond, state licensure and appeals process for breach of contract actions are not expected to have an impact on Medicare beneficiaries.

    d. Alternatives Considered

    Section 1847(a)(1)(G) of the Act, as amended by section 522(a) of MACRA, provides that a bidding entity may not submit a bid for a CBA unless, as of the deadline for bid submission, the entity has (1) obtained a bid surety bond, and (2) provided proof of having obtained the bid surety bond for each CBA associated with its bid(s) in a form specified by the Secretary. No alternatives to this bid surety bond requirement were considered. However, while we proposed that the bid surety bond be in an amount of $100,000, we sought comments on whether a lower bond amount for a certain subset of bidding entities, for example, small suppliers as defined by 42 CFR 414.402, would be appropriate. In finalizing the rule we determined that the bid surety bond will be set at $50,000 for all bidding entities based on comments received. No alternatives were considered for the state licensure requirement, as § 414.414(b)(3) of the regulations already requires suppliers to have all applicable state and local licenses.

    For appeals for breach of contract actions, we believe that it would be beneficial to expand the appeals process to any of the breach of contract actions that CMS may take pursuant to § 414.422(g)(2). The alternative we considered is to retain the current appeals process for terminations, and allow suppliers to appeal other breach of contract actions through an informal sub-regulatory process or a process similar to the existing appeals process. However, in order to provide an opportunity for notice and comment, we believe that the better option is to revise the current regulations to allow for a clear and defined appeals process for any breach of contract action that CMS may take.

    5. Other DMEPOS Provisions

    a. Effects of the Method for Adjusting DMEPOS Fee Schedule Amounts for Similar Items With Different Features Using Information From the DMEPOS Competitive Bidding Programs

    For this final rule, we estimate that the method for adjusting DMEPOS fee schedule amounts for certain groupings of similar items with different features using information from the DMEPOS CBPs will generate small savings by lowering the price of similar items to equal the weighted average of the SPAs for the items based on the item weights assigned under competitive bidding. The reduced price results in lower copayments for the beneficiary. We believe our final policy will also prevent beneficiaries from potentially receiving lower cost items at higher coinsurance rates. Suppliers will be affected little by the methodological change because the final methodology we are adopting generates only small savings.

    b. Effects of the Final Rules Determining Single Payment Amounts for Similar Items With Different Features Under the DMEPOS Competitive Bidding Program

    In this final rule, we estimate that the method for determining single payment amounts for certain groupings of similar items with different features under the DMEPOS CBPs will generate small savings by not allowing SPAs for certain similar items without features to be priced higher than items with features. Our final policy will benefit beneficiaries, who would have lower coinsurance payments as a result of this policy. We also believe this methodology will prevent beneficiaries from potentially receiving lower cost items at higher coinsurance rates. Suppliers will have a reduced administrative burden because bidding is simplified.

    c. Effects of the Revision to the Bid Limits Under the DMEPOS Competitive Bidding Program

    In this final rule, we estimate the bid limits for items under the DMEPOS CBP will not have a significant fiscal impact on the Medicare program because we anticipate little change in Medicare payment due to the revised bid limits. This revision will provide clearer limits. We estimate our revision to the bid limits at the unadjusted fee level would have little fiscal impact in that competitions will continue to reduce prices. This final rule will benefit suppliers and beneficiaries because payments will be allowed to fluctuate somewhat to account for increases in the costs of furnishing items, including newer technology items.

    C. Accounting Statement

    As required by OMB Circular A-4 (available at http://www.whitehouse.gov/omb/circulars_a004_a-4), in Table 38, we have prepared an accounting statement showing the classification of the transfers and costs associated with the various provisions of this final rule.

    Table 38—Accounting Statement: Classification of Estimated Transfers and Costs/Savings

    ESRD PPS and AKI for CY 2017:
      Annualized Monetized Transfers: $80 million (from the Federal government to ESRD providers).
      Increased Beneficiary Co-insurance Payments: $10 million (from beneficiaries to ESRD providers).
    ESRD QIP for PY 2019 (footnote 19):
      Annualized Monetized Transfers: −$15.5 million.
      Annualized Monetized ESRD Provider Costs: $21 thousand.
    ESRD QIP for PY 2020:
      Annualized Monetized Transfers: −$31 million (from the Federal government to ESRD providers).
      Annualized Monetized ESRD Provider Costs: $91 million.
    DME Provisions (2016 dollars):
      Annualized Monetized Transfers on Beneficiary Cost Sharing: −$1.9 million at a 7 percent discount rate; −$1.9 million at a 3 percent discount rate (from beneficiaries to Medicare providers).
      Annualized Monetized Transfer Payments: −$7.5 million at a 7 percent discount rate; −$7.8 million at a 3 percent discount rate (from the Federal government to Medicare providers).

    19 We note that the aggregate impact of the PY 2018 ESRD QIP was included in the CY 2015 ESRD PPS final rule (79 FR 66256 through 66258). The values presented here capture those previously finalized impacts plus the collection of information requirements related for PY 2018 presented in this notice of proposed rulemaking.

    XVII. Regulatory Flexibility Act Analysis

    The Regulatory Flexibility Act (September 19, 1980, Pub. L. 96-354) (RFA) requires agencies to analyze options for regulatory relief of small entities, if a rule has a significant impact on a substantial number of small entities. For purposes of the RFA, small entities include small businesses, nonprofit organizations, and small governmental jurisdictions. Approximately 14 percent of ESRD dialysis facilities are considered small entities according to the Small Business Administration's (SBA) size standards, which classifies small businesses as those dialysis facilities having total revenues of less than $38.5 million in any 1 year. Individuals and States are not included in the definitions of a small entity. For more information on SBA's size standards, see the Small Business Administration's Web site at http://www.sba.gov/content/small-business-size-standards (Kidney Dialysis Centers are listed as 621492 with a size standard of $38.5 million).

    We do not believe ESRD facilities are operated by small government entities such as counties or towns with populations of 50,000 or less; therefore, such entities are not enumerated or included in this estimated RFA analysis.

    For purposes of the RFA, we estimate that approximately 14 percent of ESRD facilities are small entities as that term is used in the RFA (which includes small businesses, nonprofit organizations, and small governmental jurisdictions). This amount is based on the number of ESRD facilities shown in the ownership category in Table 34. Using the definitions in this ownership category, we consider the 578 facilities that are independent and the 358 facilities that are shown as hospital-based to be small entities. The ESRD facilities that are owned and operated by LDOs and regional chains would have total revenues of more than $38.5 million in any year when the total revenues for all locations are combined for each business (individual LDO or regional chain), and are not, therefore, included as small entities.

    For the ESRD PPS updates in this final rule, a hospital-based ESRD facility (as defined by ownership type) is estimated to receive a 0.9 percent increase in payments for CY 2017, and an independent facility (as defined by ownership type) is estimated to receive a 0.7 percent increase in payments for CY 2017.

    We are unable to estimate how many patients will go to ESRD facilities for AKI dialysis; however, we estimate there is a potential for $2.0 million in payments for AKI dialysis treatments that could be furnished in ESRD facilities. As a result, this final rule is not estimated to have a significant impact on small entities.

    We estimate that of the 2,710 ESRD facilities expected to receive a payment reduction in the PY 2020 ESRD QIP, 341 are ESRD small entity facilities. We present these findings in Table 35 (“Estimated Distribution of PY 2020 ESRD QIP Payment Reductions”) and Table 37 (“Impact of Proposed QIP Payment Reductions to ESRD Facilities for PY 2020”) above. We estimate that payment reductions will average approximately $11,653 per facility across the 2,710 facilities receiving a payment reduction, and $13,675.56 for each small entity facility. Using our estimates of facility performance, we also estimated the impact of payment reductions on ESRD small entity facilities by comparing the total estimated payment reductions for 922 small entity facilities with the aggregate ESRD payments to all small entity facilities. We estimate that there are a total of 922 small entity facilities, and that the aggregate ESRD PPS payments to these facilities will decrease 0.48 percent in PY 2020.
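    For illustration only, the aggregate figures above can be cross-checked with simple arithmetic. The sketch below (a hypothetical Python illustration, not CMS's estimation files) multiplies the facility counts by the average per-facility reductions stated in this section.

```python
# Illustrative cross-check using only figures stated in this section.
facilities_with_reduction = 2_710           # facilities expected to receive a PY 2020 QIP reduction
avg_reduction_all = 11_653                  # average estimated reduction per facility (dollars)
small_entity_facilities_with_reduction = 341
avg_reduction_small = 13_675.56             # average estimated reduction per small entity facility (dollars)

total_reduction = facilities_with_reduction * avg_reduction_all
small_entity_reduction = small_entity_facilities_with_reduction * avg_reduction_small

print(f"Approximate total PY 2020 reduction: ${total_reduction / 1e6:.1f} million")
print(f"Approximate small entity share:      ${small_entity_reduction / 1e6:.1f} million")
```

    The roughly $31.6 million total produced by this arithmetic is consistent with the −$31 million PY 2020 ESRD QIP transfer shown in Table 38.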

    We anticipate that the bid surety bond provision will have an impact on all suppliers, including small suppliers; therefore, we requested comments regarding the bid bond amount. No comments were received from small suppliers. The state licensure and appeal of preclusion rules are not expected to have an impact on any supplier.

    We expect that finalizing our proposals for a method for adjusting DMEPOS fee schedule amounts for certain groupings of similar items with different features using information from the DMEPOS CBPs, our final change for submitting bids for a grouping of two or more similar items with different features, our final policy for determining single payment amounts for similar items with different features under the DMEPOS CBPs, and our revision to the bid limits for items under the DMEPOS CBP will not have a significant impact on a substantial number of small suppliers. Although suppliers furnishing items and services outside CBAs do not have to compete for and be awarded contracts in order to continue furnishing these items and services, the fee schedule amounts for these items and services will be more equitable under the policies established in this rule. We believe that these rules will have a positive impact on suppliers because they reduce the burden and the time it takes for suppliers to submit bids and enter data. They will also allow suppliers to furnish necessary items to beneficiaries while receiving reasonable payment.

    Therefore, the Secretary has determined that this final rule would not have a significant economic impact on a substantial number of small entities. We solicited comments on the RFA analysis provided and did not receive comments.

    In addition, section 1102(b) of the Act requires us to prepare a regulatory impact analysis if a rule may have a significant impact on the operations of a substantial number of small rural hospitals. Any such regulatory impact analysis must conform to the provisions of section 604 of the RFA. For purposes of section 1102(b) of the Act, we define a small rural hospital as a hospital that is located outside of a metropolitan statistical area and has fewer than 100 beds. We do not believe this final rule will have a significant impact on operations of a substantial number of small rural hospitals because most dialysis facilities are freestanding. While there are 139 rural hospital-based ESRD facilities, we do not know how many of them are based at hospitals with fewer than 100 beds. However, overall, the 139 rural hospital-based ESRD facilities will experience an estimated 0.1 percent increase in payments. As a result, this final rule is not estimated to have a significant impact on small rural hospitals. Therefore, the Secretary has determined that this final rule would not have a significant impact on the operations of a substantial number of small rural hospitals.

    XVIII. Unfunded Mandates Reform Act Analysis

    Section 202 of the Unfunded Mandates Reform Act of 1995 (UMRA) requires that agencies assess anticipated costs and benefits before issuing any rule whose mandates require spending in any 1 year of $100 million in 1995 dollars, updated annually for inflation. In 2016, that threshold is approximately $146 million. This final rule does not include any mandates that would impose spending costs of $146 million or more on State, local, or Tribal governments in the aggregate, or by the private sector.

    XIX. Federalism Analysis

    Executive Order 13132 on Federalism (August 4, 1999) establishes certain requirements that an agency must meet when it promulgates a proposed rule (and subsequent final rule) that imposes substantial direct requirement costs on State and local governments, preempts State law, or otherwise has Federalism implications. We have reviewed this final rule under the threshold criteria of Executive Order 13132, Federalism, and have determined that it will not have substantial direct effects on the rights, roles, and responsibilities of States, local or Tribal governments.

    XX. Congressional Review Act

    This final rule is subject to the Congressional Review Act provisions of the Small Business Regulatory Enforcement Fairness Act of 1996 (5 U.S.C. 801 et seq.) and has been transmitted to the Congress and the Comptroller General for review.

    In accordance with the provisions of Executive Order 12866, this final rule was reviewed by the Office of Management and Budget.

    List of Subjects 42 CFR Part 413

    Health facilities, Kidney diseases, Medicare, Reporting and recordkeeping requirements.

    42 CFR Part 414

    Administrative practice and procedure, Health facilities, Health professions, Kidney diseases, Medicare, Reporting and recordkeeping requirements.

    42 CFR Part 494

    Conditions for coverage for end-stage renal disease facilities.

    For the reasons set forth in the preamble, the Centers for Medicare & Medicaid Services amends 42 CFR chapter IV as set forth below:

    PART 413—PRINCIPLES OF REASONABLE COST REIMBURSEMENT; PAYMENT FOR END-STAGE RENAL DISEASE SERVICES; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES; PAYMENT FOR ACUTE KIDNEY INJURY DIALYSIS 1. The authority citation for part 413 is revised to read as follows: Authority:

    42 U.S.C. 1302; 42 U.S.C. 1395d(d); 42 U.S.C. 1395f(b); 42 U.S.C. 1395g; 42 U.S.C. 1395l(a), (i), and (n); 42 U.S.C. 1395x(v); 42 U.S.C. 1395hh; 42 U.S.C. 1395rr; 42 U.S.C. 1395tt; 42 U.S.C. 1395ww; sec. 124 of Public Law 106-113, 113 Stat. 1501A- 332; sec. 3201 of Public Law 112-96, 126 Stat. 156; sec. 632 of Public Law 112-240, 126 Stat. 2354; sec. 217 of Public Law 113-93, 129 Stat. 1040; sec. 204 of Public Law 113-295, 128 Stat. 4010; and sec. 808 of Public Law 114-27, 129 Stat. 362.

    2. The heading for part 413 is revised to read as set forth above.
    3. Section 413.194 is amended by revising paragraph (a)(1) to read as follows:
    § 413.194 Appeals.

    (a) * * *

    (1) A facility that disputes the amount of its allowable Medicare bad debts reimbursed by CMS under § 413.89(h)(3) may request review by the contractor or the Provider Reimbursement Review Board (PRRB) in accordance with subpart R to part 405 of this chapter.

    4. Section 413.215 is amended by revising paragraph (b) to read as follows:
    § 413.215 Basis of payment.

    (b) In addition to the per-treatment payment amount, as described in paragraph (a) of this section, the ESRD facility may receive payment for bad debts of Medicare beneficiaries as specified in § 413.89(h)(3).

    5. Add subpart K to part 413 to read as follows:
    Subpart K—Payment for Acute Kidney Injury (AKI) Dialysis
    Sec.
    413.370 Scope.
    413.371 Definition.
    413.372 AKI dialysis payment rate.
    413.373 Other adjustments to the AKI dialysis payment rate.
    413.374 Renal dialysis services included in the AKI dialysis payment rate.
    413.375 Notification of changes in rate-setting methodologies and payment rates.

    Subpart K—Payment for Acute Kidney Injury (AKI) Dialysis
    § 413.370 Scope.

    This subpart implements section 1834(r) of the Act by setting forth the principles and authorities under which CMS is authorized to establish a payment amount for renal dialysis services furnished to beneficiaries with an acute kidney injury in or under the supervision of an ESRD facility that meets the conditions of coverage in part 494 of this chapter and as defined in § 413.171.

    § 413.371 Definition.

    For purposes of this subpart, the following definition applies:

    Individual with acute kidney injury. The term individual with acute kidney injury means an individual who has acute loss of renal function and does not receive renal dialysis services for which payment is made under section 1881(b)(14) of the Act.

    § 413.372 AKI dialysis payment rate.

    The amount of payment for AKI dialysis services shall be the base rate for renal dialysis services determined for such year under section 1881(b)(14), that is, the ESRD base rate as set forth in § 413.220, updated by the ESRD bundled market basket percentage increase factor minus a productivity adjustment as set forth in § 413.196(d)(1), adjusted for wages as set forth in § 413.231, and adjusted by any other amounts deemed appropriate by the Secretary under § 413.373.
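    For illustration only, and not as part of the regulation text, the sketch below restates the order of operations in § 413.372: the ESRD base rate is updated by the market basket increase minus the productivity adjustment, the labor-related portion is wage adjusted, and any adjustment under § 413.373 is applied last. All input values are placeholders, not figures adopted in this final rule.

```python
def aki_payment_rate(esrd_base_rate, market_basket_update, productivity_adj,
                     labor_share, wage_index, other_adjustment=1.0):
    """Illustrative sketch of the Sec. 413.372 calculation (placeholder values only).

    The base rate is updated by the ESRD bundled market basket increase minus the
    productivity adjustment, the labor-related portion is wage adjusted, and any
    additional adjustment under Sec. 413.373 is applied as a final multiplier.
    """
    updated_base = esrd_base_rate * (1 + market_basket_update - productivity_adj)
    wage_adjusted = updated_base * (labor_share * wage_index + (1 - labor_share))
    return wage_adjusted * other_adjustment

# Placeholder inputs for illustration only -- not the CY 2017 final rule values.
print(round(aki_payment_rate(esrd_base_rate=230.00, market_basket_update=0.009,
                             productivity_adj=0.003, labor_share=0.50,
                             wage_index=1.05), 2))  # prints the wage-adjusted rate
```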

    § 413.373 Other adjustments to the AKI dialysis payment rate.

    The payment rate for AKI dialysis may be adjusted by the Secretary (on a budget neutral basis for payments under section 1834(r)) by any other adjustment factor under subparagraph (D) of section 1881(b)(14) of the Act.

    § 413.374 Renal dialysis services included in the AKI dialysis payment rate.

    (a) The AKI dialysis payment rate applies to renal dialysis services (as defined in subparagraph (B) of section 1881(b)(14) of the Act) furnished under Part B by a renal dialysis facility or provider of services paid under section 1881(b)(14) of the Act.

    (b) Other items and services furnished to beneficiaries with AKI that are not considered to be renal dialysis services as defined in § 413.171, but that are related to their dialysis treatment as a result of their AKI, would be separately payable, including drugs, biologicals, laboratory services, and supplies that ESRD facilities are certified to furnish and that would otherwise be furnished to a beneficiary with AKI in a hospital outpatient setting.

    § 413.375 Notification of changes in rate-setting methodologies and payment rates.

    (a) Changes to the methodology for payment for renal dialysis services furnished to beneficiaries with AKI as well as any adjustments to the AKI payment rate other than wage index will be adopted through notice and comment rulemaking.

    (b) Annual updates in the AKI dialysis payment rate as described in § 413.372 that do not include those changes described in paragraph (a) of this section are announced by notice published in the Federal Register without opportunity for public comment.

    (c) Effective for cost reporting periods beginning on or after January 1, 2017, CMS updates the AKI dialysis payment rate on an annual basis.

    PART 414—PAYMENT FOR PART B MEDICAL AND OTHER HEALTH SERVICES 7. The authority citation for part 414 continues to read as follows: Authority:

    Secs. 1102, 1871, and 1881(b)(1) of the Social Security Act (42 U.S.C. 1302, 1395hh, and 1395rr(b)(1)).

    8. Section 414.210 is amended by revising paragraph (g)(6) to read as follows:
    § 414.210 General payment rules.

    (g) * * *

    (6) Adjustments of single payment amounts resulting from price inversions under the DMEPOS Competitive Bidding Program. (i) In situations where a price inversion defined in § 414.402 occurs under the DMEPOS Competitive Bidding Program in a competitive bidding area (CBA) following a competition for a grouping of similar items identified in paragraph (g)(6)(ii) of this section, prior to adjusting the fee schedule amounts under paragraph (g) of this section the single payment amount for each item in the grouping of similar items in the CBA is adjusted to be equal to the weighted average of the single payment amounts for the items in the grouping of similar items in the CBA.

    (ii) The groupings of similar items subject to this rule include—

    (A) Hospital beds (HCPCS codes E0250, E0251, E0255, E0256, E0260, E0261, E0290, E0291, E0292, E0293, E0294, E0295, E0301, E0302, E0303, and E0304).

    (B) Mattresses and overlays (HCPCS codes E0277, E0371, E0372, and E0373).

    (C) Power wheelchairs (HCPCS codes K0813, K0814, K0815, K0816, K0820, K0821, K0822, and K0823).

    (D) Seat lift mechanisms (HCPCS codes E0627 and E0629).

    (E) TENS devices (HCPCS codes E0720 and E0730).

    (F) Walkers (HCPCS codes E0130, E0135, E0141, and E0143).

    (iii) The weight for each item (HCPCS code) used in calculating the weighted average described in paragraph (g)(6)(ii) of this section is equal to the proportion of total nationwide allowed services furnished in calendar year 2012 for the item (HCPCS code) in the grouping of similar items, relative to the total nationwide allowed services furnished in calendar year 2012 for each of the other items (HCPCS codes) in the grouping of similar items.
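    For illustration only (not part of the regulation text), the sketch below shows the weighted-average calculation described in paragraphs (g)(6)(i) and (iii): each code's weight is its share of total CY 2012 nationwide allowed services within the grouping. The single payment amounts and service counts shown are hypothetical; only the HCPCS codes are drawn from the walker grouping above.

```python
# Hypothetical illustration of the Sec. 414.210(g)(6) weighted-average adjustment.
# Keys are HCPCS codes; values are (single payment amount, CY 2012 nationwide allowed services).
grouping = {
    "E0143": (55.00, 800_000),   # illustrative walker code with a feature
    "E0130": (60.00, 200_000),   # illustrative walker code without the feature
}

total_services = sum(services for _, services in grouping.values())
weighted_avg_spa = sum(spa * services for spa, services in grouping.values()) / total_services

# Every code in the grouping is set to the weighted average before the paragraph (g)
# fee schedule adjustments are applied.
adjusted = {code: round(weighted_avg_spa, 2) for code in grouping}
print(adjusted)   # {'E0143': 56.0, 'E0130': 56.0} with these hypothetical inputs
```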

    9. Section 414.402 is amended by revising the definition of “Hearing officer” and adding the definitions of “Bidding entity,” “Price Inversion,” and “Total nationwide allowed services” in alphabetical order to read as follows:
    § 414.402 Definitions.

    Bidding entity means the entity whose legal business name is identified in the “Form A: Business Organization Information” section of the bid.

    Hearing officer means an individual, who was not involved with the CBIC recommendation to take action for a breach of a DMEPOS Competitive Bidding Program contract, who is designated by CMS to review and make an unbiased and independent recommendation when there is an appeal of CMS's initial determination to take action for a breach of a DMEPOS Competitive Bidding Program contract.

    Price inversion means any situation where the following occurs: One item (HCPCS code) in a grouping of similar items (e.g., walkers, enteral infusion pumps, or power wheelchairs) in a product category includes a feature that another, similar item in the same product category does not have (e.g., wheels, alarm, or Group 2 performance); the average of the 2015 fee schedule amounts (or initial, unadjusted fee schedule amounts for subsequent years for new items) for the code with the feature is higher than the average of the 2015 fee schedule amounts for the code without the feature; and, following a competition, the SPA for the code with the feature is lower than the SPA for the code without that feature.

    Total nationwide allowed services means the total number of services allowed for an item furnished in all states, territories, and the District of Columbia where Medicare beneficiaries reside and can receive covered DMEPOS items and services.
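    For illustration only (not part of the regulation text), the definition of "price inversion" above can be read as a two-part test applied to a pair of similar codes, one with the feature and one without: the average 2015 fee schedule amounts must rank the code with the feature higher, while the single payment amounts after the competition rank it lower. The sketch below restates that test; the example amounts are hypothetical.

```python
def is_price_inversion(avg_2015_fee_with_feature, avg_2015_fee_without_feature,
                       spa_with_feature, spa_without_feature):
    """Illustrative restatement of the Sec. 414.402 price inversion definition.

    Assumes the caller has already identified a pair of similar codes in the same
    product category, one with the feature and one without.
    """
    fee_ordering_ok = avg_2015_fee_with_feature > avg_2015_fee_without_feature
    spa_inverted = spa_with_feature < spa_without_feature
    return fee_ordering_ok and spa_inverted

# Hypothetical example: the code with the feature had the higher 2015 fee schedule
# average but received the lower single payment amount after the competition.
print(is_price_inversion(120.00, 100.00, 70.00, 80.00))  # True
```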

    10. Section 414.412 is amended by revising paragraphs (b)(2) and (d) and adding paragraph (h) to read as follows:
    § 414.412 Submission of bids under a competitive bidding program.

    (b) * * *

    (2) The bids submitted for each item in a product category cannot exceed the payment amount that would otherwise apply to the item under subpart C of this part, without the application of § 414.210(g), or subpart D of this part, without the application of § 414.105, or subpart I of this part. The bids submitted for items in accordance with paragraph (d)(2) of this section cannot exceed the weighted average, weighted by total nationwide allowed services, as defined in § 414.402, of the payment amounts that would otherwise apply to the grouping of similar items under subpart C of this part, without the application of § 414.210(g), or subpart D of this part, without the application of § 414.105.

    (d) Separate bids. (1) Except as provided in paragraph (d)(2) of this section, for each product category that a supplier is seeking to furnish under a Competitive Bidding Program, the supplier must submit a separate bid for each item in that product category.

    (2) An exception to paragraph (d)(1) of this section can be made in situations where price inversions defined in § 414.402 have occurred in past competitions for items within groupings of similar items within a product category. In these situations, an alternative method for submitting bids for these combinations of codes may be announced at the time the competition begins. Under this alternative method, the combination of codes for the similar items is the item for bidding purposes, as defined under § 414.402. Suppliers submit bids for the code with the highest total nationwide allowed services for calendar year 2012 (the “lead item”) within the grouping of codes for similar items, and the bids for this code are used to calculate the single payment amounts for this code in accordance with § 414.416(b)(1). The bids for this code would also be used to calculate the single payment amounts for the other codes within the grouping of similar items in accordance with § 414.416(b)(3). For subsequent competitions, the lead item is identified as the code with the highest total nationwide allowed services for the most recent and complete calendar year that precedes the competition. The groupings of similar items subject to this rule include—

    (i) Hospital beds (HCPCS codes E0250, E0251, E0255, E0256, E0260, E0261, E0266, E0265, E0290, E0291, E0292, E0293, E0294, E0295, E0296, E0297, E0301, E0302, E0303, and E0304).

    (ii) Mattresses and overlays (HCPCS codes E0277, E0371, E0372, and E0373).

    (iii) Power wheelchairs (HCPCS codes K0813, K0814, K0815, K0816, K0820, K0821, K0822, K0823, K0824, K0825, K0826, K0827, K0828, and K0829).

    (iv) Seat lift mechanisms (HCPCS codes E0627 and E0629).

    (v) TENS devices (HCPCS codes E0720 and E0730).

    (vi) Walkers (HCPCS codes E0130, E0135, E0140, E0141, E0143, E0144, E0147, E0148, and E0149).

    (h) Requiring bid surety bonds for bidding entities—(1) Bidding requirements. For competitions beginning on or after January 1, 2017, and no later than January 1, 2019, a bidding entity may not submit a bid(s) for a CBA unless it obtains a bid surety bond for the CBA from an authorized surety on the Department of the Treasury's Listing of Certified Companies and provides proof of having obtained the bond by submitting a copy to CMS by the deadline for bid submission.

    (2) Bid surety bond requirements. (i) The bid surety bond issued must include at a minimum:

    (A) The name of the bidding entity as the principal/obligor;

    (B) The name and National Association of Insurance Commissioners number of the authorized surety;

    (C) CMS as the named obligee;

    (D) The conditions of the bond as specified in paragraph (h)(3) of this section;

    (E) The CBA covered by the bond;

    (F) The bond number;

    (G) The date of issuance; and

    (H) The bid bond value of $50,000.00.

    (ii) The bid surety bond must be maintained until it is either collected upon due to forfeiture or the liability is returned for not meeting bid forfeiture conditions.

    (3) Forfeiture of bid surety bond. (i) When a bidding entity is offered a contract for a CBA/product category (“competition”) and its composite bid for the competition is at or below the median composite bid rate for all bidding entities included in the calculation of the single payment amounts within the competition and the bidding entity does not accept the contract offer, its bid surety bond submitted for that CBA will be forfeited and CMS will collect on the bond via Electronic Funds Transfer (EFT) from the respective bonding company. As one bid surety bond is required for each CBA in which the bidding entity is submitting a bid, the failure to accept a contract offer for any product category within the CBA when the entity's bid is at or below the median composite bid rate will result in forfeiture of the bid surety bond for that CBA.

    (ii) Where the bid(s) does not meet the specified forfeiture conditions in paragraph (h)(3)(i) of this section, the bid surety bond liability will be returned within 90 days of the public announcement of contract suppliers for the CBA. CMS will notify the bidding entity that it did not meet the specified forfeiture requirements and the bid surety bond will not be collected by CMS.
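    For illustration only (not part of the regulation text), the forfeiture condition in paragraph (h)(3) turns on two facts: whether the bidding entity's composite bid was at or below the median composite bid rate used to calculate the single payment amounts, and whether the entity accepted the contract offer. The sketch below restates that decision logic; it does not describe any CMS system, and the example figures are hypothetical.

```python
def bond_forfeited(composite_bid, median_composite_bid_rate, accepted_contract_offer):
    """Illustrative Sec. 414.412(h)(3) test: the $50,000 bid surety bond for a CBA is
    forfeited when the composite bid is at or below the median and the offer is declined."""
    return composite_bid <= median_composite_bid_rate and not accepted_contract_offer

# Hypothetical example: bid exactly at the median, contract offer declined -> bond forfeited.
print(bond_forfeited(composite_bid=0.92, median_composite_bid_rate=0.92,
                     accepted_contract_offer=False))  # True
```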

    (4) Penalties. (i) A bidding entity that has been determined to have falsified its bid surety bond may be prohibited from participation in the DMEPOS Competitive Bidding Program for the current round of the Competitive Bidding Program in which it submitted a bid and also from participating in the next round of the Competitive Bidding Program. Offending suppliers will also be referred to the Office of Inspector General and Department of Justice for further investigation.

    (ii) A bidding entity, whose composite bid is at or below the median composite bid rate, that—

    (A) Accepts a contract award; and

    (B) Is found to be in breach of contract for nonperformance of the contract to avoid forfeiture of the bid surety bond will have its contract terminated and will be precluded from participation in the next round of the DMEPOS Competitive Bidding Program.

    11. Section 414.414 is amended by revising paragraph (b)(3) to read as follows:
    § 414.414 Conditions for awarding contracts.

    (b) * * *

    (3) Each supplier must have all State and local licenses required to perform the services identified in the request for bids. CMS may not award a contract to any entity in a CBA unless the entity meets applicable State licensure requirements.

    12. Section 414.416 is amended by adding paragraph (b)(3) to read as follows:
    § 414.416 Determination of competitive bidding payment amounts.

    (b) * * *

    (3) In the case of competitions where bids are submitted for an item that is a combination of codes for similar items within a product category as identified under § 414.412(d)(2), the single payment amount for each code within the combination of codes is equal to the single payment amount for the lead item or code with the highest total nationwide allowed services multiplied by the ratio of the average of the 2015 fee schedule amounts for all areas (i.e., all states, the District of Columbia, Puerto Rico, and the United States Virgin Islands) for the code to the average of the 2015 fee schedule amounts for all areas for the lead item.
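    For illustration only (not part of the regulation text), the sketch below shows the ratio calculation in paragraph (b)(3): each non-lead code's single payment amount equals the lead item's single payment amount multiplied by the ratio of the code's average 2015 fee schedule amount to the lead item's average 2015 fee schedule amount. The code names, fee schedule averages, and lead-item single payment amount are hypothetical.

```python
# Hypothetical illustration of Sec. 414.416(b)(3): single payment amounts for the
# non-lead codes in a grouping are derived from the lead item's single payment amount.
lead_item_spa = 90.00                     # hypothetical SPA for the lead item
avg_2015_fee = {                          # hypothetical average 2015 fee schedule amounts
    "LEAD": 120.00,
    "CODE_B": 100.00,
    "CODE_C": 150.00,
}

spas = {code: round(lead_item_spa * (fee / avg_2015_fee["LEAD"]), 2)
        for code, fee in avg_2015_fee.items()}
print(spas)   # {'LEAD': 90.0, 'CODE_B': 75.0, 'CODE_C': 112.5}
```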

    13. Section 414.422 is amended by revising paragraph (g) to read as follows:
    § 414.422 Terms of contracts.

    (g) Breach of contract. (1) Any deviation from contract requirements, including a failure to comply with governmental agency or licensing organization requirements, constitutes a breach of contract.

    (2) In the event a contract supplier breaches its contract, CMS may take one or more of the following actions, which will be specified in the notice of breach of contract:

    (i) Suspend the contract supplier's contract;

    (ii) Terminate the contract;

    (iii) Preclude the contract supplier from participating in the competitive bidding program; or

    (iv) Avail itself of other remedies allowed by law.

    14. Section 414.423 is revised to read as follows:
    § 414.423 Appeals process for breach of a DMEPOS competitive bidding program contract actions.

    This section implements an appeals process for suppliers that CMS has determined are in breach of their Medicare DMEPOS Competitive Bidding Program contract and where CMS has issued a notice of breach of contract indicating its intent to take action(s) pursuant to § 414.422(g)(2).

    (a) Breach of contract. CMS may take one or more of the actions specified in § 414.422(g)(2) as a result of a supplier's breach of its DMEPOS Competitive Bidding Program contract.

    (b) Notice of breach of contract—(1) CMS notification. If CMS determines a supplier to be in breach of its contract, it will notify the supplier of the breach of contract in a notice of breach of contract.

    (2) Content of the notice of breach of contract. The CMS notice of breach of contract will include the following:

    (i) The details of the breach of contract.

    (ii) The action(s) that CMS is taking as a result of the breach of the contract pursuant to § 414.422(g)(2), and the duration of or timeframe(s) associated with the action(s), if applicable.

    (iii) The right to request a hearing by a CBIC hearing officer and, depending on the nature of the breach, the supplier may also be allowed to submit a corrective action plan (CAP) in lieu of requesting a hearing by a CBIC hearing officer, as specified in paragraph (c)(1)(i) of this section.

    (iv) The address to which the written request for a hearing must be submitted.

    (v) The address to which the CAP must be submitted, if applicable.

    (vi) The effective date of the action(s) that CMS is taking is the date specified by CMS in the notice of breach of contract, or 45 days from the date of the notice of breach of contract unless:

    (A) A timely hearing request has been filed; or

    (B) A CAP has been submitted within 30 days of the date of the notice of breach of contract where CMS allows a supplier to submit a CAP.
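    For illustration only (not part of the regulation text), the effective-date rule in paragraph (b)(2)(vi) can be restated as a simple decision: the breach of contract action takes effect on the date CMS specifies, or 45 days from the date of the notice, unless a timely hearing request is filed or, where CMS allows a CAP, a CAP is submitted within 30 days. The sketch below is a simplified restatement under those assumptions; it is not a CMS tool, and its date handling is a simplification.

```python
from datetime import date, timedelta

def breach_action_effective_date(notice_date, specified_date=None,
                                 timely_hearing_requested=False,
                                 cap_submitted_date=None, cap_allowed=False):
    """Simplified restatement of Sec. 414.423(b)(2)(vi): returns the effective date,
    or None when the action is stayed by a timely hearing request or a timely CAP."""
    if timely_hearing_requested:
        return None  # (A) a timely hearing request has been filed
    if cap_allowed and cap_submitted_date is not None and (
            cap_submitted_date <= notice_date + timedelta(days=30)):
        return None  # (B) a CAP was submitted within 30 days where CMS allows a CAP
    return specified_date or notice_date + timedelta(days=45)

# Hypothetical example: no hearing request and no CAP -> action effective 45 days out.
print(breach_action_effective_date(date(2017, 1, 3)))  # 2017-02-17
```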

    (c) Corrective action plan (CAP)—(1) Option for a CAP. (i) CMS has the option, at its sole discretion, to allow a supplier to submit a written CAP to remedy the deficiencies identified in the notice of breach of contract, including where CMS determines that the delay in the effective date of the breach of contract action(s) caused by allowing a CAP will not cause harm to beneficiaries. CMS will not allow a CAP if the supplier has been excluded from any Federal program, debarred by a Federal agency, or convicted of a healthcare-related crime, or for any other reason determined by CMS.

    (ii) If a supplier chooses not to submit a CAP, if CMS determines that a supplier's CAP is insufficient, or if CMS does not allow the supplier the option to submit a CAP, the supplier may request a hearing on the breach of contract action(s).

    (2) Submission of a CAP. (i) If allowed by CMS, a CAP must be submitted within 30 days from the date on the notice of breach of contract. If the supplier decides not to submit a CAP, the supplier may, within 30 days of the date on the notice, request a hearing by a CBIC hearing officer.

    (ii) Suppliers will have the opportunity to submit a CAP when they are first notified that they have been determined to be in breach of contract. If the CAP is not acceptable to CMS or is not properly implemented, suppliers will receive a subsequent notice of breach of contract. The subsequent notice of breach of contract may, at CMS' discretion, allow the supplier to submit another written CAP pursuant to paragraph (c)(1)(i) of this section.

    (d) The purpose of the CAP. The purpose of the CAP is:

    (1) For the supplier to remedy all of the deficiencies that were identified in the notice of breach of contract.

    (2) To identify the timeframes by which the supplier will implement each of the components of the CAP.

    (e) Review of the CAP. (1) The CBIC will review the CAP. Suppliers may only revise their CAP one time during the review process based on the deficiencies identified by the CBIC. The CBIC will submit a recommendation to CMS for each applicable breach of contract action concerning whether the CAP includes the steps necessary to remedy the contract deficiencies as identified in the notice of breach of contract.

    (2) If CMS accepts the CAP, including the supplier's designated timeframe for its completion, the supplier must provide a follow-up report within 5 days after the supplier has fully implemented the CAP that verifies that all of the deficiencies identified in the CAP have been corrected in accordance with the timeframes accepted by CMS.

    (3) If the supplier does not implement a CAP that was accepted by CMS, or if CMS does not accept the CAP submitted by the supplier, then the supplier will receive a subsequent notice of breach of contract, as specified in paragraph (b) of this section.

    (f) Right to request a hearing by the CBIC Hearing Officer. (1) A supplier who receives a notice of breach of contract (whether an initial notice of breach of contract or a subsequent notice of breach of contract under paragraph (e)(3) of this section) has the right to request a hearing before a CBIC hearing officer who was not involved with the original breach of contract determination.

    (2) A supplier that wishes to appeal the breach of contract action(s) specified in the notice of breach of contract must submit a written request to the CBIC. The request for a hearing must be received by the CBIC within 30 days from the date of the notice of breach of contract.

    (3) A request for hearing must be in writing and submitted by an authorized official of the supplier.

    (4) The appeals process for the Medicare DMEPOS Competitive Bidding Program is not to be used in place of other existing appeals processes that apply to other parts of Medicare.

    (5) If the supplier is given the opportunity to submit a CAP and a CAP is not submitted and the supplier fails to timely request a hearing, the breach of contract action(s) will take effect 45 days from the date of the notice of breach of contract.

    (g) The CBIC Hearing Officer schedules and conducts the hearing. (1) Within 30 days from the receipt of the supplier's timely request for a hearing, the hearing officer will contact the parties to schedule the hearing.

    (2) The hearing may be held in person or by telephone at the parties' request.

    (3) The scheduling notice to the parties must indicate the time and place for the hearing and must be sent to the parties at least 30 days before the date of the hearing.

    (4) The hearing officer may, on his or her own motion, or at the request of a party, change the time and place for the hearing, but must give the parties to the hearing 30 days' notice of the change.

    (5) The hearing officer's scheduling notice must provide the parties to the hearing the following information:

    (i) A description of the hearing procedure.

    (ii) The specific issues to be resolved.

    (iii) A statement that the supplier has the burden to prove it is not in violation of the contract or that the breach of contract action(s) is not appropriate.

    (iv) The opportunity for parties to the hearing to submit additional evidence to support their positions, if requested by the hearing officer.

    (v) A notification that all evidence submitted, both from the supplier and CMS, will be provided in preparation for the hearing to all affected parties at least 15 days prior to the scheduled date of the hearing.

    (h) Burden of proof and evidence submission. (1) The burden of proof is on the Competitive Bidding Program contract supplier to demonstrate to the hearing officer with convincing evidence that it has not breached its contract or that the breach of contract action(s) is not appropriate.

    (2) The supplier's evidence must be submitted with its request for a hearing.

    (3) If the supplier fails to submit its evidence with the request for a hearing, the Medicare DMEPOS supplier is precluded from introducing new evidence later during the hearing process, unless permitted by the hearing officer.

    (4) CMS also has the opportunity to submit evidence to the hearing officer within 10 days of receiving the scheduling notice.

    (5) The hearing officer will share all evidence submitted by the supplier and/or CMS, with all parties to the hearing at least 15 days prior to the scheduled date of the hearing.

    (i) Role of the hearing officer. The hearing officer will conduct a thorough and independent review of the evidence including the information and documentation submitted for the hearing and other information that the hearing officer considers pertinent for the hearing. The role of the hearing officer includes, at a minimum, the following:

    (1) Conduct the hearing and decide the order in which the evidence and the arguments of the parties are presented;

    (2) Determine the rules on admissibility of the evidence;

    (3) Examine the witnesses, in addition to the examinations conducted by CMS and the contract supplier;

    (4) The CBIC may assist CMS in the appeals process including being present at the hearing, testifying as a witness, or performing other, related ministerial duties;

    (5) Determine the rules for requesting documents and other evidence from other parties;

    (6) Ensure a complete record of the hearing is made available to all parties to the hearing;

    (7) Prepare a file of the record of the hearing which includes all evidence submitted as well as any relevant documents identified by the hearing officer and considered as part of the hearing; and

    (8) Comply with all applicable provisions of 42 U.S.C. Title 18 and related provisions of the Act, the applicable regulations issued by the Secretary, and manual instructions issued by CMS.

    (j) Hearing officer recommendation. (1) The hearing officer will issue a written recommendation(s) to CMS within 30 days of the close of the hearing unless an extension has been granted by CMS because the hearing officer has demonstrated that an extension is needed due to the complexity of the matter or heavy workload. In situations where there is more than one breach of contract action presented at the hearing, the hearing officer will issue separate recommendations for each breach of contract action.

    (2) The recommendation(s) will explain the basis and the rationale for the hearing officer's recommendation(s).

    (3) The hearing officer must include with its recommendation(s) the record of the hearing, along with all evidence and documents produced during the hearing.

    (k) CMS' final determination. (1) CMS' review of the hearing officer's recommendation(s) will not allow the supplier to submit new information.

    (2) After reviewing the hearing officer's recommendation(s), CMS' decision(s) will be made within 30 days from the date of receipt of the hearing officer's recommendation(s). In situations where there is more than one breach of contract action presented at the hearing, and the hearing officer issues multiple recommendations, CMS will render separate decisions for each breach of contract action.

    (3) A notice of CMS' decision will be sent to the supplier and the hearing officer. The notice will indicate:

    (i) If any breach of contract action(s) included in the notice of breach of contract, specified in paragraph (b)(1) of this section, still apply and will be effectuated, and

    (ii) The effective date for any breach of contract action specified in paragraph (k)(3)(i) of this section.

    (4) This decision(s) is final and binding.

    (l) Effect of breach of contract action(s)—(1) Effect of contract suspension. (i) For the duration of the contract suspension, none of the locations included in the contract may furnish competitive bid items to beneficiaries within a CBA, and the supplier cannot be reimbursed by Medicare for these items.

    (ii) The supplier must notify all beneficiaries who are receiving rented competitive bid items or competitive bid items on a recurring basis of the suspension of their contract.

    (A) The notice to the beneficiary from the supplier must be provided within 15 days of receipt of the final notice.

    (B) The notice to the beneficiary must inform the beneficiary that they must select a new contract supplier to furnish these items in order for Medicare to pay for these items.

    (2) Effect of contract termination. (i) All locations included in the contract can no longer furnish competitive bid items to beneficiaries within a CBA and the supplier cannot be reimbursed by Medicare for these items after the effective date of the termination.

    (ii) The supplier must notify all beneficiaries, who are receiving rented competitive bid items or competitive bid items received on a recurring basis, of the termination of their contract.

    (A) The notice to the beneficiary from the supplier must be provided within 15 days of receipt of the final notice of termination.

    (B) The notice to the beneficiary must inform the beneficiary that they must select a new contract supplier to furnish these items in order for Medicare to pay for these items.

    (3) Effect of preclusion. A supplier who is precluded will not be allowed to participate in a specific round of the Competitive Bidding Program, which will be identified in the original notice of breach of contract, as specified in paragraph (b)(1) of this section.

    (4) Effect of other remedies allowed by law. If CMS decides to impose other remedies under § 414.422(g)(2)(iv), the details of the remedies will be included in the notice of breach of contract, as specified in paragraph (b)(2) of this section.

    PART 494—CONDITIONS FOR COVERAGE FOR END-STAGE RENAL DISEASE FACILITIES 15. The authority citation for part 494 continues to read as follows: Authority:

    Secs. 1102 and 1871 of the Social Security Act (42 U.S.C. 1302 and 1395hh).

    16. Amend § 494.1 by revising paragraph (a)(3) and adding paragraph (a)(7) to read as follows:
    § 494.1 Basis and Scope.

    (a) * * *

    (3) Section 1861(s)(2)(F) of the Act, which describes “medical and other health services” covered under Medicare to include home dialysis supplies and equipment, self-care home dialysis support services, and institutional dialysis services and supplies; and, for items and services furnished on or after January 1, 2011, renal dialysis services (as defined in section 1881(b)(14)(B)), including such renal dialysis services furnished on or after January 1, 2017, by a renal dialysis facility or provider of services paid under section 1881(b)(14) to an individual with acute kidney injury (as defined in section 1834(r)(2)).

    (7) Section 1861(s)(2)(F) of the Act, which authorizes coverage for renal dialysis services furnished on or after January 1, 2017 by a renal dialysis facility or provider of services currently paid under section 1881(b)(14) of the Act to an individual with AKI.

    Dated: October 24, 2016. Andrew M. Slavitt, Acting Administrator, Centers for Medicare & Medicaid Services. Approved: October 25, 2016. Sylvia M. Burwell, Secretary, Department of Health and Human Services.
    [FR Doc. 2016-26152 Filed 10-28-16; 4:15 pm] BILLING CODE 4120-01-P
    Part IV
    DEPARTMENT OF THE INTERIOR
    National Park Service
    36 CFR Parts 1 and 9
    [NPS-WASO-NRSS-21688; GPO Deposit Account 4311H2]
    RIN 1024-AD78
    General Provisions and Non-Federal Oil and Gas Rights
    AGENCY:

    National Park Service, Interior.

    ACTION:

    Final rule.

    SUMMARY:

    We are updating our service-wide regulations governing the exercise of non-federal oil and gas rights, to improve our ability to protect park resources, values, and visitors from potential impacts associated with nonfederal oil and gas operations located within National Park Service units outside Alaska. The rule also makes the regulations consistent with existing policies and practices, and updates the format to improve clarity and simplify application and compliance for oil and gas operators and our employees.

    DATES:

    This rule is effective December 5, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Edward O. Kassman, Jr., Geologic Resources Division, National Park Service, P.O. Box 25287, Denver, Colorado 80225; [email protected]; (303) 969-2146.

    SUPPLEMENTARY INFORMATION:

    Background Proposed Rule and Public Comment Period

    On October 26, 2015, the National Park Service (NPS) published the proposed rule in the Federal Register (80 FR 65572). The rule was open for public comment for 60 days, until December 28, 2015. The NPS invited comments via mail and the Federal eRulemaking Portal at http://www.regulations.gov.

    At the start of the comment period, the NPS distributed over 1,000 newsletters to non-governmental organizations, individuals, industry groups, Alaska native corporations, and state agencies, primarily the oil and gas regulatory agencies from multiple states (Alaska, Alabama, California, Colorado, Florida, Indiana, Kentucky, Kansas, Louisiana, Mississippi, Montana, New Mexico, North Dakota, Ohio, Oklahoma, Pennsylvania, Tennessee, Texas, Utah, Virginia, West Virginia, Wyoming). These newsletters summarized the proposed rule, the alternatives considered in the related draft environmental impact statement (DEIS), and how the public could comment on the proposed rule and DEIS. In an effort to reach an even broader audience, the NPS hosted a pre-recorded webinar describing the proposed rulemaking. This online webinar solicited public comment on the DEIS and the proposed rule and was open to any member of the public.

    The NPS received 20 comment letters on the proposed rule during the comment period. These included unique comment letters and form letters. Some comment letters received were submitted improperly and not considered. Additionally, many comments were signed by more than one person. NPS counted a letter as a single set of comments, regardless of the number of signatories. A summary of comments and NPS responses is provided below in the section entitled “Summary of and Responses to Public Comments.”

    After considering the public comments and additional review, the NPS made changes in the final rule. These changes are summarized below in the section entitled “Changes in the Final Rule.”

    1978 Regulations

    On December 8, 1978, the NPS promulgated the regulations at 36 CFR part 9, subpart B (43 FR 57825) (1978 Regulations), governing the exercise of non-federal oil and gas rights in units of the National Park System (System units).

    The 1978 Regulations applied to all activities associated with non-federal oil and gas exploration and development inside System unit boundaries where access is on, across, or through federally owned or controlled lands or waters (36 CFR 9.30(a)). Under the 1978 Regulations, an operator utilizing such access must obtain our approval of a plan of operations before commencing non-federal oil and gas operations in a System unit (36 CFR 9.32(b)). This requirement covered exploration, drilling, production, transportation, plugging, and reclamation operations.

    The proposed plan of operations was an operator's blueprint of all intended activities and was our primary means for evaluating the operation's potential adverse impacts on park resources and values. The operator must demonstrate that it is exercising a bona fide property right to non-federal oil and gas located within a System unit (36 CFR 9.36(a)(2)). Generally, the proposed plan of operations must also describe:

    • The proposed operation, including the equipment, methods, and materials to be used in the operation;

    • Access to the site;

    • Mitigation measures that will be implemented to protect NPS resources and values;

    • Environmental conditions in the vicinity of the site;

    • Alternatives to the proposal; and

    • The environmental impacts of the proposed operation (36 CFR 9.36(a)).

    In addition to the proposed plan of operations, and prior to approval, the operator must submit a performance bond to ensure that funds are available to reclaim a site if the operator defaults on its obligations under an approved plan (36 CFR 9.48). In order to make the regulatory process as efficient and transparent as possible, we work collaboratively with operators early in their planning process to provide guidance on information requirements, alternative area of operations locations, and potential mitigation and avoidance measures.

    During our approval process, we coordinate and consult with a variety of state and other federal regulatory agencies to ensure that approval complies with applicable laws, such as the National Environmental Policy Act of 1969, the Endangered Species Act, the National Historic Preservation Act, and the Clean Water Act.

    The 1978 Regulations required that operators conducting non-federal oil and gas operations in System units provide an affidavit that operations planned are in compliance with all applicable state and local laws (36 CFR 9.36(a)(15)). Although state oil and gas regulations may contain provisions designed to protect natural resources (e.g., surface and groundwater), their primary focus is on oil and gas production and protection of associated ownership interests. The purpose and focus of the NPS's regulation of non-federal oil and gas operations is to protect the National Park System's natural and cultural resources and visitor values and safety.

    When the NPS Regional Director has determined that the proposal meets the requirements contained in the regulations and the NPS has completed the required environmental compliance, the Regional Director will approve the plan (36 CFR 9.37). The approved plan is the operator's authorization to conduct its operation in a System unit (36 CFR 9.32(a)).

    During the life of an oil or gas operation in a park, the park manager has the authority to monitor and ensure compliance with the approved plan of operations (36 CFR 9.37(f)). If there is a change in circumstances, the NPS or the operator can make a request to supplement or modify the plan (36 CFR 9.40). The 1978 Regulations authorize us to enforce the terms of the plan, as may be necessary, including suspending operations or revoking plan approval (36 CFR 9.51). The operator may appeal a Regional Director's decision (36 CFR 9.49).

    Authority To Promulgate the Regulations

    The authority to promulgate these regulations is the statute commonly known as the NPS Organic Act (54 U.S.C. 100101 et seq.) as well as other statutes governing the administration of the National Park System. The Organic Act directs the Secretary of the Interior, acting through the NPS, to “promote and regulate the use of the National Park System by means and measures that conform to the fundamental purpose of the System units, which purpose is to conserve the scenery, natural and historic objects, and wild life in the System units and to provide for the enjoyment of the scenery, natural and historic objects, and wild life in such manner and by such means as will leave them unimpaired for the enjoyment of future generations.” The Organic Act also grants the NPS the authority to promulgate regulations “necessary or proper for the use and management of System units.” (54 U.S.C. 100751). This includes the authority to regulate the exercise of non-federal oil and gas rights within park boundaries for the purpose of protecting the resources and values administered by the NPS.

    In addition, the enabling acts for several System units contain specific provisions directing or authorizing us to regulate the exercise of non-federal oil and gas rights. In the authority section of the rule, we list the individual enabling statutes that address non-federal oil and gas rights in those System units.

    Our authority to promulgate regulations that govern the exercise of non-federal oil and gas operations has been recognized as a valid exercise of NPS's Organic Act authority by a U.S. District Court and the United States Court of Appeals for the Fifth Circuit. See Dunn-McCampbell Royalty Interest v. National Park Service, 964 F. Supp. 1125 (S.D. Tex. 1995), and Dunn-McCampbell Royalty Interest v. National Park Service, 630 F.3d 431 (5th Cir. 2011). Courts have also recognized NPS's authority to regulate other non-federal property interests within units of the National Park System. See, e.g., United States v. Vogler, 859 F.2d 638 (9th Cir. 1988), cert. denied, 488 U.S. 1006 (1989); United States v. Garfield County, 122 F. Supp. 2d 1201 (D. Utah 2000). See also Southern Utah Wilderness Alliance v. Bureau of Land Management, 425 F. 3d 735, 746-47 (10th Cir. 2005).

    System units in Alaska would have been subject to the regulations in the proposed rule. As explained in the preamble to the proposal, we relied upon Sturgeon v. Masica, 768 F.3d 1066, 1077-78 (9th Cir. 2014), for the proposition that “because these regulations are generally applicable to System units nationwide and to non-federal interests in those units, they are not `applicable solely to public lands within [units established under ANILCA],' and thus are not affected by section 103(c) of ANILCA.” This Ninth Circuit opinion recently was vacated by the Supreme Court and remanded for further consideration. Sturgeon v. Frost, 136 S.Ct. 1061 (2016). NPS also received several comments stating that application of the proposed rule to nonfederal oil and gas activities on private land would be contrary to section 103(c) of ANILCA. In light of the pending litigation, the applicability of the ANILCA Title XI regulations in 43 CFR part 36, and the lack of current oil and gas development proposals and resource threats, NPS has decided to apply this rule only to operations within System units outside of Alaska. NPS may reconsider this exemption upon receipt of a final decision in the Sturgeon litigation and, if appropriate, may consider Alaska-specific special regulations, which could be included along with the other NPS Alaska regulations in 36 CFR part 13.

    The rule has no effect on the above-referenced regulations at 43 CFR part 36, promulgated by the Department of the Interior in 1986 to implement section 1110(b) of ANILCA, which apply to persons who use lands and waters administered by NPS to conduct activities on, or for access to, non-federal inholdings within Alaska parks.

    A unique provision exists under section 8 of the Big Cypress National Preserve Addition Act of 1988 (Addition Act), codified at 16 U.S.C. 698m-4. In addition to authorizing the Secretary to promulgate rules and regulations specifically for Big Cypress National Preserve, the Addition Act authorized the Secretary to enter into interim agreements with owners of non-Federal oil and gas interests governing the conduct of oil and gas exploration, development, or production activities within the boundary of the Addition. 16 U.S.C. 698m-4(e). Such agreements had been interpreted to obviate the need for operators to propose a plan of operations under the 1978 Regulations for their operations on the Addition lands.

    Consistent with the statute, the present oil and gas operations within the Addition Area had been controlled under the terms of the Agreement Governing The Exercise Of Reserved Oil And Gas Rights Of Collier Enterprises And Barron Collier Company, which is Appendix 6 to the Agreement Among the United States of America, Collier Enterprises, Collier Development Corporation, and Barron Collier Company (May 12, 1988). This rule supersedes Appendix 6.

    Non-Federal Oil and Gas Rights in System Units

    Non-federal oil and gas rights exist within System units in situations where the United States does not own the oil and gas interest, either because:

    • The United States acquired the property from a grantor that did not own the oil and gas interest; or

    • The United States acquired the property from a grantor that reserved the oil and gas interest from the conveyance.

    Non-federal oil and gas interests can be held by individuals; nonprofit organizations; corporations; or state and local governments. Interests in non-federal oil and gas are property rights that may only be taken for public use with payment of just compensation in accordance with the Fifth Amendment of the U.S. Constitution.

    Accordingly, from their initial promulgation, the 1978 Regulations at 36 CFR 9.30(a) have stated that they are “not intended to result in the taking of a property interest, but rather to impose reasonable regulations on activities that involve and affect federally owned lands.” This rule includes this same provision.

    There are currently 534 non-federal oil and gas operations in a total of 12 System units. These units are: Alibates Flint Quarries National Monument, Texas (5 operations); Aztec Ruins National Monument, New Mexico (4 operations); Big Cypress National Preserve, Florida (20 operations); Big Thicket National Preserve, Texas (39 operations); Big South Fork National River and Recreation Area, Tennessee/Kentucky (152 operations); Cumberland Gap National Historical Park, Tennessee (2 operations); Cuyahoga Valley National Park, Ohio (90 operations); Gauley River National Recreation Area, West Virginia (28 operations); Lake Meredith National Recreation Area, Texas (174 operations); New River Gorge National River, West Virginia (1 operation); Obed Wild and Scenic River, Tennessee (5 operations); and Padre Island National Seashore, Texas (14 operations).

    Based on the presence of split estates, exploration and production occurring on adjacent or nearby lands, and likely increases in energy prices, NPS expects that future non-federal oil and gas operations within park boundaries could occur in up to 30 additional System units.

    Summary of Potential Impacts From Oil and Gas Operations on NPS Resources and Values

    Examples of non-federal oil and gas activities conducted in System units include geophysical (seismic) exploration; exploratory well drilling; field development well drilling; oil and gas well production operations, including installation and operation of well flowlines and gathering lines; well plugging and abandonment; and site reclamation.

    Such oil and gas activities may adversely impact System unit resources in various ways:

    • Surface water quality degradation from spills, storm water runoff, erosion, and sedimentation. Through site inspections the NPS has documented 26 instances of in-park operation sites with surface contamination;

    • Soil and ground water contamination from existing drilling mud pits, poorly constructed wells, spills, and leaks. Through site inspections the NPS has documented 47 instances of sites with wellhead leaks, pump jack leaks, tank battery leaks, and operations and maintenance spills;

    • Air quality degradation from dust, natural gas flaring, hydrogen sulfide gas, and emissions from production operations and vehicles. Through site inspections the NPS has documented 14 instances of notable odors emanating from the wellhead;

    • Noise from seismic operations, blasting, construction, oil and gas drilling and production operations. Through site inspections the NPS has documented 6 instances of excess noise issues from well pad equipment;

    • Noise and human presence effects on wildlife behavior, breeding, and habitat utilization;

    • Disruption of wildlife migration routes;

    • Adverse effects on sensitive and endangered species. Through site inspections the NPS has documented 15 sites with sensitive species or habitat;

    • Viewshed intrusion by roads, traffic, drilling equipment, production equipment, pipelines, etc.;

    • Night sky intrusion from artificial lighting and gas flares;

    • Disturbance to archeological and cultural resources from blasting associated with seismic exploration and road/site preparation, maintenance activities, or by spills. Through site inspections the NPS has documented 6 sites with associated cultural resources; and

    • Visitor safety hazards from equipment, pressurized vessels and lines, presence of hydrogen sulfide gas, and leaking oil and gas that can create explosion and fire hazards. Through site inspections the NPS has documented 62 instances of visitor safety hazards.

    Examples of documented impacts can be found in many parks. For example, at Big South Fork, natural-gas-fired pump jack engines can be heard at visitor overlooks that are 2 to 3 miles away. Simple mitigation such as a corrugated steel fence around the operations would abate this impact; however, due to the well's grandfathered status, the NPS has been unable to require this mitigation and is therefore forced to accept this adverse impact.

    Another example of avoidable impacts was found at Aztec Ruins National Monument, where an operation that was exempt from the 1978 Regulations under the grandfather exemption included a road that traversed an unexcavated archeological site. Only when this well lost its grandfathered status due to a change of operator was the NPS able to require the new operator to conduct a cultural resource survey to determine the impacts to the site. As mitigation, the operator installed a layer of dirt between the archeological site and the road base to protect the resources.

    Final Rule

    Summary of Final Rule

    The summary below details the significant differences between the 1978 Regulations and this final rule. As appropriate, this summary also briefly describes the reasons changes were made to this rule as a result of public comments received.

    Purpose and Scope of the Regulation

    Interests Protected Under These Regulations

    After careful review we have found that the 1978 Regulations were inconsistent in their description of the interests that the regulations were designed to protect. This rule at § 9.30(a) and throughout consistently states that the purpose of the regulations is to protect federally owned or administered lands, waters, or resources of System units, visitor uses or experiences, and visitor or employee health and safety. The NPS evaluates operators' proposals on a case-by-case basis and applies avoidance and mitigation measures and requires financial assurance amounts to the extent necessary to protect the interests described above. Depending on the type of activity proposed, environmental factors, visitor use patterns, and land ownership status (activity either on federal or non-federal lands), the NPS will adjust its avoidance and mitigation measures and financial assurance amounts accordingly.

    This rule replaces the phrase “federally owned or controlled” with the phrase “federally owned or administered” to be consistent with the terminology we use in our general regulations, at 36 CFR 1.2, and 36 CFR 1.4(a) (definition of “National Park System”).

    Operators Subject to the Regulation

    Under § 9.30(a) of the 1978 Regulations, application of the rule was predicated on “access on, across, or through federally owned or controlled lands or waters.” This rule at § 9.30(b) applies to all operators conducting non-federal oil or gas operations on lands or waters within a System unit, regardless of the ownership or legislative jurisdictional status of those lands or waters.

    Reasonable Regulation of Non-Federal Oil and Gas Rights

    Section 9.30(c) of this rule retains language from § 9.30(a) of the 1978 Regulations stating that the intention of this subpart is to reasonably regulate non-federal oil and gas activities in a System unit, but not to result in a taking of private property. Although the NPS has required mitigation measures on proposed operations, we have never, in the more than 37 years of applying this subpart, failed to approve a plan of operations. We will continue to work with operators to ensure they have reasonable access to their oil and gas rights while protecting park resources and values without resulting in a taking in violation of the Fifth Amendment of the United States Constitution.

    Scope of the Regulations

    Section 9.31(a) of this rule changes the scope to cover all non-federal oil and gas operations within the boundary of a System unit outside of Alaska. Section 9.31(b) of this rule also covers those operations that become located within a System unit either by statutory boundary expansion or by establishment of a new System unit. Section 9.31(c) of this rule covers those operations that access oil and gas rights from a surface location that was outside the park boundary but, due to a boundary expansion or establishment of a new unit, is now within a System unit. Under § 9.31(b) and (c), such operations follow the same requirements and procedures as those for previously exempt operations at §§ 9.50 through 9.53 of this rule.

    Type of Authorization Required

    Section 9.32(a) of this rule provides that an operator must obtain either a temporary access permit before conducting reconnaissance surveys on NPS administered lands or an operations permit before conducting operations in a System unit.

    Demonstration of Valid Existing Right

    The 1978 Regulations contained a requirement that operators demonstrate that they hold valid rights to conduct activities under the plan of operations information requirements. This rule moves this requirement to § 9.32(b) to clarify that all operators must demonstrate up front that they hold a valid existing right to conduct operations in a System unit. Until an operator can demonstrate a valid existing right to conduct all operations described in its operations permit application, we will not undertake formal review of an operator's operations permit application.

    Definitions

    This rule deletes several redundant definitions because the terms are defined at 36 CFR 1.4. The definitions being deleted from the 1978 Regulation are: “Secretary” (former § 9.31(a)), “Director” (former § 9.31(b)), “Person” (former § 9.31(e)), and “Superintendent” (former § 9.31(f)). This rule also deletes two definitions that are no longer used: “Commercial Vehicle” (former § 9.31(g)) and “Statement for Management” (former § 9.31(o)).

    This rule adds a new term, “Area of Operations,” to replace the term “Site,” at former § 9.31(m). The new term means all areas where an operator is authorized to conduct its activities, including access to the operations site.

    This rule expands the definition of “Contaminating Substances,” at former § 9.31(n), to include other toxic or hazardous substances. This definition no longer uses the term “waste,” and the rule includes a separate definition of “waste.”

    This rule deletes the term “Unit” and instead the text of the rule uses the statutory term “System unit,” which is defined at 54 U.S.C. 100102(6).

    This rule changes the definition of “Operations” at § 9.31(c) of the 1978 Regulation, to clarify that “access” includes “any means of ingress to or egress from an area of operations.” This change covers any and all types of access, including access via aircraft (time, place, and manner of aircraft landing on or taking off) to an area of operations. Accordingly, the NPS removed former § 9.32(c), which regulated aircraft access.

    The definition of “Operations” under this rule also clarifies that the operation of a flowline or a gathering line is included within this definition, but not the installation, operation, or maintenance of trans-park oil and gas pipelines that are under authority of a deeded easement or other right-of-way and which are not covered by this regulation.

    This rule adds a new term “Operations permit” as the permitting instrument for all operations. An operations permit is a special use permit subject to cost recovery under 54 U.S.C. 103104, which authorizes the NPS to recover all costs of providing necessary services associated with special use permits.

    This rule updates the definition of “Operator” at § 9.31(d) of the 1978 Regulations by clarifying that responsibilities and liability under this subpart can attach to the operator or the operator's agents, assignees, designees, lessees, or representatives.

    This rule defines “owner” as a “person” (the definition of “person” is found at 36 CFR 1.4).

    This rule adds a new definition of “Previously exempt operation” to clarify which types of operations are covered under §§ 9.50 through 9.53. This definition does not include those operations where the operator was granted an exemption under § 9.32(e) of the 1978 Regulations to the plan of operations requirement by the NPS because it accessed oil and gas rights inside the park boundary from a surface location outside the park boundary (which are covered by § 9.33(b) of this rule).

    This rule adds a new term “Reconnaissance survey” to clarify that reconnaissance surveys do not include surface disturbance activities, except the minimal disturbance necessary to perform the surveys.

    This rule adds a new term “Right to operate” that incorporates much of the language in § 9.36(a)(2) of the 1978 Regulations (right to operate description for a Plan of Operations). This new definition clarifies that an operator's documentation must demonstrate that all proposed activities are within the scope of that right.

    This rule adds a new term “Technologically feasible, least damaging methods” to describe the general standard that all operators must satisfy when meeting applicable operating standards.

    This rule adds a new term “Temporary access permit” to clarify that the NPS grants temporary access only for reconnaissance surveys and to collect basic information necessary to prepare a permit application.

    This rule adds a new term “Third-party monitor” to identify a third-party monitor's necessary qualifications.

    This rule adds a new term “Usable water” to describe the criteria that the NPS uses to identify protected sources of groundwater.

    This rule adds a new term “Waste” to differentiate between “waste” and “contaminating substances.” Further, the NPS changed the definition of Waste from the proposed rule by replacing the term “toxic or hazardous substance” with the phrase “contaminating substance” to more clearly explain the differences between wastes and contaminating substances.

    This rule adds a new set of terms “We and us” to refer to the National Park Service.

    This rule adds a definition of “You” to be consistent with the plain language format of this subpart.

    Commercial Vehicles

    This rule deletes former § 9.32(d). This access is controlled by NPS commercial vehicle regulations at 36 CFR 5.6(c).

    Previously Exempt Operations

    This rule creates a new section “Previously Exempt Operations” to describe the process for bringing operations that were exempt under the 1978 Regulations into compliance with the requirements of this rule. These include operations that do not require access on, across, or through federal lands (former § 9.30) and grandfathered operations (former § 9.33).

    The 1978 Regulations applied only when an operator's “access [was] on, across, or through federally owned or controlled lands or waters.” Seventy-eight current operations (15% of all oil and gas operations in System units) did not require access on, across, or through federally owned or controlled lands or waters and thus were not covered by the 1978 Regulations. These operators were not required to obtain an approved NPS plan of operations, post financial assurance, or otherwise comply with this subpart to protect park resources and values. However, our experience over the past three decades has demonstrated that these operations have the potential to adversely affect NPS resources, values, and visitor health and safety. The NPS identified at least 10 instances of previously exempt sites with oil spills or leaks resulting in contamination of soils and water.

    Under this rule at §§ 9.30 through 9.33, all operators conducting operations within NPS boundaries are subject to permit requirements. The permitting process includes an evaluation to determine whether, and the extent to which, such operations would have an adverse effect on federally owned or administered lands, waters, or resources of System units, visitor uses or experiences, or visitor or employee health and safety. These operations are also subject to measures to mitigate such adverse effects, as well as to the financial assurance and reclamation requirements.

    Under § 9.33 of the 1978 Regulations, operators who were conducting operations at the time the regulations became effective (January 8, 1979) and who had already obtained any valid federal or state permit were “grandfathered.” These operators were not required to obtain an approved plan of operations; comply with NPS operating standards, including reclamation of their area of operations to NPS standards; or post a reclamation bond. The Superintendent had authority under § 9.33(c) of the 1978 Regulations to suspend grandfathered operations if there was an “immediate threat of significant injury to federally owned or controlled lands or waters.” Under § 9.33(a)(1) of the 1978 Regulations, when the existing federal or state permit expired and was replaced with a new permit, a plan of operations would then be required.

    In 1978, the NPS had expected that over time the permits associated with these operations would expire and that the operators would then be required to come into compliance with the 1978 Regulations. However, the rate of permit expiration has been much slower than anticipated. This has resulted in approximately 45% of operations (241 wells service-wide) remaining exempt from the regulations despite the passage of over thirty-seven years. As discussed above, this has resulted in readily avoidable impacts to NPS-administered resources and values. The grandfather exemption was intended to provide for a “smooth and fair phase in of [the 1978] regulations.” (43 FR 57822) This rulemaking is intended to ensure that all operations within System units are conducted in a manner that protects park resources and values. This rule in §§ 9.50 through 9.53 sets forth the procedure for bringing previously exempt operations into compliance.

    Temporary Access

    This rule requires an operator to obtain a temporary access permit in order to conduct reconnaissance surveys on NPS administered lands and waters and removes provisions from the 1978 Regulations that allowed the NPS to authorize temporary access for existing operations and for new operations. Those provisions are no longer necessary because operations within the boundary of a System unit are required to obtain an Operations Permit. This rule identifies at §§ 9.60 through 9.63 the procedure for obtaining a temporary access permit and what information is necessary for the NPS to evaluate an operator's proposal. No comments were received on this provision of the proposed rule.

    Accessing Oil and Gas Rights From a Surface Location Outside The Park Boundary

    Section 9.32(e) of the 1978 Regulations allowed operators to apply for an exemption from the regulations if they directionally drilled from a surface location outside a System unit to reach a bottom hole located within NPS boundaries and the drillbore passed under any land or water the surface of which was owned by the United States. This exemption was available if operations within the park boundary posed no significant threat of damage to NPS resources, both surface and subsurface, resulting from surface subsidence, fracture of geological formations with resultant fresh water aquifer contamination, or natural gas escape. Surface activities located outside the NPS boundary were not within the scope of the 1978 Regulations. Under this regulation, regulatory authority over these operations is exercised beginning at the subsurface point where the proposed operation (borehole) crosses the park boundary, and applies to all infrastructure and activities within the System unit regardless of the ownership of the surface estate. NPS will review your proposed operations and provide an exemption from the operations permit requirement whenever it determines that your downhole operations within the park boundary do not pose a significant threat to park resources or park visitors. For further guidance on applying for an exemption for such operations, please see the 9B Operator's Handbook.

    The availability of the exemption is intended to continue to provide an incentive for operators to locate surface facilities outside a System unit. Location of operations outside a System unit generally avoids direct impacts to NPS resources and visitors. Therefore, this rule at § 9.70 is consistent with the concepts that underlay the former rule exemption, but operators are subject to the General Terms and Conditions and the Prohibitions and Penalties provisions for operations located within the boundary of a System unit.

    Operations Permit Application

    This rule details the information requirements that an operator must satisfy when submitting a complete Operations Permit application. These requirements are separated into the following categories: § 9.83, information that must be included in all applications; § 9.87, additional information that must be included for proposed geophysical exploration; § 9.88, additional information that must be included for proposed drilling operations; § 9.89, additional information that must be included for proposed well stimulation operations, including hydraulic fracturing; and § 9.90, additional information that must be included for proposed production operations.

    Additions to and Clarification of Existing Information Requirements

    This rule contains the following new or updated information requirements from the 1978 Regulations for all operations permit applications:

    Contact Information—Section 9.83 of the 1978 Regulations limited identification of an operation's key personnel to the operator, owners, and lessees. To ensure that the NPS has all appropriate contact information, § 9.83(b) of this rule requires that operators also identify agents, assignees, designees, contractors, and other representatives.

    Use of Water—Section 9.83(e) of this rule clarifies and expands upon § 9.36(a)(5) of the 1978 Regulations. Section 9.83(e) requires information regarding the source, transportation method, and quantity of water to be used, in addition to how the operator will manage waste water.

    New Surface Disturbance and Construction—Section 9.84 of this rule requires an operator to specify site security measures and an operation's power sources and transmission systems.

    • The NPS has updated language from the proposed rule at § 9.84(a)(2) to add “wetlands, seepage areas, springs, shallow water aquifers, . . .” to the example list of natural features.

    Environmental Conditions and Mitigation Actions—Section 9.85(a) of this rule has been updated from the proposed rule to clarify that natural resource conditions include baseline soil and water testing (e.g., use of photoionization detectors, conductivity meters, or titration strips) within an operator's area of operation. Further, § 9.85(b) of this rule requires an operator to describe steps proposed to mitigate adverse environmental impacts and list and discuss the impacts that cannot be mitigated. Operators are required to consider and describe all alternative technologically feasible, least damaging methods. Technologically feasible, least damaging alternatives are defined in § 9.31 as those alternatives that are viable (based on economic, environmental, and technological considerations) and conform to federal, state, and local laws and regulations.

    Cultural Resources—In this rule, the NPS eliminates § 9.47(a) of the 1978 Regulations, “Cultural Resource Protection,” because the section merely summarized the requirements of the Antiquities Act (54 U.S.C. 320301 et seq.). Restating those statutory requirements in this rule is unnecessary, and the reference in the 1978 Regulations failed to include other statutes that also applied to such resources.

    Spill Control and Emergency Preparedness Plan—Section 9.86 of this rule consolidates various provisions of the 1978 Regulations, includes a requirement that an operator must submit a Spill Control and Emergency Preparedness Plan (SCEPP) to the NPS, and identifies the information necessary for a SCEPP. The NPS has made nonsubstantive changes to the proposed rule so the term “Spill control and emergency preparedness plan” is used consistently throughout the final rule.

    This rule at § 9.87 clarifies the additional information a geophysical operator must submit to the NPS. Furthermore, this rule at §§ 9.88 through 9.90 clarifies the additional information an operator must submit if it is proposing to drill, stimulate, or produce a well. The final rule adds language to §§ 9.88 and 9.89 of the proposed rule to include any proposed stimulation technique including hydraulic fracturing.

    This rule also contains, at § 9.89, a new set of information requirements for well stimulation, including hydraulic fracturing operations. Information requirements include identifying the geologic barriers between the target zone and the deepest usable water zone, verifying mechanical integrity of the wellbore, and describing water use and disposal management of flowback fluids. The NPS rule is similar to BLM's hydraulic fracturing information requirements at 43 CFR 3162.3-3(d)(1) through (7), which BLM recently promulgated under various authorities, including the Mineral Leasing Act, 30 U.S.C. 189, and the Federal Land Policy and Management Act, 43 U.S.C. 1701 et seq. As previously discussed, that rule has not gone into effect, and is the subject of litigation. Regardless of BLM's authorities under the statutes it implements, we have determined, as discussed below, that the limited information and reporting requirements and performance standards for well stimulation activities under this rule are consistent with the Secretary's regulatory authority under the Organic Act. Additionally, since 2006 NPS has provided specific guidance on means to ensure that well integrity standards are met in its 9B Operator's Handbook.

    Operations Permit: Application Review Process

    Section 9.37(a)(1) of the 1978 Regulations required that, before approving a plan of operations, the Regional Director determine that the operator uses technologically feasible, least damaging methods that provide for protection of the park's resources and public health and safety.

    The 1978 Regulations had two different approval standards, depending on whether the operation was proposed on non-federally or federally owned surface. For operations proposed on non-federally owned surface, a Regional Director could not approve an operation that would constitute a nuisance to federal lands or waters in the vicinity of the operations, or would significantly injure federally owned or controlled lands or waters. For operations proposed on federally owned surface, a Regional Director could not approve an operation that would substantially interfere with management of the unit to ensure the preservation of its natural and ecological integrity in perpetuity, or would significantly injure federally owned or controlled lands or waters. If applying the standard for operations proposed on federally owned lands would have constituted a taking of a property interest, the NPS could have either approved the operations, provided the operator used technologically feasible, least damaging methods, or acquired the mineral interest.

    Section 9.37(b) and (c) of the 1978 Regulations required the NPS to make a decision on the plan of operations within 60 days after the date that the NPS determined that the materials submitted under the plan were adequate. Within those 60 days, the Regional Director was required to make one of six final decisions in writing: approval or rejection; conditional approval; notification that modification to the plan or additional information was required; that more time was necessary to complete review; that an environmental statement was required before approval; or that more time was necessary for public participation and analysis of public comments.

    Section 9.37(c) of the 1978 Regulations provided that failure of the NPS to make a final decision within 60 days constituted a rejection of the plan, which the operator could immediately appeal to the Regional Director under former § 9.49.

    This rule establishes a two-stage permit application review process, eliminates the dual approval standards, provides more realistic timeframes to provide notice back to an operator, and consolidates the final decisions the NPS can make on an operator's permit application.

    Stage One: Initial Review

    Section 9.101 of this rule describes the NPS's initial review of an operator's permit application. During initial review the NPS determines whether the applicant has supplied all information necessary for the NPS to evaluate the operation's potential impacts on federally owned or administered lands, waters, or resources of System units, visitor uses or experiences, or visitor or employee health and safety. The NPS will respond to applicants in writing within 30 days and notify them whether the information contained in their permit applications is complete. If the NPS needs more time to complete the initial review, the NPS will provide the applicant with an estimate of the amount of additional time reasonably needed and an explanation for the delay. Once a permit application is complete the NPS conducts a formal review.

    Stage Two: Formal Review

    During formal review under § 9.102, the NPS evaluates whether the proposed operation meets the NPS approval standards (§ 9.103) and complies with applicable federal statutes (e.g., National Environmental Policy Act (NEPA), Endangered Species Act (ESA), and National Historic Preservation Act (NHPA)).

    Timeframe for Final Action

    In light of NPS experience over the past 37 years in implementing the 1978 Regulations, the 60-day period for reaching a final decision on a permit application has proven to be unrealistic. These decisions require time to adequately analyze an operator's proposal, work with the operator on a design that incorporates acceptable avoidance and mitigation measures, and comply with the associated statutory responsibilities such as NEPA, ESA, and NHPA. These regulations provide operators with realistic expectations of the timeframe necessary to process operations permits. Similarly, the NPS has taken into account time frames for its coordination with other federal and state agencies. Thus, § 9.104 allows the NPS to complete its legal compliance responsibilities and then take final action on the operations permit within 30 days. This rule allows for a longer period of time, if the parties agree to it, or if the NPS determines that it needs more time to comply with applicable legal requirements.

    This rule removes § 9.37(c) of the 1978 Regulations, which allowed an operator to immediately appeal the failure to reach a decision within 60 days. This rule, at § 9.104, authorizes the Superintendent to notify the operator in writing that additional time is necessary to make a final decision.

    Elimination of Dual Approval Standards

    Section 9.103 replaces the dual approval standards under the 1978 Regulations with a single three-part approval standard that applies to all operations, regardless of surface ownership. Oil and gas operations located on non-federally owned surface have the potential to impact federally owned or administered lands, waters, or resources of System units, visitor uses or experiences, or visitor or employee health and safety to the same degree as operations sited on federally owned surface.

    Section 9.103(a) of the proposed rule has been changed in two ways. First, in response to comment the NPS changed the introductory language to expressly provide that if an operator meets the approval standards, the Regional Director will approve the operation permit. Second, this section lists two (rather than three) determinations that the Regional Director must make in order to approve an operations permit. The NPS clarified the language in § 9.103(a)(1) to include statutes that may apply to operations in particular System units. The NPS also removed language in paragraph (b)(3) in the proposed rule that required the Regional Director to make a “determination” that an operator was in compliance with all other applicable federal, state, and local laws. Rather, as a prerequisite to approval of an operations permit, the modified language requires that the operator provide the Regional Director with an affidavit stating that it is in compliance with all applicable federal, state, and local laws.

    Thus, revised § 9.103(b) requires three prerequisites for final approval: (1) Submittal of adequate financial assurance, (2) proof of adequate liability insurance, and (3) an affidavit stating that the operations planned are in compliance with all applicable Federal, State, and local laws and regulations.

    Final Actions

    Section 9.104 of this rule establishes two final actions: (1) approval, with or without conditions, or (2) denial, with the justification for the denial. The Regional Director will notify the operator in writing of the final action. If approved, this written notification constitutes the NPS's authorization to conduct activities. The NPS has simplified the language at § 9.104(a)(2) to read “all applicable legal requirements.”

    The NPS has eliminated the proviso in the approval standard at § 9.37(a)(3) of the 1978 Regulations, which allows for approval using only the “technologically feasible, least damaging methods” standard of § 9.37(a)(1) if application of the more stringent § 9.37(a)(3) standard would cause a taking of a property interest. Over the past 37 years of implementing the 1978 Regulations, the NPS has never invoked this exception. In every instance, the NPS has been able to authorize operators' access while protecting park resources and values. Section 9.30(c) continues the 1978 regulatory statement that application of the regulations is not intended to result in a taking of mineral rights, and § 9.104(b)(2) requires that any denial of an operations permit be consistent with that provision. This change from the 1978 Regulations is not intended or expected to authorize any taking of property rights, and is intended solely to simplify the approval standards and avoid redundancy and confusion. The NPS will continue to work with operators to help plan and design their operations in a way that meets NPS operating standards and other applicable provisions of these regulations.

    Compliance With Big Cypress National Preserve Addition Act

    The Addition Act, 16 U.S.C. 698m-4, directs the NPS to promulgate rules and regulations governing the exploration for and development and production of nonfederal oil and gas interests within the Big Cypress National Preserve and Addition Area.

    Accordingly, § 9.105 of this rule describes the procedure for initial review of a proposed operation in Big Cypress National Preserve. This procedure differs slightly from the service-wide procedure described in §§ 9.101 and 9.102. The NPS's service-wide rule incorporates the 30-day initial review period from the Addition Act. However, the Addition Act at 16 U.S.C. 698m-4(b)(2)(C) places a limit on the amount of collaboration that can occur between the NPS and the operator. Under this provision, there is no mechanism for the NPS to require further information from an operator after the NPS has made its initial request for additional information. After making such a request, the NPS's only options are to approve or deny the application. This procedure could conceivably result in denial of applications that would have been approved if the NPS had the regulatory authority to again request the additional information necessary to fully evaluate a proposed operation. In practice, the NPS will continue to collaborate with prospective operators in Big Cypress National Preserve early in their planning process and as much as possible during initial review, in order to reduce such theoretical problems. The NPS is not using the Big Cypress procedure in its service-wide regulations, because it does not want to constrain its ability to have more robust collaboration with operators.

    The Addition Act also differs slightly from the proposed service-wide rule in that under the Addition Act the 90-day time period for final action begins upon submission of the permit application to the NPS. For the service-wide rule, the NPS has chosen not to adopt submission of the permit application as the triggering event for final action. Rather, the NPS service-wide rule provides that final action must occur within 30 days after the completion of NPS legal compliance responsibilities (such as NEPA, ESA, and NHPA). For proposals within Big Cypress National Preserve, the NPS will strive to meet the applicable timeframe for final action while otherwise complying with applicable laws including NEPA, ESA, and NHPA.

    The NPS has decided it is more appropriate to include these Big Cypress-specific provisions in this regulation instead of in a new park-specific regulation in part 7, because other provisions of this regulation still apply to oil and gas operations in Big Cypress National Preserve. It will be easier for operators to have all applicable provisions in one rule.

    Operating Standards

    Section 9.110 of this rule clarifies the purpose and function of operating standards. The NPS will maintain the current practice under the 1978 Regulations of setting non-prescriptive operating standards to allow operators the flexibility to design their proposed operation using the latest technological innovations that will best protect park system resources, values, and visitor health and safety.

    Section 9.110(a) of this rule clarifies the practice under the 1978 Regulations that applicable operating standards will be incorporated into an approved operations permit so that the operating standards become enforceable terms and conditions of an approved permit.

    Section 9.110(c) of this rule requires all operators to use technologically feasible, least damaging methods to protect NPS resources and values while assuring human health and safety. In the 1978 Regulations, “technologically feasible, least damaging methods” was part of an overall plan of operations approval standard at 36 CFR 9.37(a)(1).

    Reorganization of Operating Standards

    This rule organizes all operating standards into one section and separates the standards into the following categories: §§ 9.111 through 9.116, operating standards that apply to all operations; § 9.117, additional operating standards that apply to geophysical operations; and § 9.118, additional operating standards that apply to drilling, stimulation, and production operations. Organizing the standards in this manner will allow the NPS and operators to readily understand which operating standards are applicable to the particular type of operation proposed.

    Clarification of and Additions to Former Operating Standards

    Some of the operating standards in the 1978 Regulations were minimally described. Additional operating standards were included in the NPS's 2006 9B Operator's Handbook. This rule now contains all operating standards. To the extent this rule incorporates operating standards from the 1978 Regulations without substantive change, those standards are not further discussed below. The operating standards summarized below are either clarifications to the 1978 Regulations, are new standards that the NPS has added, or are revisions to those included in the proposed rule.

    Operating Standards That Apply to All Operations

    This rule modifies language from § 9.112(a) of the proposed rule to remove the phrase “ground disturbing” because no activities incident to oil and gas operations, whether or not they disturb the ground, may be conducted within 500 feet of any structure or facility used by the NPS for interpretation, public recreation, or administration. The NPS moved § 9.112(a) of the proposed rule to § 9.111(a) of this rule. Section 9.111(a) of this rule modifies language from § 9.112(a) of the proposed rule to clarify that Superintendents may increase or decrease the 500 foot setback consistent with the need to protect federally owned or administered lands, water, or resources of System units, visitor uses or experiences, or visitor or employee health and safety. The NPS also added the phrase “within 500 feet of the mean high tide line” to § 9.111(a) of this rule to provide notice to operators that the general 500 foot setback also applies to tidal areas.

    This rule includes a new standard at § 9.111(b) to require that either existing or newly created surface disturbance is kept to the minimum necessary for safe conduct of operations.

    This rule modifies language from § 9.111(d) of the proposed rule to clarify how waste must be handled.

    This rule modifies language from § 9.111(g) of the proposed rule to clarify that hydrocarbon and air pollutant releases are to be minimized along with minimizing the flaring of gas.

    This rule adds new standards at §§ 9.114 and 9.115 that limit the visual and sound impacts of oil and gas operations on park visitor use and experience.

    This rule adds a new standard at § 9.111(h) that requires operators to control the introduction of exotic species.

    This rule adds new standards at § 9.112 that address hydrologic connectivity.

    Reclamation Operating Standards

    Section 9.116 of this rule describes the standards for reclamation.

    Operating Standards That Apply to Geophysical Operations

    Section 9.117 of this rule describes standards for geophysical surveying methods including source points, use of equipment and methods, and shot holes.

    Operating Standards That Apply to Drilling, Stimulation, and Production Operations

    Section 9.118(a)(1) of this rule requires all operators to use containerized mud systems during drilling, stimulation, and production operations.

    Section 9.118(a)(2) of this rule prohibits the establishment of new earthen pits for any use. Use of existing earthen pits may continue; however, the Superintendent may require that the pits be lined or removed depending on site-specific conditions.

    Section 9.118(b) of this rule establishes standards for well stimulation, including standards that address hydraulic fracturing operations, such as ensuring the mechanical integrity of the wellbore, water use and disposal, and management of flowback fluids.

    NPS's approach is to review an operator's submissions to determine if they meet the overall operating standard of using the most “technologically feasible, least damaging methods” that protect park resources and values, and any other applicable operating standards. If not, the NPS will add terms and conditions in the permits to address specific deficiencies. In light of our previous experience under the 1978 Regulations addressing downhole operations, we expect that application of these requirements will result in little or no change to well stimulation activities proposed by an operator and approved by the state. We also expect that in most cases the information the NPS needs to review will be the same information already submitted to the state for its approval. Guidance on specific means to meet NPS operating standards is found in NPS's 2006 9B Operator's Handbook, which is distributed to every operator and available electronically.

    General Terms and Conditions

    This rule contains a new “General Terms and Conditions” section listing terms and conditions that apply to all operations. This section consolidates the following sections from the 1978 Regulations: §§ 9.35, 9.36(a)(15), 9.37(f), 9.41(g), 9.42, 9.46, 9.47(b), and 9.51(a) and (b). Described below are either clarifications to the 1978 Regulations, new terms and conditions that the NPS has added, or revisions to those included in the proposed rule.

    The water use section at § 9.35 of the 1978 Regulations did not address all state water law systems under which water rights are established or decided. Section 9.120(b) of this rule provides that an operator may not use any surface water or groundwater owned or administered by the United States that has been diverted or withdrawn from a source located within the boundaries of a System unit unless the use has been approved in accordance with NPS policy.

    Because monitoring and reporting requirements are necessary for all operations, the NPS includes monitoring and reporting requirements under General Terms and Conditions. Section 9.121(a) authorizes the NPS to access an operator's area of operations at any time to monitor operations and to ensure compliance with the regulations. To the extent such operations are located on non-federally administered lands and waters, the NPS will provide the operator reasonable notice in advance of such access, other than in emergencies. Section 9.121(b) of this rule allows the NPS to require that operators hire third party monitors when they are necessary to ensure compliance and protection of park resources and values. The NPS had previously required in some operations plans the use of third party monitors to help ensure that it received unbiased, reliable, and timely monitoring information demonstrating an operator's compliance with its plan of operations. See, 2006 9B Operator's Handbook, Chapter 3 (Geophysical Exploration). Over the past fifteen years, operators at Big Thicket National Preserve, Padre Island National Seashore, Jean Lafitte National Historic Site, and Big Cypress National Preserve were required to use third party monitors for certain geographically extensive and logistically complex 3D seismic operations. The use of third party monitors allowed the NPS to augment monitoring by park staff while ensuring plan compliance and enabling operators to simultaneously engage in multiple operations at different locations. This provision also more closely conforms the NPS's requirements with practices of other federal agencies (BLM, the U.S. Forest Service, and the U.S. Fish and Wildlife Service have each in some instances required third party monitoring for oil and gas operations on lands they administer), as well as state oil and gas regulatory agencies. This section describes criteria that the NPS will consider when making the decision to require a third party monitor. The third party monitor will report directly to the NPS to ensure oversight and accountability.

    The NPS has modified language from § 9.121(c) and (d) of the proposed rule to clarify the timing for reporting of incidents occurring on an operations site and for reporting requirements for cultural or scientific resources encountered on an operations site, respectively.

    Section 9.121(e) broadens the reporting requirement from the 1978 Regulations to require that the operator submit any information requested by the Superintendent that is necessary to verify compliance with either a provision of the operations permit or this subpart. To ease this burden, the rule allows an operator to submit the same reports it submits to a state or other federal agency as long as those reports meet the information requirements of this subsection. This is similar to § 9.42 of the 1978 Regulations.

    Section 9.122 requires reporting related to the hydraulic fracturing process, including the disclosure of chemicals used in the hydraulic fracturing process and the volume of recovered fluids. In § 9.122, NPS has used BLM's post-hydraulic fracturing reporting requirements, but did not include two provisions (requirement for affidavit of compliance and general supporting documentation), as those requirements are addressed in other sections of this rule.

    Access to Oil and Gas Rights

    This rule contains a new section that addresses access across federally owned or administered lands or waters to reach the boundary of an operator's oil and gas right. Section 9.50 of the 1978 Regulations authorized the NPS to charge a fee for commercial vehicles using NPS administered roads. Despite this longstanding authority, we are not aware that such fees had actually been collected. This new section expands upon former § 9.50.

    Section 9.131(a)(1) of this rule allows the NPS to charge an operator a fee based on fair market value for access (e.g., use of existing roads as well as constructing new roads, or running gathering lines) across federal lands outside the scope of an operator's oil and gas right. The NPS will set fees consistent with NPS part 14 rights-of-way guidance (NPS Reference Manual 53, Special Park Uses, Appendix 5, Exhibit 2). Section 9.131(b) provides that NPS will not charge a fee for access that is within the scope of the operator's oil and gas right, or access that is otherwise provided for by law. Section 9.132 addresses access across federally owned or administered lands or waters necessary to respond to an emergency.

    Financial Assurance

    The NPS renamed this section of the rule “Financial Assurance” (titled “Performance Bond” under the 1978 Regulations) to better reflect the variety of instruments that operators can provide to the NPS to meet their obligation under this section.

    Section 9.48(a) of the 1978 Regulations required an operator to file a performance bond, or other acceptable method of financial assurance, for all types of non-federal oil and gas operations and all phases of the operations. The performance bond requirement ensured that, in the event an operator became insolvent or defaulted on its obligations under an approved plan of operations, the defaulted funds would be paid to the United States.

    Section 9.48(d)(3) of the 1978 Regulations limited the performance bond amount to $200,000 per operator, per System unit. Therefore, if one operator had multiple wells in a System unit, the NPS could only require up to $200,000 in financial assurance from that operator. The $200,000 limit was established in 1979 and in most cases did not reflect the potential costs of reclamation. In the event of a default by the operator, reclamation costs exceeding the limit could have required the NPS to bring a civil action in federal court to recover the additional costs.

    Section 9.140 of this rule requires the operator to file with the NPS financial assurance in a form acceptable to the Regional Director. The current 9B Operator's Handbook identifies acceptable forms of financial assurance, including corporate surety bonds, U.S. Treasury bonds, irrevocable letters of credit, and cash. The NPS will update the Handbook as additional guidance is provided.

    Section 9.141 of this rule makes the financial assurance amount equal to the estimated cost of reclamation. This substantially reduces the risk of the American taxpayers being left to assume reclamation costs in the event of operator default.

    Section 9.142 of this rule outlines the process for adjusting the amount of financial assurance due to changed conditions. Section 9.143 describes the conditions under which the NPS will release the financial assurance. Section 9.144 describes those circumstances that will result in forfeiture.

    Section 9.144(b)(3) of this rule allows the NPS to suspend review of an operator's pending permit applications, if that operator has forfeited its financial assurance in any System unit. Suspension would last until the Superintendent determines that all violations have been resolved.

    Modification to an Operation

    Section 9.150 of this rule renames the “Supplementation or Revision of Plan of Operations” section as “Modification to an Operation” to characterize any change to an approved operations permit. This section clarifies that either the NPS or the operator can request modification of the operator's permit, and describes the modification procedures. Approval of any modification to an approved permit must meet the relevant criteria applicable to Temporary Access Permits (§§ 9.60 through 9.63) or Operations Permit: Application Review Process (§§ 9.100 through 9.105).

    Section 9.150(c) of this rule prohibits an operator from implementing a modification until the NPS has provided written approval of the modification. No comments were received on this provision of the proposed rule.

    Change of Operator

    This section renames § 9.34 “Transfer of Interest” of the 1978 Regulations to “Change of Operator.”

    Section 9.34(a) of the 1978 Regulations provided that a previous operator remained liable on its financial assurance until it informed the NPS that the rights had been transferred to another party. A new operator could not operate until it posted financial assurance and ratified the existing plan of operations. Once the previous operator provided notice to the Superintendent, the previous operator could request release of its financial assurance before the new operator posted its own financial assurance with the NPS. Therefore, if the new operator abandoned operations before posting financial assurance with the NPS, the burden of reclaiming the site would fall on the taxpayers.

    Section 9.160(a) requires the previous operator to notify the NPS of a transfer of operations and provide contact information. Section 9.160(b) holds the previous operator responsible to the NPS until the new operator adopts and agrees to the terms and conditions of the previous operator's permit; provides financial assurance; provides proof of liability insurance; and provides an affidavit demonstrating compliance with applicable federal, state, and local laws. Section 9.160(c) addresses a transfer of operation where the previous operator did not have an operations permit.

    Section 9.161(a) of this rule requires the new operator who acquires an operation that was under an operations permit to adopt the previous permit. Section 9.161(b) addresses the transfer of an operation where an exemption has been granted under § 9.72 of this rule. Section 9.161(c) addresses transfer of an operation where the previous operator did not have an operations permit. No comments were received on this provision of the proposed rule.

    Well Plugging

    This section replaces, in part, § 9.39(a)(2)(iv) of the 1978 Regulations and creates a new section entitled “Well Plugging.”

    Section 9.39(a)(2)(iv) of the 1978 Regulations required operators to plug and cap all non-productive wells and to fill dump holes, ditches, reserve pits, and other excavations. Section 9.116(d)(1) (Operating Standards) retains the requirement that an operator conduct reclamation by plugging all wells. However, the 1978 Regulations did not directly address whether NPS could require an operator to plug wells that have been in an extended shut-in status. As a result, inactive wells have remained unplugged for years and, in some instances, decades. Such unplugged wells have caused adverse impacts to park resources and presented risks to park visitors.

    Section 9.170(a) of this rule requires operators to plug a well within 60 days after cessation of drilling, or 1 year after completion of production operations, or upon the expiration of NPS approved shut-in status. Under § 9.171, an operator may obtain an extension to the plugging requirement if the operator demonstrates mechanical integrity, a plan for future use of the well, and that the operator will follow maintenance requirements.

    These procedures are consistent with the way many states approach the issue of inactive wells, and recognize that certain economical or logistical reasons exist to justify maintenance of wells in shut-in status for extended periods of time. Rather than a “produce or plug” policy, the rule is intended to ensure that shut-in wells are maintained in an environmentally sound and safe manner.

    Prohibitions and Penalties

    Section 9.51(c) of the 1978 Regulations provided two different compliance procedures for suspending an operation, depending on whether or not the violation posed an “immediate threat of significant injury to federally owned lands or waters.”

    Section 9.181 of this rule authorizes the Superintendent to suspend an operation regardless of whether an operator's violation poses an “immediate threat of significant injury.” Whether the threat is immediate or not, any violation that results in a threat of damage to park resources and values should be addressed by the Superintendent.

    Prohibited Acts

    Section 9.180 lists prohibited acts to provide operators with notice of the acts that constitute a violation of these regulations. The prohibited acts in this rule include violations of the terms and conditions of an Operations Permit, as well as violations of other provisions of these regulations.

    Incorporation of 36 CFR 1.3 Penalties

    Section 9.51 of the 1978 Regulation authorized the NPS to suspend an operation for non-compliance, and if the violation or damage was not corrected, revoke an operator's plan of operations. The process to suspend an operation required coordination between park staff and other NPS offices, during which time damage to park system resources and values may continue. Additionally, suspension and revocation were not necessarily the most appropriate means to correct minor acts of non-compliance (minor leaks and spills, improper road maintenance, or not maintaining proper site security). Therefore, we are incorporating our existing penalties provision at 36 CFR 1.3, which allows NPS law enforcement rangers and special agents to issue citations, which result in fines for minor acts of non-compliance, while treating serious acts as ones that may be subject to a fine or imprisonment, or both.

    No New Authorization Unless Operator Is in Compliance

    Under § 9.182 of this rule, NPS will not review any new operating permit applications or continue review of any pending permit applications in any System unit until an operator comes into compliance with this subpart or the terms or conditions of an operations permit. No comments were received on this provision of the proposed rule.

    Reconsideration and Appeals

    Most of the procedures outlined in § 9.49 of the 1978 Regulations remain the same. The operator continues to have the right to appeal a decision made by either the Superintendent or the Regional Director. The operator now must exhaust these remedies before the NPS decision is a final agency action that is subject to review under the Administrative Procedure Act (APA).

    This rule renames the first step of the process as a request for “reconsideration,” rather than an appeal, since it is directed to the same official who issued the original decision. The rule also includes other clarifications of the existing language, makes editorial corrections, and reorganizes the sequence of some of the paragraphs.

    Consistent with the APA, § 9.193(a) of this rule provides that during the reconsideration and appeals process the NPS's decision will be suspended and the decision will not become effective until the completion of the appeals process. Section 9.193(b) addresses suspension of operations due to emergencies that pose an immediate threat of injury to federally owned or administered lands or waters.

    Under section 9.194, if the Superintendent has the authority to make the original decision, requests for reconsideration and appeals are to be filed in the manner provided under §§ 9.190 through 9.193, except that requests for reconsideration are directed to the Superintendent, and appeals are directed to the Regional Director.

    No comments were received on these provisions of the proposed rule.

    Public Participation

    The rule renames the “Public Inspection of Documents” section to “Public Participation.” Section 9.52(a) of the 1978 Regulation required a Superintendent to publish a notice in a local newspaper of a request to conduct non-federal oil and gas operations whether or not a complete plan of operations was ever submitted by an operator. Section 9.52(b) of the 1978 Regulation further required a Superintendent to publish a notice in the Federal Register of receipt of a plan of operations. This rule eliminates the public notice steps currently required under § 9.52(a) and (b) of the 1978 Regulation and replaces them with a more efficient public involvement and review process.

    The rule retains the ability for an operator to protect proprietary or confidential information from disclosure to the public. Operators need to clearly mark those documents that they wish to protect from public disclosure as “proprietary or confidential information” such that these documents are readily identifiable by the NPS decision maker. The NPS has also included provisions that allow an operator engaged in hydraulic fracturing operations to withhold chemical formulations that are deemed to be a trade secret. The NPS has updated § 9.200(c) from the proposed rule to include reference to §§ 9.88 and 9.89 to allow operators to maintain proprietary information for stimulation techniques. The NPS has also removed language from § 9.200(g) of the proposed rule regarding record retention for operations on Indian and Federal lands to make this provision conform to the scope of this regulation.

    Information Collection

    See Paperwork Reduction Act discussion below.

    Summary of and Responses to Public Comments

    A summary of substantive comments and NPS responses is provided below, followed by a table that sets out the changes we have made in the final rule based on the analysis of the comments and other considerations.

    NPS Authority To Regulate Non-Federal Oil and Gas Rights

    1. Comment: Commenters noted that additional regulation of private oil and gas rights on NPS land could infringe on private property rights or could represent a taking.

    NPS Response: Based on its long experience implementing the 1978 Regulation, NPS disagrees with the commenters' conclusion that application of this rule is likely to result in an actual taking of private property. This is discussed in further detail in the takings analysis above.

    2. Comment: Commenters stated that the NPS does not have authority to regulate oil and gas operations taking place on lands outside of a System unit boundary or on non-federally owned lands within the boundaries of System units.

    NPS Response: This rule states that the regulations only apply to operations that are conducted within the boundaries of System units. See § 9.30(a) and (b), the definition of “Operations” at § 9.40, and § 9.70.

    Although the NPS does not generally assert regulatory authority over activities on non-federal lands, see 36 CFR 1.2(b), the NPS has long regulated three types of activities on non-federal lands that have a high potential to harm park resources and values—the operation of solid waste disposal sites, 1872 Mining Law claims and operations, and non-federal oil and gas operations. As stated above, courts have consistently recognized NPS's authority to regulate non-federal interests within units of the National Park System. Courts have also recognized that, on split estate lands where the federal government owns the surface estate and the mineral estate is privately held, the subsurface is within the boundary of a National Park System unit.

    This rule applies to all operations conducted within the boundary of a System unit, with the exception of System units in the State of Alaska, where this rule does not apply. As explained in the preamble to the proposed rule: “ [NPS's] experience over the past three decades has demonstrated that [operations conducted on non-federal lands] have the potential to have adverse effects on NPS resources, values, and visitor health and safety. Through site inspections, the NPS has found at least 10 instances of sites [on non-federal lands] with oil spills or leaks resulting in contamination of soils and water.” (80 FR 65575). That an operation is located on non-federal lands within a System unit does not mean that the operation has no potential to affect NPS administered resources and values.

    3. Comment: One commenter suggested the NPS require the mineral owner and the operator to assume joint and several liability arising from oil and gas operations.

    NPS Response: The NPS included joint and several liability as an alternative in the DEIS because it could encourage owners to emphasize to their lessees requirements for strict compliance with applicable laws and regulations, including the responsibility to plug and reclaim their operations. Because we have included in this rule a bonding requirement that covers the full estimated cost of reclamation, we have concluded that the joint and several liability provision is unnecessary.

    State Oil and Gas Regulation

    4. Comment: One commenter opposed the rule, stating that existing state oil and gas laws and regulations already provide sufficient oversight.

    NPS Response: In reviewing the state oil and gas regulations for the 8 states where non-federal oil and gas operations are currently undertaken in System units, the NPS found that the focus of these state regulations is primarily limited to the protection of mineral rights, maximization of production of oil and gas resources, protection of water resources, and managing waste by-products of oil and gas operations. While these states have general provisions that address protection of the environment and public health, they do not adequately protect NPS administered resources to the standards developed under this rule.

    Congress mandated that System units be managed “for the benefit and inspiration of all the people of the United States.” In the context of these regulations, the NPS fulfills its mandate by applying a consistent set of Servicewide standards to govern oil and gas activities in all System units. These regulations are designed to protect the unique and nationally significant natural and cultural resources that constitute each System unit, including: Geological resources, air quality, water quality and quantity, vegetation, fish and wildlife and their habitat, floodplains and wetlands, archeological resources, paleontological resources, soundscapes, night skies, viewsheds, cultural landscapes, and ethnographic resources. These regulations are also designed to protect visitor health and safety.

    5. Comment: One commenter expressed concern that the rule duplicates requirements in state regulations.

    NPS Response: To fulfill the NPS's mission to protect park resources and values, the NPS must have sufficient information from an applicant to adequately evaluate an operator's proposed operations. When applying for an operations permit, § 9.81(b) allows an operator to submit the same reports it submits to a state or other federal agency as long as those reports meet the information requirements of this subsection. This is similar to § 9.42 of the 1978 Regulations. The NPS will review this information and determine if it meets NPS information requirements and operating standards. This reduces the potential burden on applicants who have already applied for a state permit.

    Big Cypress National Preserve

    6. Comment: Commenters requested the NPS clarify how these regulations will apply to oil and gas activities in Big Cypress National Preserve in light of existing statutory provisions included in the Big Cypress enabling legislation.

    NPS Response: The relationship between this rule and Appendix 6 (to the Agreement Among the United States of America, Collier Enterprises, Collier Development Corporation, and Barron Collier Company (May 12, 1988)) is explained in the Summary of Final Rule section above. The Addition Act states that such “agreements shall be superseded by the rules and regulations promulgated by the Secretary, when applicable . . .” 16 U.S.C. 698m-4(e). This rule applies to operations in both the original preserve and the Addition Area.

    National Environmental Policy Act

    7. Comment: One commenter suggested that operators should be able to submit Environmental Assessments for agency use, and that the regulations should be updated to allow an operations permit application to function as a draft Environmental Assessment.

    NPS Response: The NPS will comply with Council on Environmental Quality and DOI NEPA regulations, and NPS NEPA guidance documents. This rule does not alter those requirements. An operations permit application generally does not contain all of the required elements of an Environmental Assessment. The NPS will continue its existing practice of allowing applicants to prepare the draft of the appropriate NEPA document. NPS will update its guidance manual to reflect this practice.

    Purpose and Scope

    8. Comment: One commenter suggested that 9B Rules be expanded to govern other non-federal mineral rights such as sand, gravel, and coal.

    NPS Response: Regulating the extraction of sand, gravel, and coal is beyond the scope of this rulemaking, which was to revise the former rules applicable to the exercise of non-federal oil and gas rights. Coal extraction is generally prohibited within System units under the Surface Mining Control and Reclamation Act. There are no current coal operations in any System units. The NPS generally is able to regulate non-federal sand and gravel extraction through the use of special use permits and applicable provisions of regulations set forth at 36 CFR part 6.

    9. Comment: Commenters suggested that the NPS consider buying out nonfederal mineral rights.

    NPS Response: The NPS has determined that acquisition of all mineral rights in System units is economically inefficient, financially infeasible, and unnecessary to protect park system resources and values.

    NPS will continue to determine, on a case by case basis and in collaboration with prospective operators, whether a proposed operation meets the operating standards and approval standards of these regulations. If the proposed operation does not meet 9B approval standards, the NPS has the authority to seek to acquire the mineral right from the operator.

    10. Comment: One commenter stated that the NPS has not demonstrated that there are systemic problems with the 1978 Regulations, or that existing regulatory schemes (including the 1978 Regulations) are inadequate.

    NPS Response: As described above in the “Summary of Potential Impacts from Oil and Gas Operations on NPS Resources and Values,” the NPS concluded the problems that necessitated this rule were systemic and that existing laws or regulatory schemes were inadequate to address protection of the nationally significant resources administered by the NPS.

    Demonstration of Right To Conduct Operations

    11. Comment: One commenter suggested that the rule clarify that an operator does not need to demonstrate a right to conduct oil and gas operations beneath the operator's access route, in cases where an operator needs to traverse some other area of the unit to access its operations area.

    NPS Response: As addressed by § 9.130—Access to Oil and Gas Rights, the NPS may have the discretion to grant access rights outside the boundary of an operator's oil and gas right when the operator does not hold a statutory or deeded right of access. In such cases, the operator does not need to demonstrate a right to conduct operations.

    12. Comment: One commenter suggested that the rule should better define the type of information that operators may submit to demonstrate the right to conduct operations. This commenter proposed other types of documents that could demonstrate a right to operate.

    NPS Response: The definition of “right to operate” in § 9.40 of the rule lists specific examples of documents—deed, lease, memorandum of lease, designation of operator, assignment of right—that would meet the requirement. The NPS has included the phrase “other documentation” in the rule because there may be documentation that is not listed that would demonstrate a legal right to conduct the operations in a System unit. This provides greater flexibility to the applicant. What the NPS deems an acceptable demonstration of a legal right to conduct operations is evaluated on a case by case basis.

    13. Comment: One commenter stated that the NPS should implement a conditional approval process that would allow the operator to access a mineral right over NPS land, subject to later demonstrating that the operator has acquired access to that mineral right.

    NPS Response: The NPS has long required the operator to demonstrate a right to operate prior to formally analyzing a proposal. This requirement ensures the NPS does not expend taxpayer funds on proposals that are ultimately not viable because an operator lacks sufficient rights. A parallel or contingent approval process would further complicate the regulations, and any time and cost savings for certain viable proposals would be outweighed by the unnecessary time and cost spent reviewing proposals that are not viable. However, an operator who has acquired only a portion of the rights it expects to eventually hold may, under § 9.82(b), submit its application in phases covering only those rights it holds at the time of the application.

    14. Comment: One commenter suggested that the permit review and approval process run parallel to the NPS's review of the operator's right to operate documentation.

    NPS Response: As explained in the previous response, NPS requires complete demonstration of a right to operate prior to formally analyzing a proposal, which includes the permit review and approval process. This provision is meant to ensure that the agency does not expend taxpayer money unnecessarily on proposals that may not be possible because of the lack of complete acquisition of the right to operate. For example, an operator proposing a 3D seismic survey covering many acres within a park may not ultimately be able to acquire all rights within the proposed operations area.

    Definitions

    15. Comment: One commenter suggested that the definition of “Waste” should not include items such as fuel drums, pipes, oil, or contaminated soil that have any residue of oil, which contains benzene, toluene, xylene, and other hazardous chemicals. This commenter said these items should instead be included under the definition of “Contaminating Substances.”

    NPS Response: The items described by the commenter fall under the definitions of both “waste” and “contaminating substances.” Any “waste” that contains a “contaminating substance” is required not only to be properly discarded from an operations site, but also to be handled in a manner that ensures proper containment and clean-up of the contaminating substance.

    16. Comment: One commenter suggested that the definition of “usable water” should not just refer to whether the water is usable for humans but also should include whether the water is usable for wildlife, ecosystems, and people's wells.

    NPS Response: The definition of the term “usable water” is the same as the definition of the term “underground source of drinking water” that is used by the Environmental Protection Agency (EPA) in the Underground Injection Control Program. A similar definition is used by several states with NPS units that have non-federal oil and gas operations (Texas, New Mexico, Florida). The EPA and these states use these definitions to regulate specific downhole activities of oil and gas operations and ensure protection of zones of groundwater. Water that is used by wildlife, ecosystems, and people's wells is addressed by other standards and requirements of the rule. See the hydrologic operating standards at § 9.112 and the water use requirements at § 9.120. The definition of “usable water” does not need to be changed.

    Previously Exempt Operations

    17. Comment: One commenter expressed concern that elimination of the access and grandfathered exemptions would negatively impact individuals who rely on mineral resources located within the National Park System.

    NPS Response: The NPS has analyzed the effects of this rulemaking on the regulated public and found that the updates to the 1978 Regulations will not have a significant economic impact on a substantial number of 9B operators. The cost-benefit and regulatory flexibility analysis, Cost-Benefit and Regulatory Flexibility Analyses: U.S. Department of the Interior, National Park Service for Proposed Revisions to 36 CFR part 9, subpart B, can be viewed at https://parkplanning.nps.gov/CBA_9B.

    18. Comment: One commenter stated that the rule should phase out previously exempt “grandfathered” operations over a period of time, rather than requiring these operations to comply with the rule immediately.

    NPS Response: While not all previously exempt operations present an immediate threat to park resources and values, a significant number of operations exhibit operating conditions that are not consistent with current NPS standards, and the NPS has concluded that these conditions must be addressed as soon as possible. These operations qualified for the regulatory exemption under the 1978 Regulations because they were in operation as of January 8, 1979, and the operators held a valid state or federal permit at that time. More than 37 years have passed during which these operations have not been subject to NPS regulation. The NPS is promulgating this rule to bring these operations up to NPS operating standards, including NPS reclamation and financial assurance standards, in order to protect park resources and values.

    Accessing Oil and Gas Rights From a Surface Location Outside the Park Boundary

    19. Comment: Some commenters opposed the provision in the rule that would authorize the NPS to exempt directional drilling operations outside the park boundary from the operations permit requirement. Commenters also sought clarification regarding what aspects of a directional drilling operation are covered by these regulations.

    NPS Response: As stated in the preamble to the proposed rule: “The availability of the exemption [for directional drilling operations] provides an incentive for operators to locate surface facilities outside a System unit. Location of operations outside a System unit generally avoids direct impacts to NPS resources and values.” (80 FR 65578). Regulating surface activities outside the boundary of the park would eliminate this incentive. Such surface activities are not themselves located on NPS-administered land. While there might be some benefits to the neighboring or nearby NPS-administered property, based on our years of experience, on the whole any such benefits would be outweighed by the loss of the incentive to place such operations outside the boundary, resulting in more direct impacts to park resources and values. Although law review articles and the Office of the Solicitor have indicated that the Organic Act could be interpreted to authorize NPS to regulate activities occurring outside park boundaries, to date NPS has not promulgated any such regulations.

    Regulatory authority over directional drilling operations begins at the subsurface point where the proposed operation (borehole) crosses the park boundary and enters federally owned or administered lands or water, and applies to all infrastructure and activities within the System unit. Section 9.70 of this rule states that “downhole activities inside an NPS unit are subject to these regulations.”

    The NPS does not require financial assurance from directional drilling operators because, although the operation is drilling to a bottom hole location within the System unit, the surface operation is located outside the park boundary on lands not administered by the NPS. Each state has requirements for plugging, abandonment, surface reclamation, and financial assurance from the operator.

    The NPS examines each exemption application to ensure that the downhole portion of the operation that is inside the park boundary meets the NPS approval standard. If the NPS finds, through monitoring of the operation, that the operation inside the park is causing damage to park administered resources or values, the NPS may require the operator to rectify the violation. The NPS has additional guidance describing the process for applying for such an exemption in the 9B Operator's Handbook.

    20. Comment: One commenter questioned whether the NPS has the authority to apply the General Terms and Conditions and Prohibitions and Penalties to directional drilling operations that cross beneath privately owned surface estate inside the System unit boundary.

    NPS Response: The General Terms and Conditions and the Prohibitions and Penalties provisions in the rule apply to operations located inside the boundaries of the System unit. The authority to apply these provisions to operations inside the unit on non-federal lands is summarized in the preamble to the proposed rule at 80 FR 65573.

    21. Comment: One commenter suggested that the rule require operators to comply with mitigation measures required by other natural resource agencies for directional drilling operations where the surface location is located outside the boundaries of System units.

    NPS Response: NPS has concluded that it does not need to separately enforce the requirements of other natural resource agencies or determine whether operators are in compliance with those authorities. NPS does generally coordinate and share information with other federal and state agencies, but it does not need to provide for duplicative enforcement of mitigation measures required by other authorities. Nothing in this rule relieves the permittee from compliance with other applicable Federal, State, and local laws and regulations.

    22. Comment: One commenter suggested that the rule require mandatory rather than voluntary mitigation requirements for directional drilling operations located outside the boundary of the System unit.

    NPS Response: This rule imposes mandatory, rather than voluntary, mitigation requirements for directional drilling operations; these operating standards are mandatory for operations conducted inside the park boundary. To maintain the incentive for operators to locate surface facilities outside the System unit, the mandatory operating standards apply only to operations located within the boundary of the System unit. The NPS will not apply mandatory mitigation measures to operations outside System units.

    Operations Permit Requirement

    23. Comment: One commenter suggested that the rule should not require oil and gas operations to carry out mitigation and reclamation that are not required for other commercial activities.

    NPS Response: Exploration and development of non-federal oil and gas resources are high-impact industrial activities that can generally be expected to have some adverse effects on park resources. The mitigation and reclamation requirements contained in the final rule are similar to those required for other high-impact industrial activities occurring within System units (e.g., mining activities under the part 9A regulations), but do differ from those that may apply to other types of commercial activities (e.g., park concessions).

    24. Comment: One commenter requested that well permitting standards should require a baseline assessment of environmental conditions, including groundwater testing, before construction and operations commence.

    NPS Response: The proposed rule was intended to allow NPS to require the applicant to undertake specified testing and submit baseline data for evaluation. Section 9.85(a) of this rule has been updated from the proposed rule to clarify that the NPS may require any information it needs about natural and cultural resources, including groundwater resources that may reasonably be impacted by surface operations. This information may include data from baseline testing of soils and surface waters within the area of operations.

    25. Comment: One commenter suggested the examples listed for natural features should also include wetlands, seepage areas, springs, and shallow water aquifers.

    NPS Response: The NPS has included these as additional examples of natural features in the final rule.

    26. Comment: One commenter noted that the phrase “spill control environmental preparedness plan” was not referred to consistently throughout the proposed regulation.

    NPS Response: NPS has made nonsubstantive changes to address this in the final rule.

    27. Comment: One commenter suggested that maps of surface and subsurface operations be recorded in land records so that future oil and gas operations do not damage existing or closed wells.

    NPS Response: Operators proposing new operations within System units must submit a state drilling permit as part of an operations permit application. As part of the state permitting process, the state conducts an evaluation of the proposed well path in relation to existing (including plugged and abandoned) wells. Records of surface and subsurface operations, including maps and permit applications, are kept by the state oil and gas permitting agency and are used by the state to evaluate subsequent applications.

    Operations Permit Approval

    28. Comment: Commenters suggested that the permit approval standards could be interpreted to give the NPS the authority to determine whether an operator has complied with state and local law.

    NPS Response: NPS did not intend to make such determinations. As a result, we have clarified this rule so that it simply requires at § 9.120(c) that an operator provide an affidavit to the NPS stating that it is in compliance with all applicable Federal, state, and local laws. The Regional Director will review affidavits submitted by an operator prior to approval of an operations permit.

    29. Comment: The NPS sought comments on whether the 180-day timeline for final action is reasonable and on any resulting incremental impacts on operators. Commenters expressed concern that the rule gives the NPS too much time to review a permit application, and that, without a hard deadline for taking a final action, the NPS could take more time in order to comply with applicable laws. One commenter suggested that the NPS review all operations permit applications within 90 days, with an automatic 60-day extension if needed, as well as additional time if the applicant agrees. The commenter modeled that recommendation on the time frame for reviewing biological opinions under the Endangered Species Act, which allows for a total of 185 days for review. One commenter recommended that the NPS add a provision that would allow for automatic approval of an operations permit if the NPS did not meet a deadline.

    NPS Response: In response to comments and upon further review, the NPS has decided to change the timeframe for final action in this rule to “within 30 days of completing all required legal compliance, including compliance with the National Environmental Policy Act . . .” The NPS is making this change because it more accurately reflects the timeframe for the process that the NPS must follow before taking final action on an Operations Permit. Under this rule, the NPS has 30 days to conduct its “initial review” to determine whether an operator's application is complete, request more information from the operator, or inform the operator that more time is necessary and provide written justification for the delay. Once an application is deemed complete, the NPS must complete its legal compliance responsibilities, which include, but are not limited to, compliance with NEPA (for example, preparing an Environmental Assessment and a Finding of No Significant Impact), compliance with the ESA (for example, consulting with the U.S. Fish & Wildlife Service under Section 7), and consultation with Indian tribes. Once the legal compliance is completed, the NPS will take final action within 30 days. The NPS may only take more time if the operator agrees, or if it is necessary for the NPS to comply with unanticipated legal requirements.

    Providing for automatic approval of a permit application if the NPS does not meet a deadline would most likely violate procedural and substantive legal requirements for agency actions.

    30. Comment: One commenter recommended that the rule: (1) State the criteria on which the NPS will deny operation permit applications; (2) state that the NPS shall approve a plan of operations if the plan complies with existing law and applicable operating standards; and (3) include a reference to the enabling statutes for System units and any standards that may be contained therein.

    NPS Response: Operations permits would be approved or denied based on whether the plan meets the approval standards. Therefore this rule only needs one set of standards. Accordingly, the NPS has clarified the language in this rule. The final rule states that the Regional Director will approve an operations permit if the NPS determines that the operations meet the approval standards.

    Section 9.103(a)(1) of this rule has been updated from the proposed rule to reflect that the Regional Director must determine that the operations will not impair park resources and values under the NPS Organic Act, or violate other statutes governing administration of specific units of the National Park System. Enabling statutes are mentioned because NPS is required to comply with requirements imposed by Congress for individual System units.

    Operating Standards

    31. Comment: One commenter requested that the rule exempt certain operations from specific operating standards on a case by case basis.

    NPS Response: To the extent that certain operating standards are not applicable to a particular proposal, those standards would not be applied by the NPS. Accordingly, there is no need for an exemption. The NPS does not find it necessary or advisable to allow for exemptions to otherwise applicable operating standards.

    32. Comment: One commenter suggested the rule clarify the: (1) Applicability of the technologically feasible, least damaging methods standard to site specific conditions regarding environmental and operating methods that are presented by an operator's proposal; and (2) prohibition of “ground disturbing operations” within 500 feet of any structure or facility used by the NPS for interpretation, public recreation, or administration.

    NPS Response: Section 9.110(c) of this rule requires operators, when applying standards to a particular operation, to use technologically feasible, least damaging methods to protect federally owned or administered lands, waters, and resources of System units, visitor uses and experiences, and visitor and employee health and safety. The NPS applies the “technologically feasible, least damaging methods” standard consistently to all aspects of an operation. The NPS included the phrase “to a particular operation” in this section, however, to recognize that the methods used to meet the technologically feasible, least damaging methods standard may vary depending on the individual operation and the environmental conditions of the proposed operation.

    The NPS has removed the phrase “ground disturbing” from this rule because generally no activities incident to oil and gas operations, whether or not they disturb the ground, may be conducted within 500 feet of any structure or facility used by the NPS for interpretation, public recreation, or administration. We have clarified the language in this rule regarding the Superintendent's discretion to increase or decrease this distance consistent with the need to protect federally owned or administered lands, waters, or resources of System units, visitor uses or experiences, or visitor or employee health and safety.

    33. Comment: Commenters suggested that the rule should require the use of best management practices and specific, prescriptive performance standards.

    NPS Response: Executive Order 12866 requires federal agencies, to the extent feasible, to specify performance objectives, rather than specifying the behavior or manner of compliance that regulated entities must adopt. Consistent with this direction, and because this approach has worked well under the 1978 Regulations, this rule maintains the current practice of setting non-prescriptive operating standards that provide operators the flexibility to design their proposed operation using the latest technological innovations that best protect park system resources, values, and visitor health and safety.

    Wildlife and Habitat Protection

    34. Comment: One commenter suggested that the proposed rule address how listed species under the Endangered Species Act (ESA) will be conserved in areas impacted by oil and gas activities, including those using hydraulic fracturing completion methods.

    NPS Response: NPS will consult with FWS and NOAA in accordance with the requirements of Section 7 of the ESA. It is not necessary to repeat or separately incorporate those requirements in this regulation.

    35. Comment: One commenter suggested that the rule identify habitats and implement seasonal closures and other time limitations to protect wildlife and other resources.

    NPS Response: Through interdisciplinary review of each site-specific proposal under the regulation, the NPS identifies potential effects from oil and gas operations on species and habitat. The NPS applies mitigation and avoidance measures, which may include seasonal closures, to protect these resources, and also implements requirements imposed or recommended by FWS and NOAA through the Section 7 process.

    Hydraulic Fracturing Completion Methods

    36. Comment: One commenter expressed concern that the rules for hydraulic fracturing are premature due to ongoing litigation concerning the Bureau of Land Management (BLM) final rule to manage hydraulic fracturing on federal and tribal lands (80 FR 16128).

    NPS Response: The U.S. District Court for the District of Wyoming, in State of Wyoming v. U.S. Department of the Interior, Case No. 2:15-CV-043-SWS, issued an order on June 21, 2016, setting aside the BLM regulations. That order is under appeal in the U.S. Court of Appeals for the Tenth Circuit. That case concerns different statutory authorities that do not apply to the NPS, and is unlikely to set any precedent that is applicable to regulations issued under NPS's authorities, which require NPS to conserve park resources and protect against their impairment, and which do not generally provide for any development of federally owned oil and gas in System units.

    37. Comment: One commenter opposed the rule because it would allow operators to withhold disclosure of fracking chemicals.

    NPS Response: The NPS supports and through this rule requires the disclosure of all chemicals used in any hydraulic fracturing operation. Operators may provide this information to the NPS through FracFocus or another existing database available to the public. Because Federal law provides for the protection of trade secrets, the NPS will allow that information to be withheld if the operator and any other owner of the trade secret submits to the NPS an affidavit containing specific information explaining the reasons for the claim for protection. If the NPS has questions about the validity of the claim for protection, the NPS may require the operator to provide the withheld information to the NPS, and the NPS will then determine whether the data must be disclosed to the public.

    38. Comment: One commenter recommended that the rule be revised to require disclosure of chemicals for all types of well stimulation operations, not just hydraulic fracturing operations.

    NPS Response: NPS has added language in §§ 9.88 and 9.89 of the rule to clarify that operators must disclose all chemicals used for well stimulation activities in a System unit. These disclosures are subject to any lawful trade secret protections that may be demonstrated by an operator.

    39. Comment: One commenter suggested that the rule ban hydraulic fracturing or set specific standards to protect park resources from the potential effects of hydraulic fracturing.

    NPS Response: Congress has directed the NPS to “ensure that management of System units is enhanced by the availability and utilization of a broad program of the highest quality science and information.” 54 U.S.C. 100702. Some studies show that oil and gas operations that include hydraulic fracturing stimulation methods can negatively affect surrounding resources and the environment and can increase the risks of such impacts where appropriate measures are not taken before, during, and after hydraulic fracturing operations (e.g., improper cementing of casing and well integrity issues or surface mismanagement of fracking and flowback fluids). However, studies also show that proper implementation of such measures can substantially reduce—to a level close to that of conventional well operations—the risks to the surrounding environment from hydraulic fracturing operations. Based on the NPS's research and review of studies provided during the public comment period, a blanket ban on hydraulic fracturing completion methods in System units is not necessary at this time. The NPS will continue to review information on hydraulic fracturing completion methods as it becomes available. Proposed well completion programs using hydraulic fracturing are not given blanket approval. The rule includes operating standards and approval standards that are designed to ensure that operators employ the least damaging methods that are technologically feasible, and that such methods do not impair park system resources or values. The NPS will consider hydraulic fracturing operations on a case by case basis and analyze potential impacts on park resources and values according to the approval standards in the rule.

    40. Comment: One commenter expressed concern that operators are not required to retain records long enough to provide adequate protections from hydraulic fracturing operations.

    NPS Response: The rule requires the operator (and any subsequent operators) to maintain records until the later of when the NPS releases the operator's financial assurance or 7 years after completion of hydraulic fracturing operations. The rule does not allow the operator to destroy withheld information before the NPS releases the operator's financial assurance. The NPS does not release the operator's financial assurance until the operator has completed operations, including site reclamation. These timeframes provide for an adequate length of time to require an operator to retain records, and are consistent with other federal agency requirements for record retention, see BLM Oil and Gas; Hydraulic Fracturing on Federal and Indian Lands (80 FR 16128). The NPS has determined that a perpetual retention requirement is not necessary.

    General Terms and Conditions

    41. Comment: One commenter suggested that the rule contain language that would ensure that third party monitors have no conflict of interest.

    NPS Response: Although the third party monitor, if required by the NPS, is hired by the operator, the monitor reports directly to the NPS. Additionally, this rule requires that the monitor demonstrate its qualifications to the NPS. These requirements are sufficient to avoid conflicts of interest.

    42. Comment: One commenter suggested shortening the notification and reporting timeframe for equipment failure (including loss of mechanical integrity), accident, injury to persons or resources, or notification of change of operator.

    NPS Response: The reporting and notification timeframes are appropriate to protect park resources and values. The NPS is declining to shorten the timeframes because we conclude that the proposed timeframes sufficiently address both protection of park resources and the practical needs of the operator for time to prepare appropriate notices to NPS. For loss of mechanical integrity, the rule requires the operator to immediately cease the operation and notify the Superintendent as soon as feasible, but no later than 24 hours after the incident. For accidents and injury to persons and resources, § 9.121(c) and (d) of this rule have been updated from the proposed rule to require notification as soon as feasible, but no later than 24 hours. For change of operator, the rule reduces the seller's notification time from 60 days in the existing regulations to 30 days. This 30-day period is sufficient because the rule holds the previous owner responsible until the Regional Director accepts the new operator's financial assurance.

    Access Fees

    43. Comment: One commenter questioned the legal authority of the NPS to charge access fees to parties who own subsurface oil and gas rights underneath the access route leading to the boundary of the oil and gas right being developed and the legal basis for charging access fees for oil and gas operators in excess of those it charges for other recreational users.

    NPS Response: Federal law states that charges should be assessed against each identifiable recipient for special benefits beyond those received by the general public from Federally-permitted activities. 31 U.S.C. 9701. This statute authorizes the NPS to impose a user charge for the value of the facilities or lands used, or the services provided. The NPS does not charge oil and gas operators for access that is pursuant to a right (e.g., access within the boundary of the oil and gas right that is being developed) or via a deeded or statutory right to use the park-administered lands. NPS is only charging for access that is granted as a privilege “outside the scope of an operator's oil and gas right.” This sort of access is a special benefit that warrants such a user charge. Unless otherwise authorized by law, such funds collected are deposited in the general fund of the Treasury as miscellaneous receipts.

    44. Comment: One commenter suggested the rule should contain criteria that would be used to determine how the NPS would authorize an operator to undertake compensatory mitigation in lieu of paying a fee to access oil and gas rights.

    NPS Response: At this time, the NPS is unable to identify the necessary statutory authority to promulgate a regulatory provision authorizing use of compensatory mitigation in lieu of payment of fees for access. However, if such authority becomes available in the future, the NPS intends to re-evaluate whether it can then authorize the substitution of compensatory mitigation projects.

    Financial Assurance

    45. Comment: One commenter stated that the removal of the bond cap and the mechanism for calculating a bond amount for non-federal lands is not adequately explained in the rule.

    NPS Response: The NPS applies the financial assurance provisions on a case by case basis, including the calculation of the amount of financial assurance necessary to reclaim and restore the federally owned surface estate. To calculate the amount of financial assurance, the NPS considers the following costs: Plugging wells (if applicable), removing all equipment and debris, restoring topographic grade, replacing topsoil, vegetation planting/seeding, exotic species control, and monitoring the success of reclamation. For proposed operations that are located on non-federal surface estate within a System unit, the NPS will consider whether that operation requires any reclamation of adjacent federal lands (e.g., reclamation of temporary access road across NPS administered lands). If a particular operation located on non-federal land has no potential to require reclamation of federal land, the NPS will not require financial assurance from that operator.

    46. Comment: One commenter suggested that the amount of financial assurance required for oil and gas operations should incorporate the amount of financial assurance already required under state law, such that the total amount of financial assurance provided to all government entities be considered when determining if the amount of financial assurance meets the total potential cost of reclamation. The commenter gave an example that if the total cost of reclamation by a third party would be $500,000, and the state is requiring a $200,000 reclamation bond, then the NPS should only require an additional $300,000 financial assurance ($500,000−$200,000) for the project. This would protect taxpayers in the event of a default, and would not require an operator to pledge financial assurance that is in excess of the required amount.

    NPS Response: The NPS is responsible for ensuring that an operator fulfills its reclamation responsibilities after operations cease; protecting park resources and values and ensuring that there is adequate bonding to do so is a high priority. In many states, the required reclamation bond is a blanket bond. In the commenter's example, the state-required $200,000 reclamation bond is likely not for a single well, but would cover multiple wells. For example, the State of Texas allows operators to post a blanket bond of $250,000 to cover one hundred or more wells (Texas Statewide Rule 78). In this scenario, should an operator become insolvent and not meet its reclamation requirements, the state-required blanket bond is likely not an adequate amount to reclaim each of the operator's 100-plus well sites. Further, the State could not assure the NPS that the bonded funds would be available to reclaim the operator's sites within a System unit. In many states, funds collected from insolvent operators go into a plugging fund, and funds are assigned to oil and gas sites based on a prioritized list established by the State. We are not aware of any state assurance program under which the amount paid to the State would with certainty be available to the NPS. For these reasons, the rule requires the full estimated amount of financial assurance to be provided to the NPS.
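
    The shortfall can be illustrated with simple arithmetic. The sketch below is illustrative only; it assumes the Texas blanket bond figures cited in the response above, spread evenly across the minimum of one hundred wells such a bond may cover, and it is not part of the regulatory text or of the case by case financial assurance calculation itself.

% Illustrative arithmetic only, using the Texas blanket bond figures cited above.
\[
  \frac{\$250{,}000\ \text{(blanket bond)}}{100\ \text{wells}} \;=\; \$2{,}500\ \text{available per well}
\]
% Under this rule, by contrast, the financial assurance for an operation is the full
% estimated reclamation cost for that specific operation (plugging, equipment and
% debris removal, regrading, topsoil replacement, revegetation, exotic species
% control, and reclamation monitoring), calculated on a case by case basis.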

    Well Plugging

    47. Comment: One commenter suggested the NPS shorten the approval period for a shut-in well so that public lands are not left in a degraded condition any longer than necessary.

    NPS Response: Five years is a reasonable amount of time to allow an operator to meet the criteria it needs to obtain authorization to shut in its well. All applicable laws and regulations related to well-bore integrity and testing will still apply during the shut-in period, which will protect park resources and values until the operator obtains the shut-in authorization.

    Public Participation

    48. Comment: One commenter expressed concern about the removal of specific public notice requirements under the proposed rule.

    NPS Response: Sections 9.52(a) and (b) of the 1978 Regulations are removed by this rule because these provisions created an inefficient method of public involvement. Section 9.52(a) of the 1978 Regulations required the Superintendent to publish a notice of access requests in a newspaper of general circulation in the county(s) where the lands were situated, or in publications deemed appropriate by the Superintendent. At that point in the operator's planning process, the scope and methods of the proposed operation were not finalized. Further, after initial scoping and planning, an operator may sometimes abandon its proposal. Notice to the public at such a preliminary stage of the operator's planning was premature for meaningful public engagement.

    Section 9.52(b) of the 1978 Regulations required the Superintendent to publish a notice in the Federal Register advising the public that the plan of operations was available for public review and comment. Under this rule, the NPS will provide the opportunity for public review and comment (on both the complete permit application and draft environmental review documents) in accordance with NEPA and other applicable legal requirements. See § 9.200(a). In general, public notice includes a 30-day public comment period.

    49. Comment: One commenter requested that the NPS issue guidance materials for public review and comment prior to finalizing the rule.

    NPS Response: The NPS will follow its standard procedures for review and issuance of guidance documents. See NPS Management Policies (2006), Introduction (Law, Policy, and Other Guidance), page 5. Because any new guidance documents must be consistent with these regulations, these regulations must be issued first.

    Alaska

    50. Comment: Commenters expressed concerns regarding the conflict between the rule and the access provision found in ANILCA section 1110(b), including the possible imposition of access fees or compensatory mitigation on those interests subject to the ANILCA access provision. Other commenters stated that NPS lacked the authority to regulate such activities on park inholdings under section 103(c) of ANILCA.

    NPS Response: As stated above, the NPS has chosen to limit the rule to System units outside of Alaska. We have also clarified above that the Departmental regulations at 43 CFR part 36 are unaffected by this rule. This addresses or moots the concerns raised in these comments; NPS may address these concerns in a future rulemaking, if appropriate, once the Sturgeon litigation is resolved.

    Changes in the Final Rule

    After taking the public comments into consideration and after additional review, the NPS made the following substantive changes in the final rule as described in the table below. The NPS also made numerous non-substantive changes to the regulatory language and formatting in the final rule. These non-substantive changes are not included in the table below.

    §§ 9.30(a) and (b) Added “. . . within System units outside of Alaska, . . .”
    § 9.31(a) Added “. . . proposes to conduct non-federal oil or gas operations outside of Alaska.”
    § 9.40 Definition of Waste—changed “toxic or hazardous substance” to “contaminating substance.”
  • Definition of Unit—deleted the term “Unit.” The text of this rule uses the statutory term “System unit,” which is found at 54 U.S.C. 100102(6).
  • Definition of Operations—changed to “. . . occurring within a System unit outside of Alaska.”
  • Definition of Operator—changed to “. . . within the boundaries of a System unit outside of Alaska.”
  • Definition of Technologically Feasible Least Damaging Methods—removed “on a case-by-case basis, . . .”
  • Definition of Third Party Monitor—removed “demonstrated to the NPS . . .”
    § 9.63 Removed the 60 day maximum time for reconnaissance survey permits and replaced it with “based upon the scope of the reconnaissance surveys needed.”
    § 9.70 Modified language to clarify when an operations permit is required for operations that access oil and gas rights located inside a System unit from a surface location outside the unit.
    § 9.84(a)(2) Added “wetlands, seepage areas, springs, shallow water aquifers, . . .” to the list of examples of natural features.
    § 9.85(a) Modified language to clarify that the NPS may require an operator to conduct baseline testing.
    § 9.88(j) Added “any proposed stimulation techniques” to the list of completion reporting requirements.
    § 9.89(a) Modified language to clarify what geologic information is required in an operations permit application that proposes well stimulation activities.
    § 9.89(e)(1) Modified language to clarify the stimulation fluid information requirement in an operations permit application.
    § 9.103(a) Modified language to clarify the criteria under which the Regional Director will approve operations permits.
    § 9.103(a)(1) Modified language to clarify the NPS laws that apply to the approval of operations permits.
    § 9.103(b)(3) Changed the approval section to reflect that the Regional Director will review affidavits that the operator submits showing that the operations proposed are in compliance with all applicable federal, state, and local laws and regulations.
    § 9.104(a) Modified language to clarify the timeframe for a Regional Director to take final action on an operations permit application.
    § 9.104(a)(2) Removed “Executive Orders” from the list of requirements with which the Regional Director must ensure consistency to approve an operations permit and changed to read “all applicable legal requirements.”
    § 9.111(a) Section 9.112(a) of the proposed rule moved to § 9.111(a). Section 9.111(a) was modified to clarify the required setbacks from surface water; wetlands; the mean high tide line; or structures or facilities.
    § 9.111(d) Changed to read “confine in a manner appropriate to prevent escape.”
    § 9.111(g) Modified to clarify the operating standard for minimizing the release of air pollutants and hydrocarbons, and flaring of gas.
    § 9.111(i) Inserted new operating standard for the protection of sensitive wildlife.
    § 9.112 Paragraphs changed to reflect movement of § 9.112(a) of the proposed rule to § 9.111(a) of this rule.
    § 9.120(a) Modified to clarify that operators are responsible for ensuring that all employees, contractors, and subcontractors comply with NPS requirements.
    § 9.121(b)(3) Added paragraph (b)(3) to clarify that third party monitors must disclose any potential conflicts of interest to the NPS.
    § 9.130 Added “. . . in any System unit outside of Alaska . . .”
    § 9.150 We added language to this section to provide more clarity on the processes to modify an operations permit.
    § 9.160 We added language to this section to provide more clarity on the processes for an operator to transfer operations.
    § 9.161 We added language to this section to provide more clarity on the processes for a new operator to acquire operations.
    § 9.170(b) Changed from “continuously inactive for a period of 1 year” to “has no measureable production quantities for 12 consecutive months.”
    § 9.200(c) We added reference to § 9.88(j) to clarify that proprietary information submitted pursuant to § 9.88 can be withheld from disclosure.
    § 9.200(g) Modified language to clarify the record retention requirements after completion of hydraulic fracturing operations.

    Compliance With Other Laws, Executive Orders, and Department Policy

    Regulatory Planning and Review (Executive Orders 12866 and 13563)

    Executive Order 12866 provides that the Office of Information and Regulatory Affairs in the Office of Management and Budget will review all significant rules. The Office of Information and Regulatory Affairs has determined that this rule is significant because it may raise novel legal or policy issues arising out of legal mandates, the President's priorities, or the principles set forth in the Executive order.

    Executive Order 13563 reaffirms the principles of Executive Order 12866 while calling for improvements in the nation's regulatory system to promote predictability, to reduce uncertainty, and to use the best, most innovative, and least burdensome tools for achieving regulatory ends. The executive order directs agencies to consider regulatory approaches that reduce burdens and maintain flexibility and freedom of choice for the public where these approaches are relevant, feasible, and consistent with regulatory objectives. Executive Order 13563 emphasizes further that regulations must be based on the best available science and that the rulemaking process must allow for public participation and an open exchange of ideas. We have developed this rule in a manner consistent with these requirements.

    Regulatory Flexibility Act (RFA)

    This rule does not have a significant economic effect on a substantial number of small entities under the RFA (5 U.S.C. 601 et seq.). This certification is based on the cost-benefit and regulatory flexibility analysis found in the report Cost-Benefit and Regulatory Flexibility Analyses: U.S. Department of the Interior, National Park Service for Proposed Revisions to 36 CFR part 9, subpart B, which can be viewed at https://parkplanning.nps.gov/CBA_9B.

    Small Business Regulatory Enforcement Fairness Act (SBREFA)

    This rule is not a major rule under 5 U.S.C. 804(2) of the SBREFA. This rule:

    (a) Does not have an annual effect on the economy of $100 million or more;

    (b) Will not cause a major increase in costs or prices for consumers, individual industries, Federal, state, or local government agencies, or geographic regions; and

    (c) Does not have significant adverse effects on competition, employment, investment, productivity, innovation, or the ability of U.S.-based enterprises to compete with foreign-based enterprises.

    These conclusions are based upon the cost-benefit and regulatory flexibility analysis found in the report entitled Cost-Benefit and Regulatory Flexibility Analyses: U.S. Department of the Interior, National Park Service for Proposed Revisions to 36 CFR part 9, subpart B, which can be viewed at https://parkplanning.nps.gov/CBA_9B.

    Unfunded Mandates Reform Act

    This rule does not impose an unfunded mandate on State, local, or tribal governments or the private sector of more than $100 million per year. The rule does not have a significant or unique effect on State, local, or tribal governments or the private sector. It addresses use of national park lands, and imposes no requirements on other agencies or governments. A statement containing the information required by the UMRA (2 U.S.C. 1531 et seq.) is not required.

    Takings (Executive Order 12630)

    The NPS received public comment that additional regulation of private oil and gas rights on NPS land could infringe on private property rights or could represent a taking. The rule does not take private property or authorize the taking of private property. Moreover, implementation of the rule is not likely to result in a taking of private property.

    The rule updates regulations that have been in effect since 1979. It updates various provisions of the existing regulations in a manner that is consistent with current industry standards and technological capabilities, prevailing industry and investor expectations, and the most recent developments in regulatory and takings law. It authorizes NPS to recover its legitimate permit-processing and monitoring costs and to charge operators for privileged access across federal lands (i.e., access that is not a legal right incident to the mineral estate). Although it may potentially increase the amount of financial assurance that operators must post, it will do so only to a level commensurate with the cost of restoring the federally owned surface estate.

    The rule extends the applicability of these regulations to most currently exempt operations located within park boundaries. During the 36 years that the existing regulations have been in place, however, NPS has never disapproved a submitted plan of operations and no mineral owner or operator has ever filed a claim asserting that implementation of the regulations has resulted in a taking of private property. Moreover, as described above, the rule updates the existing regulations in a manner consistent with current industry standards and technological capabilities. Accordingly, the application of the rule to currently exempt operations is not likely to result in a taking. The rule continues to allow operators reasonable access across federally owned surface to develop non-federal mineral rights. No other private property is affected. The rule brings outdated provisions into line with modern regulatory practice and is a reasonable exercise of the NPS's regulatory authority.

    Finally, the regulatory text will continue to state (as do the existing regulations) that it is not intended to result in a taking. The existing regulations also contain a second provision that expressly applies the lower of the two standards of review in the event of a possible taking. Because this rule contains only one standard of review (in an effort to simplify the rule), such a provision no longer appears appropriate. The NPS has never needed to invoke that second provision, nor has it ever withheld final approval of a plan of operations for which approval was sought. Under the rule, NPS retains discretion to make individual permit decisions that will avoid a taking if an unexpected problem should arise.

    Federalism (Executive Order 13132)

    Under the criteria in section 1 of Executive Order 13132, the rule does not have sufficient federalism implications to warrant the preparation of a Federalism summary impact statement. It addresses use of national park lands, and imposes no requirements on other agencies or governments. A Federalism summary impact statement is not required.

    Civil Justice Reform (Executive Order 12988)

    This rule complies with the requirements of Executive Order 12988. Specifically, this rule:

    (a) Meets the criteria of section 3(a) requiring that all regulations be reviewed to eliminate errors and ambiguity and be written to minimize litigation; and

    (b) Meets the criteria of section 3(b)(2) requiring that all regulations be written in clear language and contain clear legal standards.

    Consultation With Indian Tribes (E.O. 13175 and Department policy) and ANCSA Native Corporations

    The Department of the Interior strives to strengthen its government-to-government relationship with Indian Tribes through a commitment to consultation with Indian Tribes and recognition of their right to self-governance and tribal sovereignty. We evaluated this rule under the Department's consultation policy and under the criteria in Executive Order 13175 and determined that it has no substantial direct effects on federally recognized Indian tribes and that consultation under the Department's tribal consultation policy is not required. Nonetheless, the NPS consulted with all federally recognized tribes traditionally associated with System units that have current oil and gas operations, as well as System units that have no active operations but have potential for future operations. The NPS initially consulted with these tribes during scoping for the DEIS. In response, the NPS received letters from the Choctaw Nation of Oklahoma, the Hopi Tribe, the Navajo Nation, and the San Carlos Apache Tribe of the San Carlos Reservation requesting consultation and review of the DEIS once it became available. The NPS consulted with the same tribes again when the DEIS and proposed rule were released for the 60-day public comment period, and received letters or emails from the Choctaw Nation of Oklahoma, the Pueblo of Santa Ana, and the Pueblo of Santa Clara in response to its second round of consultation letters. These letters are available in the appendix to the FEIS. In recognition of its relationship with tribal affiliates, the NPS Alaska Regional Office reached out directly to Alaska tribes; the NPS received no follow-up comments from the Alaska tribal affiliates.

    Paperwork Reduction Act (44 U.S.C. 3501 et seq.)

    This rule contains information collection requirements that have been approved by the Office of Management and Budget (OMB) under the PRA (44 U.S.C. 3501 et seq.). OMB has reviewed and approved the current information collection requirements associated with non-Federal oil and gas rights in national parks and assigned OMB Control Number 1024-0064, which expires June 30, 2019. OMB has assigned OMB Control Number 1024-0274 (expires XX/XX/2019) for information collection associated with 36 CFR part 9, subpart B, contained in this rule. We plan to transfer the corresponding burden for the subpart B requirements to OMB Control No. 1024-0064 after the final rule goes into effect and will then discontinue the new number. We may not conduct or sponsor and you are not required to respond to a collection of information unless it displays a currently valid OMB control number.

    Title: Non-Federal Oil and Gas Rights, 36 CFR part 9, subpart B.

    OMB Control Number: 1024-0274.

    Service Form Number: None.

    Type of Request: New.

    Description of Respondents: Businesses.

    Respondent's Obligation: Required to obtain or retain a benefit.

    Frequency of Collection: On occasion.

    Activity/requirement: estimated number of annual responses × completion time per response (hours) = estimated total annual burden hours
      • Previously Exempt Operations (§§ 9.50-9.53): 106 × 10 = 1,060
      • Application for Temporary Access Permit (§§ 9.60-9.63): 5 × 15 = 75
      • Extension of Temporary Access Permit: 1 × 1 = 1
      • Accessing Oil and Gas Rights From a Surface Location Outside the Park Boundary—Application for Exemption (§§ 9.70-9.73): 3 × 80 = 240
      • Accessing Oil and Gas Rights From a Surface Location Outside the Park Boundary—Notice of Change (§§ 9.70-9.73): 1 × 2 = 2
    Operations Permit:
      • Operations Permit (New Operations): Application Contents (§§ 9.80-9.90): 5 × 140 = 700
    Operating Standards—Stimulation Operations (§ 9.118(b)):
      • Demonstrate mechanical integrity: 5 × 4 = 20
      • Record treating pressures and all annular pressures: 5 × 4 = 20
      • Notify Superintendent if mechanical integrity is lost: 1 × 1 = 1
      • Report of accident: 2 × 1 = 2
    Operating Standards—Production (§ 9.118(c)):
      • Document maintenance of mechanical integrity: 534 × 2 = 1,068
      • Signage to identify wells: 5 × 4 = 20
    General Terms and Conditions (§§ 9.120-9.122):
      • Affidavit that proposed operations are in compliance with all laws and that information submitted to NPS is accurate: 111 × 1 = 111
      • Third-Party Monitor Report: 60 × 17 = 1,020
      • Notification—Accidents Involving Serious Personal Injuries/Death and Fires/Spills: 2 × 1 = 2
      • Written Report—Accidents Involving Serious Injuries/Deaths and Fires/Spills: 2 × 16 = 32
      • Notification—Discovery of any cultural or scientific resources: 1 × 1 = 1
      • Report—Verify Compliance with Permits: 534 × 4 = 2,136
      • Reporting for Hydraulic Fracturing: 1 × 2 = 2
      • Financial Assurance (§§ 9.140-9.144): 5 × 1 = 5
      • Modification to an Operation (§ 9.150): 1 × 16 = 16
      • Change of Operator (§§ 9.160-9.161): 5 × 8 = 40
      • Well Plugging (§§ 9.170-9.171): 33 × 14 = 462
      • Reconsideration and Appeals (§§ 9.190-9.194): 1 × 16 = 16
      • Public Participation (§ 9.200): 1 × 4 = 4
    Total: 1,430 annual responses; 7,056 total annual burden hours
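
    For readers checking the arithmetic in the burden table above, each activity's estimated total annual burden hours is the estimated number of annual responses multiplied by the completion time per response, and the Total line sums those products. The short Python sketch below illustrates the calculation for a few representative rows copied from the table; the variable names and output formatting are illustrative only and are not part of the rule or of any NPS or OMB system.

    # Illustrative check of the burden arithmetic for a few rows of the table above.
    # Each tuple is (activity, estimated annual responses, completion time per response in hours).
    rows = [
        ("Previously Exempt Operations (9.50-9.53)", 106, 10),
        ("Application for Temporary Access Permit (9.60-9.63)", 5, 15),
        ("Document maintenance of mechanical integrity (9.118(c))", 534, 2),
        ("Third-Party Monitor Report", 60, 17),
        ("Well Plugging (9.170-9.171)", 33, 14),
    ]

    for activity, responses, hours_per_response in rows:
        burden = responses * hours_per_response  # total annual burden hours for this activity
        print(f"{activity}: {responses} x {hours_per_response} = {burden:,} hours")

    # Summing all 25 rows of the table (not just the sample above) yields the
    # published totals of 1,430 annual responses and 7,056 burden hours.
    print("Sample subtotal:", sum(r * h for _, r, h in rows), "hours")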

    Currently, there are oil and gas operations in 12 of the 410 parks in the National Park System, and about 60 percent of those operations are exempt from NPS regulations. This rule applies NPS regulations to operations that are currently exempt and to any future oil and gas operations in the National Park System. We will use the information collected to: (1) Evaluate proposed operations, (2) ensure that all necessary mitigation measures are employed to protect park resources and values, and (3) ensure compliance with all applicable laws and regulations. We will collect information associated with non-Federal oil and gas operations within units of the National Park System under the following sections of 36 CFR part 9, subpart B:

      • Previously Exempt Operations (§§ 9.50 through 9.53)
      • Temporary Access Permits (§§ 9.60 through 9.63)
      • Accessing Oil and Gas Rights from a Surface Location Outside the Park Boundary (§§ 9.70 through 9.73)
      • Operations Permit: Application Contents (§§ 9.80 through 9.90)
      • Operating Standards (§§ 9.110 through 9.118)
      • Financial Assurance (§§ 9.140 through 9.144)
      • Modification to an Operation (§ 9.150)
      • Change of Operator (§§ 9.160 and 9.161)
      • Well Plugging (§§ 9.170 and 9.171)
      • Reconsideration and Appeals (§§ 9.190 through 9.194)
      • Public Participation (§ 9.200)

    During the proposed rule stage, we received one comment addressing the information requested under this rule. The commenter suggested that the NPS collect baseline and historical data on groundwater levels, water quality, aquifer conditions, groundwater discharge, natural features, and aquatic and wildlife habitats that could be used to evaluate potential effects and actual impacts of mineral development on habitats, communities, homeowners, farms, and ranches within and surrounding national parks.

    NPS Response: This rule contains information requirements that will allow the NPS to collect and evaluate the information that the commenter is suggesting. For instance, the rule allows the NPS to request that the operator provide baseline water quality data in its permit application. See § 9.85(a). Further, each permit application will be evaluated under the requirements of the National Environmental Policy Act for impacts to the human environment.

    The public may comment, at any time, on the accuracy of the information collection burden in this rule and may submit any comments to the Information Collection Clearance Officer, National Park Service, 12201 Sunrise Valley Drive (Mail Stop 242), Reston, VA 20192.

    National Environmental Policy Act (NEPA)

    This rule constitutes a major Federal action with the potential to significantly affect the quality of the human environment. We have prepared the FEIS under the requirements of NEPA. On October 20, 2016, the Director of the National Park Service signed the Record of Decision identifying Alternative B in the FEIS as the selected action. The FEIS and ROD are available online at https://parkplanning.nps.gov/FEIS9B and https://parkplanning.nps.gov/ROD_9B.

    Effects on the Energy Supply (Executive Order 13211)

    This rule is not a significant energy action under the definition in Executive Order 13211. A statement of Energy Effects is not required.

    Drafting Information

    This rule reflects the collective efforts of NPS staff in the Geologic Resources Division, parks, and field offices, with assistance from the Regulations, Jurisdiction, and Special Park Uses Division.

    List of Subjects
    36 CFR Part 1

    National parks, Penalties, Reporting and recordkeeping requirements.

    36 CFR Part 9

    National parks, Oil and gas exploration, Reporting and recordkeeping requirements.

    In consideration of the foregoing, the National Park Service amends 36 CFR parts 1 and 9 as follows:

    PART 1—GENERAL PROVISIONS
    1. The authority citation for part 1 continues to read as follows:
    Authority:

    54 U.S.C. 100101, 100751, 320102.

    2. Revise § 1.3 to read as follows:
    § 1.3 Penalties.

    (a) A person convicted of violating a provision of the regulations contained in parts 1 through 7, part 9 subpart B, and parts 12 and 13 of this chapter, within a park area not covered in paragraph (b) or (c) of this section, shall be punished by a fine as provided by law, or by imprisonment not exceeding 6 months, or both, and shall be adjudged to pay all costs of the proceedings.

    (b) A person who knowingly and willfully violates any provision of the regulations contained in parts 1 through 5, 7, part 9 subpart B, and part 12 of this chapter, within any national military park, battlefield site, national monument, or miscellaneous memorial transferred to the jurisdiction of the Secretary of the Interior from that of the Secretary of War by Executive Order No. 6166, June 10, 1933, and enumerated in Executive Order No. 6228, July 28, 1933, shall be punished by a fine as provided by law, or by imprisonment for not more than 3 months, or by both.

    Note to paragraph (b):

    These park areas are enumerated in a note under 5 U.S.C. 901.

    (c) A person convicted of violating any provision of the regulations contained in parts 1 through 7 and part 9 subpart B of this chapter, within a park area established pursuant to the Act of August 21, 1935, 49 Stat. 666, shall be punished by a fine as provided by law and shall be adjudged to pay all costs of the proceedings. 54 U.S.C. 320105.

    (d) Notwithstanding the provisions of paragraphs (a), (b), and (c) of this section, a person convicted of violating § 2.23 of this chapter shall be punished by a fine as provided by law. 16 U.S.C. 6811.

    PART 9—MINERALS MANAGEMENT
    Subpart D—Alaska Mineral Resource Assessment Program
    3. The authority citation for part 9, subpart D, is revised to read as follows:
    Authority:

    16 U.S.C. 410hh; 16 U.S.C. 3101, et seq.; 16 U.S.C. 347; 16 U.S.C. 410bb; 16 U.S.C. 1131 et seq.; 54 U.S.C. 320301; 54 U.S.C. 100101, et seq.

    Subpart D—[Redesignated as Subpart C]
    4. Redesignate subpart D, consisting of §§ 9.80 through 9.89, as subpart C, consisting of §§ 9.300 through 9.309.
    5. Revise subpart B to read as follows:
    Subpart B—Non-Federal Oil and Gas Rights
    Sec.
    Purpose and Scope
    9.30 What is the purpose and scope of this subpart?
    9.31 When does this subpart apply to me?
    9.32 What authorization do I need to conduct operations?
    9.33 If I am already operating under an NPS authorization, what do I need to do?
    Definitions
    9.40 What do the terms used in this subpart mean?
    Previously Exempt Operations
    9.50 Do I need an operations permit for my previously exempt operations?
    9.51 How do I apply for my operations permit?
    9.52 What will the NPS do with my application?
    9.53 May I continue to operate while the NPS reviews my application?
    Temporary Access Permits
    9.60 When do I need a temporary access permit?
    9.61 How do I apply for a temporary access permit?
    9.62 When will the NPS grant a temporary access permit?
    9.63 How long will I have to conduct my reconnaissance surveys?
    Accessing Oil and Gas Rights From a Surface Location Outside the System Unit Boundary
    9.70 Do I need an operations permit for accessing oil and gas rights from outside the System unit boundary?
    9.71 What information must I submit to the NPS?
    9.72 How will the NPS act on my submission?
    9.73 If I don't need an operations permit, are there still requirements that I must meet?
    Operations Permit: Application Contents
    9.80 Who must apply for an operations permit?
    9.81 May I use previously submitted information?
    9.82 What must I include in my application?
    9.83 What information must be included in all applications?
    9.84 Existing conditions and proposed area of operations.
    9.85 Environmental conditions and mitigation actions.
    9.86 Spill control and emergency preparedness plan.
    9.87 What additional information must be included if I am proposing geophysical exploration?
    9.88 What additional information must be included if I am proposing drilling operations?
    9.89 What additional information must be included if I am proposing well stimulation operations, including hydraulic fracturing?
    9.90 What additional information must be included if I am proposing production operations?
    Operations Permit: Application Review Process
    9.100 How will NPS process my application?
    9.101 How will the NPS conduct initial review?
    9.102 How will the NPS conduct formal review?
    9.103 What standards must be met to approve my operations permit?
    9.104 What final actions may the Regional Director take on my operations permit?
    9.105 What is the approval process for operations in Big Cypress National Preserve?
    Operating Standards
    9.110 What are the purposes and functions of NPS operating standards?
    9.111 What general facility design and management standards must I meet?
    9.112 What hydrologic standards must I meet?
    9.113 What safety standards must I meet?
    9.114 What lighting and visual standards must I meet?
    9.115 What noise reduction standards must I meet?
    9.116 What reclamation and protection standards must I meet?
    9.117 What additional operating standards apply to geophysical operations?
    9.118 What additional operating standards apply to drilling, stimulation, and production operations?
    General Terms and Conditions
    9.120 What terms and conditions apply to all operators?
    9.121 What monitoring and reporting is required for all operators?
    9.122 What additional reports must I submit if my operation includes hydraulic fracturing?
    Access to Oil and Gas Rights
    9.130 May I cross Federal property to reach the boundary of my oil and gas right?
    9.131 Will the NPS charge me a fee for access?
    9.132 Will I be charged a fee for emergency access to my operations?
    Financial Assurance
    9.140 Do I have to provide financial assurance to the NPS?
    9.141 How does the NPS establish the amount of financial assurance?
    9.142 Will the NPS adjust my financial assurance?
    9.143 When will the NPS release my financial assurance?
    9.144 Under what circumstances will the NPS retain my financial assurance?
    Modification to an Operation
    9.150 How can an approved permit be modified?
    Change of Operator
    9.160 What are my responsibilities if I transfer my operations?
    9.161 What must I do if operations are transferred to me?
    Well Plugging
    9.170 When must I plug my well?
    9.171 Can I get an extension to the well plugging requirement?
    Prohibitions and Penalties
    9.180 What acts are prohibited under this subpart?
    9.181 What enforcement actions can the NPS take?
    9.182 How do violations affect my ability to obtain a permit?
    Reconsideration and Appeals
    9.190 Can I, as operator, request reconsideration of NPS decisions?
    9.191 How does the NPS process my request for reconsideration?
    9.192 Can I appeal the Regional Director's decision?
    9.193 Will filing a request for reconsideration or appeal stop the NPS from taking action under this subpart?
    9.194 What if the original decision was made by the Superintendent?
    Public Participation
    9.200 How can the public participate in the approval process?
    Information Collection
    9.210 Has the Office of Management and Budget approved the information collection requirements?
    Subpart B—Non-Federal Oil and Gas Rights
    Authority:

    16 U.S.C. 230a(a)(4), 459d-3, 460cc-2(i), 460ee(c)(4), 698c(b)(2), 698i(b)(2), and 698m-4; 18 U.S.C. 3571 and 3581; 31 U.S.C. 9701; 54 U.S.C. 100101, 100751, and 103104.

    Purpose and Scope
    § 9.30 What is the purpose and scope of this subpart?

    (a) The purpose of this subpart is to ensure that operators exercising non-federal oil and gas rights within a System unit outside of Alaska use technologically feasible, least damaging methods to:

    (1) Protect federally owned or administered lands, waters, or resources of System units;

    (2) Protect NPS visitor uses or experiences, or visitor or employee health and safety; and

    (3) Protect park resources and values under the statute commonly known as the NPS Organic Act.

    (b) This subpart applies to all operators conducting non-federal oil or gas operations on lands or waters within System units outside of Alaska, regardless of the ownership or legislative jurisdiction status of those lands or waters.

    (c) We do not intend for this subpart to result in a taking of a property interest. Application of this subpart is intended to reasonably regulate operations within System units that may affect federally owned or administered lands, waters, and resources, visitor uses and experiences, and visitor and employee health and safety.

    § 9.31 When does this subpart apply to me?

    (a) This subpart applies to you if you are an operator who conducts or proposes to conduct non-federal oil or gas operations within a System unit outside of Alaska.

    (b) If you were operating outside of a System unit and your operation has been included within an existing System unit as a result of a change to the boundary, or included within a newly established System unit, you are subject to §§ 9.50 through 9.53.

    (c) If you were operating under an exemption because your operation accessed oil and gas rights inside the System unit boundary from a surface location outside the boundary, and your surface location has been included within an existing System unit as a result of a change to the boundary, or included within a newly established System unit, you are subject to §§ 9.50 through 9.53.

    § 9.32 What authorization do I need to conduct operations?

    (a) Except as provided in §§ 9.70 through 9.73, you must obtain a temporary access permit under §§ 9.60 through 9.63 or an operations permit under §§ 9.80 through 9.90 before conducting operations.

    (b) You must demonstrate that you have the right to operate in order to conduct activities within a System unit.

    § 9.33 If I am already operating under an NPS authorization, what do I need to do?

    (a) If you already have an NPS-approved plan of operations, you may continue to operate according to the terms and conditions of that approval, subject to the provisions of this subpart. For purposes of this subpart, we consider your approved plan of operations to be either a temporary access permit or an operations permit.

    (b) This section applies to you if we have granted you an exemption to the plan of operations requirement because your operation accesses oil and gas rights inside a System unit boundary from a surface location outside the boundary. You may continue to operate under the exemption provided that your operations comply with the general terms and conditions of §§ 9.120 through 9.122. You are also subject to the prohibitions and penalties in §§ 9.180 through 9.182.

    Definitions
    § 9.40 What do the terms used in this subpart mean?

    In addition to the definitions in 36 CFR 1.4, the following definitions apply to this subpart:

    Area of operations means lands or waters within a System unit on which your operations are approved to be carried out, including roads or other areas where you are authorized to exercise the oil and gas rights.

    Contaminating substance means any toxic or hazardous substance which is used in or results from the conduct of operations and is listed under the Clean Water Act at 40 CFR part 116, the Resource Conservation and Recovery Act at 40 CFR part 261, or the Hazardous Materials Transportation Act at 49 CFR part 172. This includes, but is not limited to, explosives, radioactive materials, brine waters, formation waters, petroleum products, petroleum by-products, and chemical compounds used for drilling, production, processing, well testing, well completion, and well servicing.

    Gas means any fluid, either combustible or noncombustible, which is produced in a natural state from the earth and which maintains a gaseous or rarefied state at ordinary temperature and pressure conditions.

    Oil means any viscous combustible liquid hydrocarbon or solid hydrocarbon substance easily liquefiable on warming that occurs naturally in the earth, including drip gasoline or other natural condensates recovered from gas without resort to manufacturing process.

    Operations means all existing and proposed functions, work, and activities in connection with the exercise of oil or gas rights not owned by the United States and located or occurring within a System unit outside of Alaska.

    (1) Operations include, but are not limited to: Access by any means to or from an area of operations; construction; geological and geophysical exploration; drilling, well servicing, workover, or recompletion; production; gathering (including installation and maintenance of flowlines and gathering lines); storage, transport, or processing of petroleum products; earth moving; excavation; hauling; disposal; surveillance, inspection, monitoring, or maintenance of wells, facilities, and equipment; reclamation; road and pad building or improvement; shot hole and well plugging and abandonment, and reclamation; and all other activities incident to any of the foregoing.

    (2) Operations do not include reconnaissance surveys as defined in this subpart or oil and gas pipelines that are located within the System unit under authority of a deeded or other right-of-way.

    Operations permit means an NPS special use permit authorizing an operator to conduct operations in a System unit.

    Operator means any person or entity, agent, assignee, designee, lessee, or representative thereof who is conducting operations or proposing to exercise non-federal oil and gas rights within the boundaries of a System unit outside of Alaska.

    Owner means the person that holds title to non-federal oil or gas rights.

    Previously exempt operations means those operations being conducted in a System unit without an approved permit from the NPS as of December 5, 2016, except operations for which the NPS had granted the operator an exemption to the plan of operations requirement before such date, because the operator accessed oil and gas rights inside the System unit from a surface location outside the System unit.

    Reconnaissance survey means an inspection or survey conducted by qualified specialists for the purpose of preparing a permit application.

    (1) A reconnaissance survey includes identification of the area of operations and collection of natural and cultural resource information within and adjacent to the proposed area of operations.

    (2) Except for the minimal surface disturbance necessary to perform cultural resource surveys, natural resource surveys, and location surveys required under this subpart, surface disturbance activities are beyond the scope of a reconnaissance survey.

    Right to operate means a deed, lease, memorandum of lease, designation of operator, assignment of right, or other documentation demonstrating that you hold a legal right to conduct the operations you are proposing within a System unit.

    Technologically feasible, least damaging methods are those that we determine to be most protective of park resources and values while ensuring human health and safety, taking into consideration all relevant factors, including environmental, economic, and technological factors and the requirements of applicable law.

    Temporary access permit means an NPS special use permit authorizing an operator to access the proposed area of operations to conduct the reconnaissance surveys needed to collect basic information for preparing an operations permit application.

    Third-party monitor means a qualified specialist who is not an employee, agent, or representative of the operator and who has the relevant expertise to monitor operations for compliance with applicable laws, regulations, and permit requirements.

    Usable water means an aquifer or its portion that:

    (1)(i) Supplies any public water system; or

    (ii) Contains a sufficient quantity of ground water to supply a public water system and either:

    (A) Currently supplies drinking water for human consumption; or

    (B) Contains less than 10,000 mg/l total dissolved solids; and

    (2) Is not an exempted aquifer under state law.
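
    Purely as a reading aid for the nested “or/and” structure of the usable water definition above, and not as part of the regulatory text, the logic can be sketched as a boolean predicate in Python. The function and parameter names below are hypothetical; whether an aquifer satisfies each element would be established from state designations and site-specific water-quality data.

    def is_usable_water(supplies_public_water_system: bool,
                        could_supply_public_water_system: bool,
                        currently_supplies_drinking_water: bool,
                        total_dissolved_solids_mg_per_l: float,
                        is_exempted_aquifer_under_state_law: bool) -> bool:
        # Paragraph (1): the aquifer (i) supplies a public water system, or
        # (ii) could supply one and either currently supplies drinking water
        # or contains less than 10,000 mg/l total dissolved solids.
        paragraph_1 = supplies_public_water_system or (
            could_supply_public_water_system
            and (currently_supplies_drinking_water
                 or total_dissolved_solids_mg_per_l < 10_000)
        )
        # Paragraph (2): the aquifer is not an exempted aquifer under state law.
        paragraph_2 = not is_exempted_aquifer_under_state_law
        return paragraph_1 and paragraph_2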

    Waste means any material that is discarded. It includes, but is not limited to: drilling fluids and cuttings; produced fluids not under regulation as a contaminating substance; human waste; garbage; fuel drums; pipes; oil; contaminated soil; synthetic materials; man-made structures or equipment; or native and nonnative materials.

    We and us mean the National Park Service.

    You and I mean the operator, unless otherwise specified or indicated by the context.

    Previously Exempt Operations
    § 9.50 Do I need an operations permit for my previously exempt operations?

    Yes. You must obtain an NPS operations permit.

    § 9.51 How do I apply for my operations permit?

    Within 90 days after December 5, 2016 or within 90 days after the effective date of a boundary change, or establishment of a new System unit, as applicable, you must submit the following to the Superintendent of the System unit in which you propose to continue to conduct operations:

    (a) The names and contact information of the operator, the owner, and the individuals responsible for overall management, field supervision, and emergency response of the proposed operations;

    (b) Documentation demonstrating that you hold a right, and the extent of such right, to operate within the System unit;

    (c) A brief description of the current operations and any anticipated changes to the current operations;

    (d) The American Petroleum Institute (API) well number or State well-identification permit number;

    (e) Maps to scale that clearly delineate your current area of operations as of December 5, 2016 or the effective date of a boundary change, or establishment of a new System unit, as applicable, and that identify the area of surface disturbance and equipment layout within your proposed area of operations;

    (f) The results of any reconnaissance surveys you have conducted, to be used by the Superintendent to identify resource protection measures in your operations permit;

    (g) A spill control and emergency preparedness plan as required by § 9.86;

    (h) Documentation of the current operating methods, surface equipment, downhole well construction and completion, materials produced or used, and monitoring methods;

    (i) A description of how your proposed operation will meet each applicable operating standard at §§ 9.110 through 9.116 and 9.118; and

    (j) A description of the procedures to be used and cost estimates for well plugging and surface reclamation.

    § 9.52 What will the NPS do with my application?

    The NPS will review your application and take action under §§ 9.100 through 9.104.

    § 9.53 May I continue to operate while the NPS reviews my application?

    During this interim period, you may continue to conduct operations subject to the following conditions:

    (a) Continuation of operations is limited to those methods and the area of disturbance that existed on December 5, 2016 or the effective date of a boundary change, or establishment of a new System unit, as applicable.

    (b) Your operation is subject to the general terms and conditions in §§ 9.120 through 9.122 and the prohibitions and penalties in §§ 9.180 through 9.182.

    (c) Except in an emergency, we will not take any steps to directly regulate your operation before 90 days after December 5, 2016 or 90 days after the effective date of a boundary change, or establishment of a new System unit, as applicable.

    Temporary Access Permits
    § 9.60 When do I need a temporary access permit?

    (a) You must apply to the Regional Director for a temporary access permit to access your proposed area of operations that is on NPS administered lands or waters in order to conduct reconnaissance surveys. This permit will describe the means, routes, timing, and other terms and conditions of your access as determined by the Regional Director.

    (b) A temporary access permit is subject to cost recovery under 54 U.S.C. 103104.

    § 9.61 How do I apply for a temporary access permit?

    To apply for a temporary access permit, you must submit the following information to the Superintendent of the System unit in which you propose to conduct operations:

    (a) Documentation demonstrating that you hold a right, and the extent of such right, to operate within the System unit;

    (b) A map delineating the proposed reconnaissance survey areas in relation to the System unit boundary and the proposed area of operations at a minimum scale of 1:24,000, or a scale specified by the Superintendent as acceptable;

    (c) A brief description of the intended operation so that we can determine the scope of the reconnaissance surveys needed;

    (d) The name and contact information of the operator, employee, agent, or contractor responsible for overall management of the proposed reconnaissance surveys;

    (e) The name, legal address, telephone number, and qualifications of all specialists responsible for conducting the reconnaissance surveys;

    (f) A description of proposed means of access and routes proposed for conducting the reconnaissance surveys; and

    (g) A description of the survey methods you intend to use to identify the natural and cultural resources.

    § 9.62 When will the NPS grant a temporary access permit?

    If the Regional Director determines that your proposed reconnaissance surveys will not result in surface disturbance, except for minimal disturbance necessary to perform required surveys, the Regional Director will issue you a temporary access permit within 30 days after receipt of a complete application, unless the Regional Director notifies you that additional time is necessary to evaluate or process your application.

    § 9.63 How long will I have to conduct my reconnaissance surveys?

    The duration of your temporary access permit will be stated in the permit, based upon the scope of the reconnaissance surveys needed. The Regional Director may, upon written request, extend the term of the temporary access permit.

    Accessing Oil and Gas Rights From a Surface Location Outside the System Unit Boundary
    § 9.70 Do I need an operations permit for accessing oil and gas rights from outside the System unit boundary?

    Your downhole operations inside a System unit are subject to these regulations. If you wish to access your oil and gas rights located inside a System unit from a surface location outside the unit, you must submit the information required by § 9.71. We will evaluate this information and may request that you apply for an operations permit. We will require an operations permit for such operations only if we determine that downhole permit requirements are needed to protect against a significant threat of damage to:

    (a) Federally owned or administered lands, waters, or resources within System units;

    (b) NPS visitor uses or experiences; or

    (c) Visitor or employee health or safety.

    § 9.71 What information must I submit to the NPS?

    You must provide all of the following information to the Superintendent of the System unit:

    (a) The names and contact information of:

    (1) The operator;

    (2) The owner; and

    (3) The individuals responsible for overall management, field supervision, and emergency response of the proposed operations.

    (b) Documentation demonstrating that you hold a right, and the extent of such right, to operate within the System unit.

    (c) Maps and plats to scale showing the boundaries of each of the oil or gas rights that are relevant to your proposed operations within the System unit boundary.

    (d) Maps and plats to scale showing all proposed surface uses (well site, access route, flowlines, production facilities) that occur outside the System unit.

    (e) Information regarding downhole operations and conditions, including:

    (1) Description, including depths, thicknesses, and properties of geologic horizons between the target zone and the base of the deepest aquifer;

    (2) Drilling plan, including directional-drilling program, horizontal distance along the wellbore's path from well's surface location to the System unit boundary, depth at which wellbore crosses the boundary, and timeline for operations;

    (3) Casing, cementing, and mud programs;

    (4) Stimulation programs; and

    (5) Well plugging and abandonment program.

    (f) If you propose hydraulic fracturing, then you must also provide the information required by § 9.89.

    § 9.72 How will the NPS act on my submission?

    (a) Within 30 days after receiving your submission under § 9.71, the Superintendent will notify you in writing that your information is complete, you need to submit more information, or we need more time to review your submission.

    (b) After NPS receives your complete submission, and completes compliance with applicable federal laws, including the National Environmental Policy Act, the Superintendent will notify you in writing within 30 days that either:

    (1) No further action is required by the NPS and you are exempt from the operations permit requirement; or

    (2) You must obtain an operations permit.

    (c) If you need an operations permit, the information provided under § 9.71 is your permit application and the NPS will review your application under §§ 9.100 through 9.104.

    § 9.73 If I don't need an operations permit, are there still requirements that I must meet?

    If the NPS notifies you under § 9.72 that you do not need an operations permit, your operations are still subject to the general terms and conditions in §§ 9.120 through 9.122, the prohibitions and penalties in §§ 9.180 through 9.182, and the requirements in this section.

    (a) You must notify the NPS within 30 days if the methods or the environmental conditions of your downhole operations materially change.

    (b) The Regional Director may notify you in writing that you are no longer exempt from the operations permit requirement after determining that downhole operational requirements are needed to protect against a significant threat of damage to any of the following:

    (1) Federally owned or administered lands, waters, or resources of System units;

    (2) NPS visitor uses or experiences; or

    (3) Visitor or employee health or safety.

    (c) Within 30 days after receiving this notification, you must file your operations permit application with the Superintendent.

    Operations Permit: Application Contents
    § 9.80 Who must apply for an operations permit?

    (a) Except as otherwise provided in §§ 9.70 through 9.73, an operator proposing to conduct operations within the boundary of a System unit must submit an application for an operations permit to the Superintendent.

    (b) An operations permit is subject to cost recovery under 54 U.S.C. 103104.

    § 9.81 May I use previously submitted information?

    (a) In satisfying the requirements of §§ 9.82 through 9.90, you do not need to resubmit information that is already on file with the NPS. Instead, you may reference the previously submitted information in your permit application.

    (b) You may submit documents and materials containing the information required by §§ 9.82 through 9.90 that you submit to other Federal and State agencies. If you do this, you must clearly identify the information required by §§ 9.82 through 9.90.

    § 9.82 What must I include in my application?

    (a) Your application for an operations permit must include all of the information required by § 9.83 and, to the extent applicable, the information required by §§ 9.87 through 9.90, as well as any additional information that the Superintendent may require by written request.

    (b) You may provide information for only the phase of operations you propose. Each permit application is only required to describe those activities for which you request approval. Approval of an operations permit covering one phase of operations does not assure future approval of, or the terms of future approval for, an operations permit covering a subsequent phase.

    § 9.83 What information must be included in all applications?

    All applications must include the information required by this section.

    All operations permit applications must include information on the items listed below and must include the corresponding detailed information:
    (a) Ownership: documentation demonstrating that you hold a right, and the extent of such right, to operate within the System unit.
    (b) The owner/operator: names, addresses, and other contact information for:
    (1) The operator;
    (2) The owner; and
    (3) Any agents, assignees, designees, contractors, or other representatives of the operator, including those responsible for overall management, field supervision, and emergency response of the proposed operations.
    (c) Existing conditions and proposed area of operations: all the information required by § 9.84.
    (d) Reclamation plan:
    (1) A description of the equipment and methods used to meet the operating standards for reclamation at § 9.116; and
    (2) A breakdown of the estimated costs that a third party would charge to complete reclamation as proposed in your reclamation plan.
    (e) Use of water:
    (1) The source (including documentation verifying a water right), quantity, access route, and transportation/conveyance method for all water to be used in access road and pad construction, well drilling, stimulation, and production; and
    (2) Estimates of any anticipated waste water volumes generated and how they will be managed (i.e., handled, temporarily stored, disposed of, recycled, or reused) throughout all stages of the operation.
    (f) Environmental conditions and mitigation actions: all the information required by § 9.85.
    (g) The spill control and emergency preparedness plan: all the information required by § 9.86.
    § 9.84 Existing conditions and proposed area of operations.

    (a) You must submit to-scale maps that clearly depict:

    (1) The boundaries of your oil or gas rights in relation to your proposed operations and the relevant System unit boundary;

    (2) The natural features, including, but not limited to, streams, lakes, ponds, wetlands, seepage areas, springs, shallow water aquifers, topographic relief, and areas we have indicated to you as environmentally sensitive;

    (3) The locations of existing roads, trails, railroad tracks, pads, and other disturbed areas; and

    (4) The locations of existing structures that your operations could affect, including but not limited to: Buildings, pipelines, existing or permitted oil or gas wells, freshwater wells, underground and overhead electrical lines, and other utility lines.

    (b) You must submit the following information about geologic conditions in their natural state and under the proposed operating conditions:

    (1) Estimated depths and names of known zones of usable water, brine, hydrocarbon, geothermal, or other mineral-bearing zones based on the best available information;

    (2) Potential hazards to persons and the environment such as known abnormal pressure zones, lost circulation zones, hydrogen sulfide gas, or karst formations; and

    (3) Nature, extent, and depth (if known) of near-surface bedrock fracturing or jointing relative to proposed cemented surface casing-seat depth and any open annular interval proposed in the well design.

    (c) You must submit the following information for any new surface disturbances or construction:

    (1) Maps depicting the proposed area of operations, boundaries of new surface disturbances and proposed access routes;

    (2) Maps depicting the proposed location of all support facilities, including those for transportation (e.g., vehicle parking areas, airstrips, helicopter pads), sanitation, occupation, staging areas, fuel dumps, refueling areas, loading docks, water supplies, and disposal facilities;

    (3) The methods and diagrams, including cross-sections, of any proposed pad construction, road construction, cut-and-fill areas, and surface maintenance, including erosion control;

    (4) The number and types of equipment and vehicles, including an estimate of vehicular trips, associated with each phase of your operation;

    (5) An estimated time to complete each phase of the proposed operations, including any operational timing constraints;

    (6) The type and extent of security measures proposed within your area of operations;

    (7) The power sources and their transmission systems for the proposed operations; and

    (8) The types and quantities of all solid and liquid waste generation and the proposed methods of storage, handling, and off-site disposal.

    § 9.85 Environmental conditions and mitigation actions.

    You must submit the following information about environmental conditions and mitigation actions:

    (a) Description of the natural and cultural resource conditions from your reconnaissance surveys or other sources collected for your proposed area of operations. The Superintendent may require, on a case-by-case basis, baseline field testing of soils and field or laboratory testing of surface or near-surface waters within your area of operations, as well as of any groundwater resources that may reasonably be impacted by your surface operations;

    (b) Description of the steps you propose to take to mitigate any adverse environmental impacts on park resources and values, including but not limited to, the System unit's land features, land uses, fish and wildlife, vegetation, soils, surface and subsurface water resources, air quality, noise, lightscapes, viewsheds, cultural resources, and economic environment; and

    (c) Discussion of:

    (1) Any anticipated impacts that you cannot mitigate; and

    (2) All alternative technologically feasible, least damaging methods of operations, their costs, and their environmental effects.

    § 9.86 Spill control and emergency preparedness plan.

    You must submit the following information about your spill control and emergency preparedness plan. You may use a spill prevention control and countermeasure (SPCC) plan prepared under 40 CFR part 112 if the plan includes all of the information required by this section. You must submit:

    (a) A list of names, addresses, and telephone numbers of persons that the Superintendent can contact in the event of a spill, fire, or accident, including the order in which the persons should be contacted;

    (b) Your reporting procedures in the event of a spill, fire, or accident;

    (c) Identification of contaminating or toxic substances expected to be used within your area of operations;

    (d) Identification of abnormal pressure, temperature, toxic gases or substances, or other hazardous conditions expected to be encountered during operations;

    (e) Measures (e.g., procedures, facility design, equipment) to minimize risks to human health and safety and the environment;

    (f) Steps to prevent conditions creating fire hazards in the vicinity of well locations and lease tanks;

    (g) List of equipment and methods for containment and cleanup of contaminating substances, including a list of the equipment to be maintained on site as well as a list of equipment to be available from local contractors;

    (h) A storm water drainage plan and actions intended to mitigate storm water runoff;

    (i) Safety data sheets for each material expected to be used or encountered during operations, including quantities expected to be maintained at your area of operations;

    (j) A description of the emergency actions you will take in the event of accidents causing human injury; and

    (k) Contingency plans for relevant conditions and emergencies other than spills, based on the particular geographic area, such as hurricanes, flooding, tornadoes, or earthquakes.

    § 9.87 What additional information must be included if I am proposing geophysical exploration?

    If you propose to conduct geophysical exploration, you must submit the following additional information:

    (a) The number of crews and expected numbers of workers in each crew;

    (b) Names and depths of geologic zones targeted for imaging;

    (c) A description of the acquisition methods, including the procedures, specific equipment you will use, and energy sources (e.g., explosives or vibroseis trucks);

    (d) The methods of access along each survey line for personnel, materials, and equipment;

    (e) A list of all explosives, blasting equipment, chemicals, and fuels you will use in the proposed operations, including a description of proposed disposal methods, transportation methods, safety measures, and storage facilities; and

    (f) A map showing the positions of each survey line, including all source and receiver locations as determined by a locational survey, and including shotpoint offset distances from wells, buildings, other infrastructure, and areas the NPS has indicated to you as environmentally sensitive.

    § 9.88 What additional information must be included if I am proposing drilling operations?

    If you are proposing to drill a well, you must submit the following additional information:

    (a) Well-pad construction plans, including dimensions and cross sections of: cut and fill areas and excavations for ditches, sumps, and spill control equipment or structures, including lined areas;

    (b) Drill-rig and equipment layout plans, including rig components, fuel tanks, testing equipment, support facilities, storage areas, and all other well-site equipment and facilities;

    (c) The drilling program, including hole size for each section and the directional program, if applicable;

    (d) Proposed drilling depth and the estimated depths and names of usable water, brine, hydrocarbon, geothermal, or other mineral-bearing zones;

    (e) The type and characteristics of the proposed mud systems;

    (f) The casing program, including the size, grade, weight, and setting depth of each string;

    (g) The cementing program, including downhole location of any stage equipment, cement types, volumes, and additives to be used, and a description of the pressure tests and cement verification techniques that will be run to evaluate cement placement and integrity;

    (h) The minimum specifications for pressure control equipment, function, and pressure testing frequency, and the blowout preventer stack arrangement;

    (i) The proposed logging, coring, and testing programs;

    (j) The completion program, including completion type (open-hole, perforated, slotted liner, etc.), any proposed stimulation techniques, and procedures, including considerations for well control; and

    (k) A description of the equipment, materials, and procedures for well plugging, including plug depths, plug types, and minimum mud weight.

    § 9.89 What additional information must be included if I am proposing well stimulation operations, including hydraulic fracturing?

    If you are proposing well stimulation operations, including hydraulic fracturing, you must submit the following additional information:

    (a) The geologic names, a geologic description, and the estimated depths (measured and true vertical) to the top and bottom of the target formation(s); the estimated minimum vertical distance between the top of the completion zone and the nearest usable water zone; and the measured depth of the proposed perforated or open-hole interval.

    (b) The estimated depths (measured and true vertical) to the top and bottom of the confining zone(s). Include a map showing the location, orientation, and extent of any known or suspected faults or fractures within one-half mile (horizontal distance) of the wellbore trajectory that may transect the confining zone(s).

    (c) A map showing all existing wellbore trajectories, regardless of type, within one-half mile (horizontal distance) of any portion of the wellbore into which hydraulic fracturing fluids are to be injected. The true vertical depth of each wellbore identified on the map must be indicated.

    (d) Steps to be taken before well completions to verify mechanical integrity of all downhole tubulars and tools and cement quality, including pressure tests, monitoring of cement returns to surface, and cement evaluation logs (or other logs acceptable to the Superintendent) demonstrating that the occurrences of usable water zones have been isolated to protect them from contamination.

    (e) A detailed description of the proposed well-stimulation design, including:

    (1) The total proposed volume of stimulation fluid to be used; the total proposed base fluid volume; a description of the proposed base fluid; and, for each additive in the proposed stimulation fluid, the trade name, supplier, purpose, ingredients, Chemical Abstract Service (CAS) number, maximum ingredient concentration in the additive (percent by mass), and maximum ingredient concentration in the hydraulic fracturing fluid (percent by mass);

    (2) Proposed proppant system if applicable;

    (3) The anticipated surface treating pressure range;

    (4) The maximum anticipated surface pressure that will be applied during the hydraulic fracturing process;

    (5) The trajectory of the wellbore into which hydraulic fracturing fluids are to be injected and the estimated direction and length of the fractures that will be propagated and a notation indicating the true vertical depth of the top and bottom of the fractures; and

    (6) Any microseismic monitoring planned or proposed in conjunction with well stimulation.

    (f) The source and location of the water supply, such as reused or recycled water, rivers, creeks, springs, lakes, ponds, and water supply wells.

    (g) The storage, mixing, pumping, and control equipment needed to perform the stimulation.

    (h) The following information concerning the handling of recovered fluids:

    (1) The estimated volume of stimulation fluids to be recovered during flow back;

    (2) The proposed methods of handling the recovered fluids including any onsite treatment for re-use of fluids in other stimulation activities; and

    (3) The proposed disposal method of the recovered fluids, including, but not limited to, injection, hauling by truck, or transporting by pipeline.
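
    To illustrate the shape of the additive disclosure called for in paragraph (e)(1) of this section, the Python sketch below shows one way an applicant might tabulate that information before transcribing it into a permit application. The class, field names, and example values are hypothetical; the NPS does not prescribe any particular electronic format.

    from dataclasses import dataclass

    @dataclass
    class StimulationFluidAdditive:
        # Fields mirror the items listed in paragraph (e)(1); names are illustrative only.
        trade_name: str
        supplier: str
        purpose: str
        ingredients: list
        cas_number: str                            # Chemical Abstract Service (CAS) number
        max_concentration_in_additive_pct: float   # percent by mass, in the additive
        max_concentration_in_fluid_pct: float      # percent by mass, in the hydraulic fracturing fluid

    # Hypothetical example entry; the values below are placeholders, not real product data.
    example_additive = StimulationFluidAdditive(
        trade_name="Example Friction Reducer",
        supplier="Example Supplier, Inc.",
        purpose="Friction reducer",
        ingredients=["example polymer"],
        cas_number="0000-00-0",
        max_concentration_in_additive_pct=30.0,
        max_concentration_in_fluid_pct=0.05,
    )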

    § 9.90 What additional information must be included if I am proposing production operations?

    If you are proposing production operations, you must submit the following information:

    (a) The dimensions, with a to-scale layout of the wellpad, clearly identifying well locations and noting partial reclamation areas; gathering, separation, metering, and storage equipment; electrical lines; fences; spill control equipment or structures, including lined areas; artificial lift equipment; tank batteries; treating and separating vessels; secondary or enhanced recovery facilities; water disposal facilities; gas compression and/or injection facilities; metering points; sales point (if on lease); tanker pick-up points; gas compressor, including size and type (if applicable); and any other well site equipment;

    (b) The size, grade, weight, and setting depth of all casing and tubing strings; cementing history; type and size of packers and subsurface flow control devices; top and bottom depths of each completed interval; and method of completion;

    (c) The well history, including completions, stimulations, servicing, and workovers;

    (d) The minimum specifications for pressure-control equipment, function, and pressure-testing frequency;

    (e) The methods and means to be used to transport produced oil and gas, including vehicular transport; flowline and gathering line construction; operation; pipe size; operating pressure; cathodic protection methods; surface equipment use; surface equipment location; maintenance procedures; maintenance schedules; pressure detection methods; and shutdown procedures;

    (f) Road and wellpad maintenance plan, including equipment and materials to maintain the road surface and control erosion;

    (g) Vegetation management plan on well sites, roads, pipeline corridors, and other disturbed surface areas, including control of exotic species;

    (h) Storm water management plan on the well site;

    (i) Produced water storage and disposal plan; and

    (j) The procedures for well plugging, the depths and the types of plugs, and minimum mud weight.

    Operations Permit: Application Review Process
    § 9.100 How will NPS process my application?

    If you propose operations in System units, other than Big Cypress National Preserve, we will process your application in accordance with §§ 9.101 through 9.104. If you propose operations in Big Cypress National Preserve, we will process your application in accordance with §§ 9.103 and 9.105.

    § 9.101 How will the NPS conduct initial review?

    (a) Within 30 days after receipt of your application, the Superintendent will notify you in writing that either:

    (1) Your application is complete and the NPS will begin formal review;

    (2) Your permit application does not meet the information requirements and additional information is required before the NPS will conduct formal review of your permit application; or

    (3) More time is necessary to complete the review, in which case the NPS will provide you an estimate of the amount of additional time reasonably needed and an explanation for the delay.

    (b) If you resubmit information requested by the NPS under this section and the Superintendent determines that you have met all applicable information requirements, the Superintendent will notify you within 30 days after receipt of the additional information that either:

    (1) Your application is complete and the NPS will begin formal review; or

    (2) More time is necessary to complete the review, in which case the NPS will provide you an estimate of the amount of additional time reasonably needed and an explanation for the delay.

    § 9.102 How will the NPS conduct formal review?

    (a) The Superintendent will evaluate the potential impacts of your proposal on federally owned or administered lands, waters, or resources within System units, visitor uses and experiences, and visitor and employee health and safety. As part of this evaluation process, the NPS will comply with all applicable federal laws, including the National Environmental Policy Act. The Superintendent will then make a recommendation to the Regional Director regarding final action on your operations permit.

    (b) As part of the evaluation process, the Superintendent may consult with other Federal, State, and local agencies.

    § 9.103 What standards must be met to approve my operations permit?

    (a) The Regional Director will approve your operations permit if the NPS has determined that your operations:

    (1) Will not violate the laws governing administration of units of the National Park System; and

    (2) Will meet all applicable operating standards.

    (b) Before approval of your operations permit, you must submit to the Superintendent:

    (1) Financial assurance in the amount specified by the Regional Director and in accordance with the requirements of §§ 9.140 through 9.144;

    (2) Proof of liability insurance with limits sufficient to cover injuries to persons or property caused by your operations; and

    (3) An affidavit stating that the operations planned are in compliance with all applicable Federal, State, and local laws and regulations.

    § 9.104 What final actions may the Regional Director take on my operations permit?

    (a) The Regional Director will take final action within 30 days of completing all required legal compliance, including compliance with the National Environmental Policy Act, unless:

    (1) We and you agree that such final action will occur within a shorter or longer period of time; or

    (2) We determine that an additional period of time is required to ensure that we have, in reviewing the permit application, complied with all applicable legal requirements.

    (b) The Regional Director will notify you in writing that your operations permit is:

    (1) Approved with the operating conditions contained therein; or

    (2) Denied, and provide you justification for the denial. Any such denial must be consistent with § 9.30(c).

    § 9.105 What is the approval process for operations in Big Cypress National Preserve?

    (a) Within 30 days after the date of submission of your application, we will notify you whether the application contains all information reasonably necessary to allow us to consider the application and, if not, will request that you provide additional information. After receiving this notification, you must either supply any reasonably necessary additional information or notify us that you believe that the application contains all reasonably necessary information and is therefore complete; whereupon we may:

    (1) Within 30 days after receipt of the notice from the applicant, determine that the application does not contain all reasonably necessary additional information and, on that basis, deny the application; or

    (2) Review the application and take final action within 60 days after the date that you provided notification to the NPS that your application is complete.

    (b) The Regional Director will take final action within 90 days after the date you submitted your application unless:

    (1) We and you agree that final action can occur within a shorter or longer period of time; or

    (2) We determine that an additional period of time is required to ensure that we have, in reviewing the permit application, complied with other applicable laws, executive orders, and regulations.

    Operating Standards
    § 9.110 What are the purposes and functions of NPS operating standards?

    (a) You must comply with all operating standards in §§ 9.111 through 9.116, as well as with the standards in §§ 9.117 and 9.118, if applicable. The standards apply only to operations that occur within a System unit, including downhole activities, and do not apply to surface activities located outside a System unit. These operating standards are incorporated into the terms and conditions of your operations permit. Violation of these operating standards will subject you to the prohibitions and penalties provisions of §§ 9.180 through 9.182.

    (b) NPS operating standards are applied to ensure protection of federally owned or administered lands, waters, and resources of System units, visitor uses and experiences, and visitor and employee health and safety. The operating standards give us and the operator flexibility to consider using alternative methods, equipment, materials, design, and conduct of operations.

    (c) In applying standards to a particular operation, you must use technologically feasible, least damaging methods to protect federally owned or administered lands, waters, and resources of System units, visitor uses and experiences, and visitor and employee health and safety.

    § 9.111 What general facility design and management standards must I meet?

    (a) You must not conduct operations within 500 feet of surface water, including an intermittent or ephemeral watercourse, or wetland; within 500 feet of the mean high tide line; or within 500 feet of any structure or facility used by the NPS for interpretation, public recreation, or administration. The Superintendent may increase or decrease this distance consistent with the need to protect federally owned or administered lands, water, or resources of System units, visitor uses or experiences, or visitor or employee health and safety while ensuring that you have reasonable access to your non-Federal oil and gas rights. Measurements for purposes of this paragraph are by horizontal distance.

    (b) You must design, construct, operate, and maintain access to your operational site to cause the minimum amount of surface disturbance needed to safely conduct operations and to avoid areas the NPS has indicated to you as sensitive resources.

    (c) You must install and maintain secondary containment materials and structures for all equipment and facilities using or storing contaminating substances. The containment system must be sufficiently impervious to prevent discharge and must have sufficient storage capacity to contain, at a minimum, the largest potential spill incident.

    (d) You must keep temporarily stored waste in the smallest feasible area and confine it in a manner appropriate to prevent its escape as a result of percolation, rain, high water, or other causes. You must regularly remove waste from the System unit and dispose of it in a lawful manner. Nothing in this subpart affects the application of the regulations found at 36 CFR part 6.

    (e) You must use engines that adhere to applicable Federal and State emission standards.

    (f) You must construct, maintain, and use roads to minimize fugitive dust.

    (g) You must use equipment and practices that minimize releases of air pollutants and hydrocarbons, and flaring of gas.

    (h) You must conduct operations in a manner that does not create an unsafe environment for fish and wildlife by avoiding or minimizing exposure to physical and chemical hazards.

    (i) You must conduct operations in a manner that avoids or minimizes impacts to sensitive wildlife, including timing and location of operations.

    (j) You must control the invasion of exotic plant and animal species in your area of operations from the beginning through final reclamation.

    § 9.112 What hydrologic standards must I meet?

    (a) You must maintain hydrologic connectivity between surface water and groundwater during all operations.

    (b) You must not cause measurable degradation of surface water or groundwater.

    (c) You must conduct operations in a manner that maintains natural channel and floodplain processes and functions.

    § 9.113 What safety standards must I meet?

    (a) You must maintain your area of operations in a manner that avoids or minimizes the cause or spread of fires and does not intensify fires originating outside your operations area.

    (b) You must maintain site security, structures, facilities, improvements, and equipment in a safe and professional manner in order to provide a safe environment for park resources, park visitors, and NPS employees, free from exposure to physical and chemical hazards.

    § 9.114 What lighting and visual standards must I meet?

    (a) You must design, shield, and focus lighting to minimize the effects of spill light on the night sky or adjacent areas.

    (b) You must reduce visual contrast in the landscape by selecting the area of operations, avoiding unnecessary disturbance, choosing appropriate colors for permanent facilities, and other means.

    (c) You must use road and pad materials similar in composition to soils in surrounding profiles whenever feasible.

    § 9.115 What noise reduction standards must I meet?

    You must prevent or minimize all noise that:

    (a) Adversely affects the natural soundscape or other park resources or values, taking into account frequency, magnitude, or duration; or

    (b) Exceeds levels that have been identified through monitoring as being acceptable to or appropriate for visitor uses at the sites being monitored.

    § 9.116 What reclamation and protection standards must I meet?

    (a) You must promptly clean up and remove any released contaminating substances and provide documentation to the Superintendent that the substances were disposed of in accordance with all applicable Federal, State, and local laws.

    (b) You must perform partial reclamation of areas no longer necessary to conduct operations. You must begin final reclamation as soon as possible but no later than 6 months after you complete your permitted operations unless the Regional Director authorizes a longer period in writing.

    (c) You must protect all survey monuments, witness corners, reference monuments, and bearing trees against destruction, obliteration, or damage from operations. You are responsible for reestablishing, restoring, and referencing any monuments, corners, and bearing trees that are destroyed, obliterated, or damaged by your operations.

    (d) You must complete reclamation by:

    (1) Plugging all wells;

    (2) Removing all above-ground structures, equipment, and roads and all other man-made material and debris resulting from operations;

    (3) Removing or neutralizing any contaminating substances;

    (4) Reestablishing native vegetative communities, or providing for conditions where ecological processes typical of the ecological zone (e.g., plant or wildlife succession) will reestablish themselves;

    (5) Grading to reasonably conform the contours to preexisting elevations that are most appropriate to maximizing ecologic functional value;

    (6) Restoring conditions to pre-disturbance hydrologic movement and functionality;

    (7) Restoring natural systems using native soil material that is similar in character to the adjacent undisturbed soil profiles;

    (8) Ensuring that reclaimed areas do not interfere with visitor use or with administration of the unit;

    (9) Meeting conditions compatible with the management objectives of the park; and

    (10) Ensuring proper and equitable apportionment of reclamation responsibilities by coordinating with us or with other operators who may be using a portion of your area of operations.

    § 9.117 What additional operating standards apply to geophysical operations?

    If you conduct geophysical operations, you must do all of the following:

    (a) Use surveying methods that minimize the need for vegetative trimming and removal;

    (b) Locate source points using industry-accepted minimum safe-offset distances from pipelines, telephone lines, railroad tracks, roads, power lines, water wells, oil and gas wells, oil and gas-production facilities, and buildings;

    (c) Use equipment and methods that, based upon the specific environment, will minimize impacts to federally owned or administered lands, waters, and resources of System units, visitor uses and experiences, and visitor and employee health and safety; and

    (d) If you use shot holes, you must:

    (1) Use biodegradable charges;

    (2) Plug all shot holes to prevent a pathway for migration of fluids along any portion of the bore; and

    (3) Leave the site in a clean and safe condition that will not impede surface reclamation or pose a hazard to human health and safety.

    § 9.118 What additional operating standards apply to drilling, stimulation, and production operations?

    If you conduct drilling, stimulation, and production operations, you must meet all of the standards in this section.

    (a) Drilling. (1) You must use containerized mud circulation systems for operations.

    (2) You must not create earthen pits for any use. Earthen pits used solely for secondary containment on sites existing before December 5, 2016, may continue in use; however, the Superintendent may require such structures to be lined or removed depending on site-specific operational and environmental conditions.

    (3) You must take all necessary precautions to keep your wells under control at all times, use only contractors or employees trained and competent to drill and operate the wells, and use only oil field equipment and practices generally used in the industry.

    (4) You must design, implement, and maintain integrated casing, cementing, drilling fluid, completion, stimulation, and blowout prevention programs. These programs must be based upon sound engineering principles to prevent escape of fluids to the surface and to isolate and protect usable water zones throughout the life of the well, taking into account all relevant geologic and engineering factors.

    (b) Stimulation operations including hydraulic fracturing. (1) You must not begin injection activities before you demonstrate the mechanical integrity of all surface and downhole tubulars and equipment to differential pressures equal to at least those calculated at the maximum anticipated treating pressure.

    (2) You must continuously monitor and record the treating pressures and all annular pressures before, during, and after the treatment to ensure that treatment materials are directed to the intended zone.

    (3) If mechanical integrity is lost during the treatment, you must immediately cease the operation and notify the Superintendent as soon as feasible, but no later than 24 hours after the incident. Within 15 days after the occurrence, you must submit to the Superintendent a report containing all details pertaining to the incident, including corrective actions taken.

    (c) Production. (1) You must monitor producing conditions in order to maintain the mechanical integrity of both surface and subsurface equipment.

    (2) You must maintain your well to prevent escape of fluids to the surface and to isolate and protect usable water zones throughout the life of the well, taking into account all relevant geologic and engineering factors.

    (3) You must identify wells and related facilities by a sign, which must remain in place until the well is plugged and abandoned and the related facilities are closed. The sign must be of durable construction, and the lettering must be legible and large enough to be read under normal conditions at a distance of at least 50 feet. Each sign must show the name of the well, name of the operator, and the emergency contact phone number.

    (4) You must remove all equipment and materials that are no longer needed for a particular phase of your operation.

    (5) You must plug all wells to:

    (i) Prevent a pathway for migration of fluids along any portion of the bore; and

    (ii) Leave the surface in a clean and safe condition that will not impede surface reclamation or pose a hazard to human health and safety.

    General Terms and Conditions
    § 9.120 What terms and conditions apply to all operators?

    The following terms and conditions apply to all operators:

    (a) The operator/permittee is responsible for ensuring that all of its employees and contractors and subcontractors comply fully with all of the requirements of this subpart;

    (b) The operator/permittee may not use any surface water or groundwater owned or administered by the United States that has been diverted or withdrawn from a source located within the boundaries of a System unit unless the use has been approved in accordance with NPS policy;

    (c) The operator/permittee must provide the NPS an affidavit, signed by an official who is authorized to legally bind the company, stating that proposed operations are in compliance with all applicable federal, state, and local laws and regulations and that all information submitted to the NPS is true and correct;

    (d) The operator/permittee must agree to indemnify and hold harmless the United States and its officers and employees from and against any and all liability of any kind whatsoever arising out of or resulting from the acts or omissions of the operator and its employees, agents, representatives, contractors, and subcontractors in the conduct of activities under the operations permit; and

    (e) The operator/permittee must agree to take all reasonable precautions to avoid, minimize, rectify, or reduce the overall impacts of its proposed oil and gas activities to System units. The operator/permittee may be required to mitigate impacts to NPS resources and lost uses. Mutually agreed-upon mitigation tools for this purpose may include providing or restoring alternative habitat and resources to offset the impacts of the operations.

    § 9.121 What monitoring and reporting is required for all operators?

    (a) The NPS may access your area of operations at any time to monitor the potential effects of the operations and to ensure compliance with this subpart where applicable.

    (b) The Regional Director may determine that third-party monitors are required when necessary to protect federally owned or administered lands, waters, or resources of System units, visitor uses or experiences, or visitor or employee health and safety.

    (1) The Regional Director's determination will be based on the scope and complexity of the proposed operation and whether the park has the staff and technical ability to ensure compliance with the operations permit and any provision of this subpart.

    (2) A third-party monitor will report directly to the NPS at intervals determined by the Superintendent, and you will be responsible for the cost of the third-party monitor. We will make the information reported available to you upon your request.

    (3) Third-party monitors must disclose to the NPS any potential conflicts of interest that could preclude objectivity in monitoring an operator's compliance with the operations permit and any provision of this subpart.

    (c) You must notify the Superintendent of any accidents involving serious personal injury or death and of any fires or spills on the site as soon as feasible, but no later than 24 hours after the accident occurs. You must submit a full written report on the accident to the Superintendent within 90 days after the accident occurs.

    (d) You must notify the Superintendent as soon as feasible, but no later than 24 hours after the discovery of any cultural or scientific resource you encounter that might be altered or destroyed by your operation. You must cease operations if necessary and leave the discovered resource intact until the Superintendent provides you with instructions. The Superintendent will determine, within 10 working days after notification, what action will be taken with respect to the discovery.

    (e) Upon the Superintendent's request, you must submit reports or other information necessary to verify compliance with your permit or with any provision of this subpart. To fulfill this request, you may submit to the NPS reports that you have submitted to the State under State regulations, or that you have submitted to any other Federal agency.

    § 9.122 What additional reports must I submit if my operation includes hydraulic fracturing?

    If your operations include hydraulic fracturing, you must provide the Superintendent with a report including all of the following details of the stimulation within 30 days after the completion of the last stage of hydraulic fracturing operations for each well:

    (a) The true vertical depth of the well; total water volume used; a description of the base fluid and each additive in the hydraulic fracturing fluid, including the trade name, supplier, purpose, ingredients; Chemical Abstract Service Number (CAS); maximum ingredient concentration in additive (percent by mass); and maximum ingredient concentration in hydraulic fracturing fluid (percent by mass). This information may be submitted to the Superintendent through FracFocus or another existing database available to the public;

    (b) The actual source(s) and location(s) of the water used in the hydraulic fracturing fluid;

    (c) The maximum surface pressure and rate at the end of each stage of the hydraulic fracturing operation and the actual flush volume;

    (d) The actual, estimated, or calculated fracture length, height, and direction;

    (e) The actual measured depth of perforations or the open-hole interval;

    (f) The actual volume of stimulation fluids recovered during flow back, including a description of how the volumes were measured or calculated;

    (g) The following information concerning the handling of fluids recovered, covering the period between the commencement of hydraulic fracturing and the implementation of the approved permit for the disposal of produced water under NPS requirements:

    (1) The methods of handling the recovered fluids, including, but not limited to, transfer pipes and tankers, holding pond use, re-use for other stimulation activities, or injection; and

    (2) The disposal method of the recovered fluids, including, but not limited to, the percent injected, the percent stored at an off-lease disposal facility, and the percent recycled; and

    (h) Continuous monitoring records of annulus pressure at the bradenhead and other annular pressures that document pressures before, during, and after injection operations. You must submit a signed certification that wellbore integrity was maintained throughout the operation.

    Access to Oil and Gas Rights
    § 9.130 May I cross Federal property to reach the boundary of my oil and gas right?

    The Regional Director may grant you the privilege of access, subject to the provisions of any applicable law, on, across, or through federally owned or administered lands or waters in any System unit outside of Alaska to reach the boundary of your oil and gas right.

    § 9.131 Will the NPS charge me a fee for access?

    (a) Except as provided in paragraph (b) of this section, the Regional Director may charge you a fee if you use federally owned or administered lands or waters that are outside the scope of your oil and gas right.

    (1) If you require the use of federally owned or administered lands or waters to access your operation, the Regional Director will charge you a fee based on the fair market value of such use.

    (2) If access to your mineral right is on or across an existing park road, the Regional Director may charge you a fee according to a posted fee schedule.

    (b) Fees under this section will not be charged for access within the scope of your oil and gas right or access to your mineral right that is otherwise provided for by law.

    § 9.132 Will I be charged a fee for emergency access to my operations?

    The Regional Director will not charge a fee for access across federally owned or administered lands beyond the scope of your oil and gas right as necessary to respond to an emergency situation at your area of operations if the Regional Director determines that the circumstances require an immediate response to either:

    (a) Prevent or to minimize injury to park resources; or

    (b) Ensure public health and safety.

    Financial Assurance
    § 9.140 Do I have to provide financial assurance to the NPS?

    Yes. You must file financial assurance with us in a form acceptable to the Regional Director and payable upon demand. This financial assurance is in addition to any financial assurance required by any other regulatory authority.

    § 9.141 How does the NPS establish the amount of financial assurance?

    We base the financial assurance amount upon the estimated cost for a third-party contractor to complete reclamation in accordance with this subpart. If the cost of reclamation exceeds the amount of your financial assurance, you remain liable for all costs of reclamation in excess of the financial assurance.

    § 9.142 Will the NPS adjust my financial assurance?

    The Regional Director may require, or you may request, an adjustment to the financial assurance amount because of any circumstance that increases or decreases the estimated costs established under § 9.141.

    § 9.143 When will the NPS release my financial assurance?

    We will release your financial assurance within 30 days after the Regional Director:

    (a) Determines that you have met all applicable reclamation operating standards and any additional reclamation requirements that may be included in your operations permit; or

    (b) Accepts a new operator's financial assurance under § 9.160(b) or (c).

    § 9.144 Under what circumstances will the NPS retain my financial assurance?

    (a) We will retain all or part of your financial assurance if compliance with your reclamation responsibilities under the approved permit or any provisions of this subpart is incomplete.

    (b) In addition, we may also:

    (1) Prohibit you from removing all structures, equipment, or other materials from your area of operations;

    (2) Require you to secure the operations site and take any necessary actions to protect federally owned or administered lands, waters, or resources of System units, visitor uses or experiences, or visitor or employee health and safety;

    (3) Suspend review of any permit applications you have submitted until the Regional Director determines that all violations of permit provisions or of any provision of this subpart are resolved; and

    (4) Seek recovery as provided in § 9.141 for all costs of reclamation in excess of the posted financial assurance.

    Modification to an Operation
    § 9.150 How can an approved permit be modified?

    (a) You may request modification to a temporary access permit or operations permit by providing the Regional Director with written notice describing the modification and why you think it is needed.

    (b) The Regional Director may propose to modify an approved temporary access or operations permit to address changed or unanticipated conditions within your area of operations. You will be notified in writing of the proposed modifications and the justifications therefor, and the time within which you must either notify the Regional Director that you accept the modifications to your permit or explain any concerns you may have.

    (c) The Regional Director will review requests made under paragraph (a) of this section or responses provided under paragraph (b) of this section applying the approval standards and timeframes at § 9.62 or § 9.104, respectively. You will be notified in writing of the Regional Director's decision and any revisions approved to the terms of the permit.

    Change of Operator
    § 9.160 What are my responsibilities if I transfer my operations?

    (a) You must notify the Superintendent in writing within 30 calendar days after the date the new owner acquires the rights to conduct operations. Your written notification must include:

    (1) The names and contact information of the person or entity conveying the oil or gas right, and the names and contact information of the person or entity acquiring the oil or gas right;

    (2) The effective date of transfer;

    (3) The description of the rights, assets, and liabilities being transferred and those being reserved by the previous owner; and

    (4) A written acknowledgement from the new owner that the contents of the notification are true and correct.

    (b) Until you meet the requirements of this section and the Regional Director provides notice to you that the new operator has complied with § 9.161(a), you remain responsible for compliance with your operations permit, and we will retain your financial assurance.

    (c) If you were operating without an operations permit, you are subject to §§ 9.120 through 9.122 and §§ 9.180 through 9.182 until the new operator meets the requirements of this section and the Regional Director provides notice to you that the new operator has complied with § 9.161(b) or (c), as applicable.

    § 9.161 What must I do if operations are transferred to me?

    (a) If you acquire rights to conduct operations, you must provide to the Superintendent:

    (1) Written acknowledgment that you adopt the previous operator's operations permit, and that you agree to conduct operations in accordance with all terms and conditions thereof, or that you adopt the previous operator's operations permit and are also requesting approval for modification of the previous operator's permit consistent with the procedures at § 9.150;

    (2) Financial assurance in the amount specified by the Regional Director and in accordance with the requirements of §§ 9.140 through 9.144;

    (3) Proof of liability insurance with limits sufficient to cover injuries to persons or property caused by your operations; and

    (4) An affidavit stating that your operations are in compliance with all applicable Federal, State, and local laws and regulations.

    (b) If the previous operator was granted an exemption under § 9.72, you must provide the Superintendent the following information within 30 calendar days after the date you acquire the rights to conduct operations:

    (1) Right to operate documentation demonstrating that you are the successor in interest to the previous operator's right, and the extent of such right, to operate within the System unit; and

    (2) The names and contact information of:

    (i) The operator;

    (ii) The owner; and

    (iii) The individuals responsible for overall management, field supervision, and emergency response of the proposed operations.

    (c) If the previous operator was operating without an operations permit, you will be considered a previously exempt operator and must obtain an operations permit. Within 90 days after acquiring the rights to conduct operations, you must submit the information at § 9.51(a) through (j), and your operations permit application will be processed in accordance with §§ 9.52 and 9.53.

    Well Plugging
    § 9.170 When must I plug my well?

    Except as provided in § 9.171, you must plug your well when any of the following occurs:

    (a) Your drilling operations have ended and you have taken no further action to produce the well within 60 days;

    (b) Your well, which has been completed for production operations, has no measurable production quantities for 12 consecutive months; or

    (c) The period approved in your operations permit to maintain your well in shut-in status has expired.

    § 9.171 Can I get an extension to the well plugging requirement?

    (a) You may apply for either a modification to your approved operations permit or, in the case of previously exempt operations, an operations permit to maintain your well in a shut-in status for up to 5 years. The application must include:

    (1) An explanation of why the well is shut-in or temporarily abandoned and your future plans for utilization;

    (2) Proof of the mechanical integrity of both surface and production casing demonstrating that no migration of fluid can be expected to occur; and

    (3) A description of the manner in which your well, equipment, and area of operations will be maintained.

    (b) Based on the information provided under this section, the Regional Director may approve your application to maintain your well in shut-in status for a period up to 5 years. You may apply for additional extensions by submitting a new application under paragraph (a) of this section.

    Prohibitions and Penalties
    § 9.180 What acts are prohibited under this subpart?

    The following are prohibited:

    (a) Operating in violation of the terms or conditions of a temporary access permit, an approved operations permit, or any provision of this subpart;

    (b) Damaging federally owned or administered lands, waters, or resources of a System unit as a result of violation of the terms or conditions of a temporary access permit, an operations permit, or any provision of this subpart;

    (c) Conducting operations or activities without a required permit;

    (d) Failure to comply with any suspension or revocation order issued under this subpart; and

    (e) Failure to comply with any applicable Federal law or regulation, or non-conflicting State law or regulation, pertaining to your oil and gas operation.

    § 9.181 What enforcement actions can the NPS take?

    If you engage in a prohibited act described in § 9.180:

    (a) You may be subject to a fine or imprisonment, or both, in accordance with 36 CFR 1.3;

    (b) The Superintendent may suspend your operations; or

    (c) The Regional Director may revoke your approved temporary access permit or operations permit.

    § 9.182 How do violations affect my ability to obtain a permit?

    Until you are in compliance with this subpart or the terms and conditions of an existing temporary access permit or operations permit, we will not consider any new permit requests to conduct operations within any System unit.

    Reconsideration and Appeals
    § 9.190 Can I, as operator, request reconsideration of NPS decisions?

    Yes. If you disagree with a decision of the Regional Director under this subpart, you may file with the Regional Director a written statement describing the alleged factual or legal errors in the original decision and requesting that the Regional Director reconsider the decision. You must file your request for reconsideration within 60 calendar days after your receipt of the Regional Director's decision. The NPS will dismiss as untimely any request for reconsideration received more than 60 days after your receipt of the original decision.

    § 9.191 How does the NPS process my request for reconsideration?

    The Regional Director will review his or her original decision and, within 90 days after receipt of your appeal, provide you with a written statement reversing, affirming, or modifying that decision, unless the Regional Director notifies you that he or she needs additional time to review the original decision. When issued, that written statement constitutes the Regional Director's final decision on the matter.

    § 9.192 Can I appeal the Regional Director's decision?

    (a) If the Regional Director affirms or modifies his or her original decision after you file a request for reconsideration, you may file an appeal with the NPS Director within 60 calendar days after your receipt of the Regional Director's decision under § 9.191.

    (b) Your appeal must include a statement of exceptions specifying your specific disagreements with the Regional Director's final decision. If you do not file your appeal within 60 calendar days, your appeal will be dismissed as untimely.

    (c) If you timely file your statement of exceptions, the Regional Director will forward his or her decision and the record for the appeal to the NPS Director. The record will consist of all documents and materials considered by NPS that are related to the matter appealed. The Regional Director will maintain that record under separate cover and will certify that the decision was based on that record. The Regional Director will make a copy of the record available to you at your request.

    (d) If, upon review, the NPS Director considers the record inadequate, the NPS Director may require additional documentation or information, or may remand the matter to the Regional Director with instructions for further action.

    (e) Within 45 calendar days from the date the NPS Director receives your statement of exceptions, the Director will issue a written decision. If the Director requires more than 45 calendar days to reach a decision, the Director will notify you and specify the reasons for the delay. The Director's written decision will include:

    (1) A statement of facts;

    (2) A statement of conclusions; and

    (3) An explanation of the basis for the decision.

    (f) No NPS decision under these regulations that is subject to appeal to the Director, or the Regional Director pursuant to § 9.194, will be considered final agency action subject to judicial review under 5 U.S.C. 704 unless the appropriate official has rendered a decision on the matter. That decision will constitute NPS's final agency action, and no further appeal will lie in the Department from that decision.

    § 9.193 Will filing a request for reconsideration or appeal stop the NPS from taking action under this subpart?

    (a) Except as provided for in paragraph (b) of this section, during the reconsideration and appeal processes, the decision at issue will be stayed (suspended). The decision will not become effective until the appeals process is completed.

    (b) If NPS suspends your operation due to an emergency within your area of operation that poses an immediate threat of injury to federally owned or administered lands or waters, or to public health and safety, you have a right to request reconsideration and appeal the decision under §§ 9.190 through 9.194, but the suspension will not be stayed until the threat is eliminated.

    § 9.194 What if the original decision was made by the Superintendent?

    Where the Superintendent has the authority to make the original decision, requests for reconsideration and appeals may be filed in the manner provided by §§ 9.190 through 9.193, except that:

    (a) The request for reconsideration will be filed with and decided by the Superintendent;

    (b) The appeal will be filed with and decided by the Regional Director; and

    (c) The Regional Director's decision will constitute the final agency action on the matter.

    Public Participation
    § 9.200 How can the public participate in the approval process?

    (a) Interested parties may view the publicly available documents at the Superintendent's office during normal business hours or by other means prescribed by the Superintendent. The availability for public inspection of information about the nature, location, character, or ownership of park resources will conform to all applicable law and implementing regulations, standards, and guidelines.

    (b) The Superintendent will make available for public inspection any documents that an operator submits to the NPS under this subpart except those that the operator has identified as proprietary or confidential.

    (c) For the information required in §§ 9.88, 9.89, and 9.122, the operator and the submitter of the information will be deemed to have waived any right to protect from public disclosure information submitted to the NPS. For information required under §§ 9.88, 9.89, and 9.122 that the owner of the information claims to be exempt from public disclosure and is withheld from the NPS, a corporate officer, managing partner, or sole proprietor of the operator must sign and the operator must submit to the Superintendent an affidavit that:

    (1) Identifies the owner of the withheld information and provides the name, address and contact information for a corporate officer, managing partner, or sole proprietor of the owner of the information;

    (2) Identifies the Federal statute or regulation that would prohibit the NPS from publicly disclosing the information if it were in the NPS's possession;

    (3) Affirms that the operator has been provided the withheld information from the owner of the information and is maintaining records of the withheld information, or that the operator has access and will maintain access to the withheld information held by the owner of the information;

    (4) Affirms that the information is not publicly available;

    (5) Affirms that the information is not required to be publicly disclosed under any applicable local, State, tribal, or Federal law;

    (6) Affirms that the owner of the information is in actual competition and identifies competitors or others that could use the withheld information to cause the owner of the information substantial competitive harm;

    (7) Affirms that the release of the information would likely cause substantial competitive harm to the owner of the information and provides the factual basis for that affirmation; and

    (8) Affirms that the information is not readily apparent through reverse engineering with publicly available information.

    (d) If the operator relies upon information from third parties, such as the owner of the withheld information, to make the affirmations in paragraphs (c)(6) through (8) of this section, the operator must provide a written affidavit from the third party that sets forth the relied-upon information.

    (e) The NPS may require any operator to submit to the NPS any withheld information, and any information relevant to a claim that withheld information is exempt from public disclosure.

    (f) If the NPS determines that the information submitted under paragraph (e) of this section is not exempt from disclosure, the NPS will make the information available to the public after providing the operator and owner of the information with no fewer than 10 business days' notice of the NPS's determination.

    (g) The operator must maintain records of the withheld information until the later of the NPS's release of the operator's financial assurance or 7 years after completion of hydraulic fracturing operations. Any subsequent operator will be responsible for maintaining access to records required by this paragraph during its operation of the well. The operator will be deemed to be maintaining the records if it can promptly provide the complete and accurate information to NPS, even if the information is in the custody of its owner.

    (h) If any of the chemical identity information required in § 9.122 is withheld, the operator must provide the generic chemical name in the submission required by § 9.122. The generic chemical name must be only as nonspecific as is necessary to protect the confidential chemical identity, and should be the same as or no less descriptive than the generic chemical name provided to the Environmental Protection Agency.

    Information Collection
    § 9.210 Has the Office of Management and Budget approved the information collection requirements?

    (a) The Office of Management and Budget (OMB) has reviewed and approved the information collection requirements in 36 CFR part 9, subpart B, and assigned OMB Control Number 1024-0274. We may not conduct or sponsor, and you are not required to respond to, a collection of information unless it displays a currently valid OMB control number. We use the information collected to:

    (1) Evaluate proposed operations;

    (2) Ensure that all necessary mitigation measures are employed to protect park resources and values; and

    (3) Ensure compliance with all applicable laws and regulations.

    (b) You may submit comments on any aspect of the information collection requirements to the Information Collection Clearance Officer, National Park Service, 12201 Sunrise Valley Drive, Room 2C114, Mail Stop 242, Reston, VA 20192.

    § 9.302 [Amended]
    6. In newly redesignated § 9.302:
    a. In paragraphs (b)(1) and (2), remove the comma and add in its place a semicolon.
    b. In paragraph (b)(2), remove the reference “§ 9.86 of this subpart” and add in its place the reference “§ 9.306.”
    § 9.304 [Amended]
    7. In newly redesignated § 9.304, in paragraph (a), remove the reference “§ 9.84(b)” and add in its place the reference “§ 9.304(b)” and remove the reference “§ 9.83(b)” and add in its place the reference “§ 9.303(b).”
    § 9.306 [Amended]
    8. In newly redesignated § 9.306, in paragraph (a), remove the reference “§ 9.84” and add in its place the reference “§ 9.304.”
    § 9.308 [Amended]
    9. In newly redesignated § 9.308, in paragraph (a), remove the reference “§ 9.86” and add in its place the reference “§ 9.306.”
    Dated: October 21, 2016.
    Karen Hyun,
    Acting Principal Deputy Assistant Secretary for Fish and Wildlife and Parks.
    [FR Doc. 2016-26489 Filed 11-3-16; 8:45 am] BILLING CODE 4312-52-P
    Part V
    Department of Defense
    Defense Acquisition Regulations System
    48 CFR Parts 202, 212, 215, et al.
    Federal Acquisition Regulations; Final and Proposed Rules

    DEPARTMENT OF DEFENSE
    Defense Acquisition Regulations System
    48 CFR Parts 231 and 242
    [Docket DARS-2015-0070]
    RIN 0750-AI81
    Defense Federal Acquisition Regulation Supplement: Enhancing the Effectiveness of Independent Research and Development (DFARS Case 2016-D002)
    AGENCY:

    Defense Acquisition Regulations System, Department of Defense (DoD).

    ACTION:

    Final rule.

    SUMMARY:

    DoD is issuing a final rule amending the Defense Federal Acquisition Regulation Supplement (DFARS) to improve the effectiveness of independent research and development (IR&D) investments by the defense industrial base, by requiring contractors to engage in technical interchanges with DoD before IR&D costs are generated.

    DATES:

    Effective November 4, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Mr. Tom Ruckdaschel, telephone 571-372-6088.

    SUPPLEMENTARY INFORMATION:

    I. Background

    DoD published a proposed rule in the Federal Register at 81 FR 7723 on February 16, 2016, to revise DFARS 231.205-18, Independent Research and Development and Bid and Proposal Costs, to require that proposed new IR&D efforts be communicated to appropriate DoD personnel prior to the initiation of these investments, and that results be shared with appropriate DoD personnel. Nine respondents submitted public comments in response to the proposed rule.

    II. Discussion and Analysis

    DoD reviewed the public comments in the development of this final rule. A discussion of the comments and the changes made to the rule as a result of those comments is provided as follows:

    A. Summary of Changes From the Proposed Rule in Response to Public Comments

    1. The requirement at DFARS 231.205-18(c)(iii)(C)(2) to include a “summary of results” with the annual update to online inputs is removed in the final rule.

    2. DFARS 231.205-18(c)(iii)(C)(4)(i) is revised to cite the Office of the Assistant Secretary of Defense for Research and Engineering (OASD R&E) as a resource for contractors who do not have a point of contact for the technical interchange. Contact information for OASD R&E can be found at http://www.acq.osd.mil/rd/contacts/.

    B. Analysis of Public Comments

    1. Support for the Rule

    Comment: Three respondents expressed support for the rule and DoD's effort to enhance communications between industry and DoD regarding IR&D efforts.

    Response: DoD acknowledges the respondents' support for the rule.

    2. Favor Certain Projects/Different Priorities

    Comment: One respondent, though generally supportive of the goals of the rulemaking effort, believed the proposed rule will make it more difficult to pursue IR&D projects in their infancy for the following reason: “. . . by requiring technical interchange with Government employees before generating IR&D costs, defense contractors will shift toward IR&D projects that are of perceived interest to identifiable DoD officials.” One respondent stated that the rule will favor companies that have their IR&D (efforts) preapproved. One respondent, though supportive of technical interchanges, was concerned that DoD individuals participating in the interchanges may not share the long-term priorities outlined in Better Buying Power 3.0. Another respondent was concerned that “bona fide” technical interchanges exist outside of the contractor's control and that the proposed rule penalizes contractors without an “ARDEC like” agency as their customer.

    Response: DoD anticipates that defense contractors will pursue IR&D projects intended to advance their ability to develop and deliver a superior and more competitive product to the warfighter. The requirement to hold a technical interchange is not a de facto approval process and will not favor one company over another. These technical interchanges are informal engagements designed to promote transparency, communication, and dialogue between IR&D participants and DoD. The intended outcome is to ensure that both IR&D performers and their potential DoD customers have sufficient awareness of each other's efforts and to provide industry with some feedback on the relevance of proposed and completed IR&D work. Consistent with that objective, the rule requires only that a technical interchange take place and that the date of the interchange and the name of the DoD personnel contacted be reported to the Defense Innovation Marketplace.

    3. Existing Regulations and Practices

    Comment: One respondent stated that the rule is not necessary and that the current text at DFARS 231.205-18 is sufficient. Another respondent questioned the proposed rule's statement that “there are no known significant approaches to the rule that would meet the requirements” when agencies are already successfully holding voluntary technical interchanges that are achieving the regulation's goals.

    Response: The existing language at DFARS 231.205-18 does not include a requirement for technical interchanges. These technical interchanges are key to ensuring that both IR&D performers and their potential DoD customers have sufficient awareness of each other's efforts. The fact that voluntary technical interchanges already exist, and are successfully achieving the regulation's goals, is consistent with the overall approach to the rulemaking effort.

    4. Adverse Impact on Innovation

    Comment: Several respondents stated that the proposed rule will adversely impact innovative ideas. Another respondent cautioned that the rule will create a barrier to innovation and entry to the marketplace.

    Response: DoD believes that this rule supports and promotes innovative ideas and technologies, and will incentivize entry into the marketplace by ensuring that IR&D performers and their potential DoD customers have sufficient awareness of each other's efforts and that DoD can provide industry with feedback on the relevance of proposed and completed IR&D work.

    5. Cost/Administrative Burden

    Comment: A number of respondents stated that the rule will cost taxpayers more. One respondent stated that the rule will impose an administrative burden on contractors, administrative contracting officers (ACOs), and DoD personnel. Another respondent expressed concern with the significant costs associated with planning and conducting technical interchanges and the costs accrued prior to the technical interchange.

    Response: DoD acknowledges that this rule imposes a slight administrative burden on contractors, ACOs, and DoD personnel; however, that burden is outweighed by the net benefit of ensuring that IR&D performers and their potential DoD customers have sufficient awareness of each other's efforts and that DoD can provide industry with feedback on the relevance of proposed and completed IR&D work. Moreover, the long-term priorities outlined in Better Buying Power 3.0 are a strategic imperative for DoD.

    6. Process Issues and Practicality

    Comment: A number of respondents stated that the rule will create an unnecessary bureaucracy, citing concerns that the rule will create a “bottleneck” that will slow down industry IR&D efforts and require the shifting of DoD technical resources to evaluate the IR&D projects and respond to contractors. The respondents claimed that the requirement to conduct and document the interchange of information between contractor and DoD personnel with respect to IR&D projects prior to their commencement is not practical.

    Response: The rule does not establish a requirement for DoD to evaluate or approve IR&D projects; rather, the rule requires contractors to communicate new IR&D efforts to appropriate DoD personnel via a technical interchange prior to the initiation of the investment. The requirement for technical interchanges is an extension of DoD's long-standing policy to engage in robust communication with all entities supporting the defense industrial base and promote transparent engagement with IR&D participants regarding research and development, including basic research, applied research, and development. This policy is outlined in DoD Instruction 3204.01, “DoD Policy for Oversight of Independent Research and Development (IR&D).” The technical interchanges are intended to be informal communications between IR&D participants and DoD. Their objective is to ensure that both IR&D performers and their potential DoD customers have sufficient awareness of each other's efforts and to provide industry with some feedback on the relevance of proposed and completed IR&D work. Note that the requirement to include a summary of results in the annual update on IR&D projects is removed in the final rule, thus easing any administrative burden.

    7. Statutory Concerns

    Comment: A number of respondents stated that the rule is in violation of existing statute and recreates the historic DoD technical reviews rejected by Congress.

    Response: This rule is consistent with 10 U.S.C. 2372, subsection (a), Regulations, which states that the Secretary of Defense shall prescribe regulations governing the payment, by the Department of Defense, of expenses incurred by contractors for independent research and development and bid and proposal (B&P) costs. To that extent, subsection (c), Additional Controls, states that the regulations prescribed pursuant to subsection (a) may include implementation of regular methods for transmission from contractors to the Department of Defense, in a reasonable manner, of information regarding progress by the contractor on the contractor's independent research and development programs.

    8. DoD Responsiveness

    Comment: A number of respondents expressed concern with DoD responsiveness to requests for technical interchanges, citing that the rule fails to outline DoD's obligations and unfairly saddles contractors with the full consequence of DoD's failure to take part in a technical interchange. One respondent was concerned that the proposed rule creates practical, time, resource, and data disclosure challenges for conducting technical interchanges, and that DoD Components will not have an adequate number of personnel designated to conduct the technical interchanges in the time mandated. Another respondent questioned the recourse contractors will have if DoD personnel refuse to engage.

    Response: To assist contractors in ensuring that technical interchanges take place in a timely manner, the rule has been revised to identify OASD R&E as the primary DoD focal point for technical interchanges. Contact information for this office is available at http://www.acq.osd.mil/rd/contacts/. If a contractor experiences difficulties scheduling a technical interchange, or does not have a point of contact for the technical interchange, the contractor may contact OASD R&E.

    9. Protection of Data

    Comment: Several respondents were concerned about reporting and protection of proprietary and classified information.

    Response: This rule merely requires reporting of the name of the technical or operational DoD Government employee and the date of the technical interchange. The requirement to include a summary of results of the technical interchange in the annual update is removed in the final rule. There is an existing requirement at DFARS 231.205-18(c)(iii)(C) for submission of project summaries and annual updates to the DTIC Web site. It remains DoD policy to protect proprietary information in accordance with applicable laws and agency regulations. Firms have discretion regarding presentation of information they regard as sensitive when they submit project summaries; however, only unclassified IR&D project summary information should be provided. Both database screens and printouts will be marked “Proprietary.” Any markings on attachments provided by a contractor will not be altered.

    Adequate controls are in place to protect information from compromise. It is DoD policy to protect national security information in accordance with national-level policy issuances. In accordance with DoD Instruction 5200.01, “DoD Information Security Program and Protection of Sensitive Compartmented Information,” DoD shall—

    • Identify and protect national security information and controlled unclassified information (CUI) in accordance with national-level policy issuances.

    • Promote information sharing, facilitate judicious use of resources, and simplify management through implementation of uniform and standardized processes.

    • Protect CUI from unauthorized disclosure by appropriately marking, safeguarding, disseminating, and destroying such information.

    10. Additional Information

    Comment: One respondent stated that DFARS language should be added stating that the Government may require additional information from the contractor.

    Response: The objective of the technical interchanges is to ensure that both IR&D performers and their potential DoD customers have sufficient awareness of each other's efforts and to provide industry with some feedback on the relevance of proposed and completed IR&D work. Within that framework, the DoD personnel involved in technical interchanges will not seek additional information (i.e., formal documentation) from the contractor.

    11. Reporting Burden

    Comment: One respondent stated that the proposed rule inaccurately suggests that it does not require changes to reporting or recordkeeping. Another respondent stated that the rule adds to the contractor's existing reporting burden.

    Response: As stated in the proposed rule, the impact of this rule on a contractor's reporting burden is negligible. Currently, contractors are required to (1) report IR&D projects to the Defense Technical Information Center (DTIC) using the DTIC's online IR&D database and (2) update these inputs at least annually and when the project is completed. This rule merely changes the web address for submission of this report and requires major contractors to include in the report the name of the Government employee with whom a technical interchange was held prior to initiation of the IR&D effort and the date of such interchange. In addition, the requirement to include a summary of results in the annual update on IR&D projects is removed in the final rule.

    12. DoD Government Employee

    Comment: One respondent stated that the rule does not specify the needed level of detail for the technical interchange or “who” in DoD should receive the technical information. Another respondent was concerned that the proposed rule does not adequately define the term “DoD Government employee.”

    Response: In accordance with the rule, contractors shall engage in technical interchanges with a technical or operational DoD Government employee who is informed of related ongoing and future potential interest opportunities. If the contractor does not have a point of contact for the technical interchange, the contractor may contact OASD R&E. Contact information for OASD R&E can be found at http://www.acq.osd.mil/rd/contacts/.

    13. Advance Approval Requirement

    Comment: One respondent recommended eliminating the requirement for advance DoD approval of a contractor's IR&D efforts.

    Response: The rule does not contain a requirement for DoD to approve a contractor's IR&D efforts in advance.

    14. Administrative Guidance/Standards for Technical Interchanges

    Comment: One respondent asked if DoD will write additional administrative rules to outline DoD's obligation to participate in technical interchanges. Another respondent suggested that DoD adopt administrative rules, best practices, and guidance to counter the inconsistent support among DoD agencies and provide uniformity to the technical interchange process.

    Response: The rule is intentionally crafted to allow informal technical interchanges to ensure that IR&D performers and their potential DoD customers have sufficient awareness of each other's efforts and that DoD can provide industry with feedback on the relevance of proposed and completed IR&D work.

    15. Cost Allowability

    Comment: One respondent recommended that DoD reconsider the prerequisite for a determination of allowability. Another recommended that the rule include a proviso allowing costs incurred before the effective date of the final rule. One respondent stated that DoD should not make allowability of IR&D costs contingent on the timing of technical exchange meetings. One respondent was concerned that the proposed rule restricts the allowability of costs related to mandatory technical interchanges; specifically, the proposed rule states that the contractor must engage in a technical interchange “before IR&D costs are generated.” Another respondent was concerned about the lack of specificity regarding verification for purposes of allowability determinations.

    Response: The requirement to determine the allowability of IR&D costs is a preestablished requirement in DFARS 231.205-18(c)(iii)(C), which sets forth the requirement that for a contractor's annual IR&D costs to be allowable, the IR&D projects generating the costs must be reported to DTIC using the DTIC's online input form. This rule merely adds the requirement that contractors also engage in a technical interchange with a technical or operational DoD Government employee, and record the name of the employee and the date the technical interchange occurred using DTIC's online form. The rule applies to IR&D projects initiated in the contractor's fiscal year 2017 and later. However, as with all DFARS rules, unless otherwise stated, the rule is only effective upon publication. Therefore, IR&D costs incurred prior to the effective date of this rule are not subject to the requirements of this rule.

    16. Dollar Threshold

    Comment: Two respondents suggested DoD establish a dollar threshold for requiring technical interchanges.

    Response: The requirements of this rule apply only to major contractors. Establishing an IR&D project dollar threshold would require a speculative estimate of the IR&D project costs and, as such, would be impossible to administer, thus defeating the purpose of the technical interchange.

    17. Cost Bases and Pools

    Comment: Two respondents stated that the rule will require contractors to establish multiple accounting cost bases and pools.

    Response: This rule does not impose new cost accounting requirements. The IR&D cost principle at Federal Acquisition Regulation (FAR) 31.205-18(b) states, “The requirements of 48 CFR 9904.420, Accounting for independent research and development costs and B&P costs, are incorporated in their entirety. . . .” The cost accounting standard at 48 CFR 9904.420-40, Fundamental requirement, paragraph (a) states, “The basic unit for identification and accumulation of IR&D and B&P costs shall be the individual IR&D or B&P project.”

    18. Annual Briefings/Frequency

    Comment: A number of respondents questioned the frequency of the technical interchanges and whether they will be required annually. One respondent stated that many IR&D projects span several years, changing and evolving through the process, and that it is not clear whether these projects would need to be stopped and briefed annually. One respondent noted that one of the benefits of contractor IR&D is the ability to rapidly change direction as a result of discovery or in response to a shifting market or defense environment.

    Response: There is no requirement to brief IR&D projects annually. The rule requires the technical interchange to occur at the onset of the IR&D project, prior to generating any costs, for the annual IR&D costs to be considered allowable.

    C. Other Changes

    This final rule includes the following technical amendments:

    1. The proposed paragraph regarding contractors that do not meet the threshold of major contractor is renumbered as DFARS 231.205-18(c)(iv) in the final rule.

    2. At DFARS 242.771-3, the entity responsible for a regular method for communication is changed from the “Director, Defense Research and Engineering (USD(A&T)DDR&E)” to the “Office of the Assistant Secretary of Defense for Research and Engineering (OASD R&E).”

    III. Applicability to Contracts at or Below the Simplified Acquisition Threshold and for Commercial Items, Including Commercially Available Off-the-Shelf Items

    This rule does not add any new provisions or clauses or impact any existing provisions or clauses.

    IV. Executive Orders 12866 and 13563

    Executive Orders (E.O.s) 12866 and 13563 direct agencies to assess all costs and benefits of available regulatory alternatives and, if regulation is necessary, to select regulatory approaches that maximize net benefits (including potential economic, environmental, public health and safety effects, distributive impacts, and equity). E.O. 13563 emphasizes the importance of quantifying both costs and benefits, of reducing costs, of harmonizing rules, and of promoting flexibility. This is not a significant regulatory action and, therefore, was not subject to review under section 6(b) of E.O. 12866, Regulatory Planning and Review, dated September 30, 1993. This rule is not a major rule under 5 U.S.C. 804.

    V. Regulatory Flexibility Act

    A final regulatory flexibility analysis has been prepared consistent with the Regulatory Flexibility Act, 5 U.S.C. 601, et seq., and is summarized as follows:

    The objective of this final rule is to (1) ensure that both independent research and development (IR&D) performers and their potential DoD customers have sufficient awareness of each other's efforts and (2) provide industry with feedback on the relevance of proposed and completed IR&D work.

    There were no significant issues raised by the public in response to the initial regulatory flexibility analysis.

    DoD does not expect this final rule to have a significant economic impact on a substantial number of small entities, because DFARS 231.205-18(c)(iii) applies only to major contractors, which are defined as those whose covered segments allocated a total of more than $11 million in IR&D and bid and proposal costs to covered contracts during the preceding fiscal year. The final rule requires major contractors to communicate proposed new IR&D efforts to DoD personnel in a technical interchange prior to the initiation of such investments.

    This rule impacts existing reporting and recordkeeping requirements in a very minor way. Only one element is added to the existing reporting requirement: major contractors must include the name of the DoD employee with whom a technical interchange was held and the date of such interchange.

    There are no known significant alternatives to the rule. The rule applies to major contractors and, as such, will have minimal impact on small entities.

    VI. Paperwork Reduction Act

    The final rule affects the information collection requirements at Defense Federal Acquisition Regulation Supplement (DFARS) 231.205-18, currently approved under the Office of Management and Budget (OMB) Control Number 0704-0483, entitled “Independent Research and Development Technical Descriptions,” in accordance with the Paperwork Reduction Act (44 U.S.C. chapter 35); however, the impact of this rule is negligible. Currently, contractors are required to (1) report IR&D projects to DTIC using the DTIC's online IR&D database and (2) update these inputs at least annually and when the project is completed. This rule merely changes the web address for submission of this report and requires major contractors to include in the report the name of the DoD Government employee with whom a technical interchange was held and the date of such interchange.

    List of Subjects in 48 CFR Parts 231 and 242

    Government procurement.

    Jennifer L. Hawes, Editor, Defense Acquisition Regulations System.

    Therefore, 48 CFR parts 231 and 242 are amended as follows:

    1. The authority citation for 48 CFR parts 231 and 242 continues to read as follows:

    Authority: 41 U.S.C. 1303 and 48 CFR chapter 1.

    PART 231—CONTRACT COST PRINCIPLES AND PROCEDURES

    2. Amend section 231.205-18 by—

    a. Revising paragraph (c)(iii)(C);

    b. Redesignating paragraphs (c)(iv) and (v) as paragraphs (c)(v) and (vi), respectively; and

    c. Adding a new paragraph (c)(iv).

    The revision and addition read as follows:

    231.205-18 Independent research and development and bid and proposal costs.

    (c) * * *

    (iii) * * *

    (C) For annual IR&D costs to be allowable—

    (1) The IR&D projects generating the costs must be reported to the Defense Technical Information Center (DTIC) using the DTIC's online input form and instructions at http://www.defenseinnovationmarketplace.mil/;

    (2) The inputs must be updated at least annually and when the project is completed;

    (3) Copies of the input and updates must be made available for review by the cognizant administrative contracting officer (ACO) and the cognizant Defense Contract Audit Agency auditor to support the allowability of the costs; and

    (4) For IR&D projects initiated in the contractor's fiscal year 2017 and later, as a prerequisite for the subsequent determination of allowability, the contractor shall—

    (i) Engage in a technical interchange with a technical or operational DoD Government employee before IR&D costs are generated so that contractor plans and goals for IR&D projects benefit from the awareness of and feedback by a DoD Government employee who is informed of related ongoing and future potential interest opportunities. If the contractor does not have a point of contact for the technical interchange, the contractor may contact the Office of the Assistant Secretary of Defense for Research and Engineering (OASD R&E). Contact information for OASD R&E can be found at http://www.acq.osd.mil/rd/contacts/; and

    (ii) Use the online input form for IR&D projects reported to DTIC to document the technical interchange, which includes the name of the DoD Government employee and the date the technical interchange occurred.

    (iv) Contractors not meeting the threshold of a major contractor are encouraged to use the DTIC online input form to report IR&D projects to provide DoD with visibility into the technical content of the contractors' IR&D activities.

    PART 242—CONTRACT ADMINISTRATION AND AUDIT SERVICES
    242.771-3 [Amended]
    3. In section 242.771-3, amend paragraph (d) introductory text by removing “Director, Defense Research and Engineering (OUSD(AT&L)DDR&E)” and adding “Office of the Assistant Secretary of Defense for Research and Engineering (OASD R&E)” in its place.
    [FR Doc. 2016-26366 Filed 11-3-16; 8:45 am] BILLING CODE 5001-06-P
    DEPARTMENT OF DEFENSE

    Defense Acquisition Regulations System

    48 CFR Part 247

    [Docket DARS-2016-0036]

    RIN 0750-AJ09

    Defense Federal Acquisition Regulation Supplement: Contiguous United States (DFARS Case 2016-D005)

    AGENCY:

    Defense Acquisition Regulations System, Department of Defense (DoD).

    ACTION:

    Final rule.

    SUMMARY:

    DoD is issuing a final rule to amend the Defense Federal Acquisition Regulation Supplement (DFARS) to remove the acronym for contiguous United States.

    DATES:

    Effective November 4, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Ms. Julie Hammond, telephone 571-372-6174.

    SUPPLEMENTARY INFORMATION:

    I. Background

    DoD is amending the DFARS to remove the acronym for contiguous United States (CONUS). While the term “contiguous United States (CONUS)” is defined in Federal Acquisition Regulation (FAR) 2.101, the acronym is sometimes misinterpreted as “continental United States.” Spelling out the acronym in the DFARS will eliminate any confusion.

    II. Discussion and Analysis

    DFARS 247.301 is amended to update the reference to transportation guidance in DFARS Procedures, Guidance, and Information and, as a result, remove the acronym CONUS.

    DFARS 247.301-71 is amended to spell out “the contiguous United States” in lieu of CONUS.

    III. Applicability to Contracts at or Below the Simplified Acquisition Threshold (SAT) and for Commercial Items, Including Commercially Available Off-the-Shelf (COTS) Items

    This case does not add any new provisions or clauses or impact any existing provisions or clauses.

    IV. Publication of This Final Rule for Public Comment Is Not Required by Statute

    The statute that applies to the publication of the FAR is 41 U.S.C. 1707, entitled “Publication of Proposed Regulations.” Paragraph (a)(1) of the statute requires that a procurement policy, regulation, procedure, or form (including an amendment or modification thereof) be published for public comment if it relates to the expenditure of appropriated funds and has either a significant effect beyond the internal operating procedures of the agency issuing the policy, regulation, procedure, or form, or a significant cost or administrative impact on contractors or offerors. This final rule is not required to be published for public comment, because it merely spells out the term “contiguous United States” in place of the acronym.

    V. Executive Orders 12866 and 13563

    Executive Orders (E.O.s) 12866 and 13563 direct agencies to assess all costs and benefits of available regulatory alternatives and, if regulation is necessary, to select regulatory approaches that maximize net benefits (including potential economic, environmental, public health and safety effects, distributive impacts, and equity). E.O. 13563 emphasizes the importance of quantifying both costs and benefits, of reducing costs, of harmonizing rules, and of promoting flexibility. This is not a significant regulatory action and, therefore, was not subject to review under section 6(b) of E.O. 12866, Regulatory Planning and Review, dated September 30, 1993. This rule is not a major rule under 5 U.S.C. 804.

    VI. Regulatory Flexibility Act

    The Regulatory Flexibility Act does not apply to this rule because this final rule does not constitute a significant DFARS revision within the meaning of FAR 1.501-1, and 41 U.S.C. 1707 does not require publication for public comment.

    VII. Paperwork Reduction Act

    The rule does not contain any information collection requirements that require the approval of the Office of Management and Budget under the Paperwork Reduction Act (44 U.S.C. chapter 35).

    List of Subjects in 48 CFR Part 247

    Government procurement.

    Jennifer L. Hawes, Editor, Defense Acquisition Regulations System.

    Therefore, 48 CFR part 247 is amended as follows:

    PART 247—TRANSPORTATION

    1. The authority citation for 48 CFR part 247 continues to read as follows:

    Authority: 41 U.S.C. 1303 and 48 CFR chapter 1.

    247.301 [Amended]
    2. In section 247.301, remove the phrase “that require shipments to destinations outside CONUS”.
    247.301-71 [Amended]
    3. In section 247.301-71, remove “outside CONUS” and add “outside the contiguous United States” in its place.
    [FR Doc. 2016-26367 Filed 11-3-16; 8:45 am] BILLING CODE 5001-06-P
    DEPARTMENT OF DEFENSE

    Defense Acquisition Regulations System

    48 CFR Parts 212 and 252

    [Docket DARS-2016-0015]

    RIN 0750-AI93

    Defense Federal Acquisition Regulation Supplement: Pilot Program on Acquisition of Military Purpose Nondevelopmental Items (DFARS Case 2016-D014)

    AGENCY:

    Defense Acquisition Regulations System, Department of Defense (DoD).

    ACTION:

    Final rule.

    SUMMARY:

    DoD is adopting as final, with changes, an interim rule amending the Defense Federal Acquisition Regulation Supplement (DFARS) to implement a section of the National Defense Authorization Act for Fiscal Year 2016 that changes the criteria for the pilot program for acquisition of military purpose nondevelopmental items.

    DATES:

    Effective November 4, 2016.

    FOR FURTHER INFORMATION CONTACT:

    Ms. Carrie Moore, telephone 571-372-6093.

    SUPPLEMENTARY INFORMATION:

    I. Background

    DoD published an interim rule in the Federal Register at 81 FR 42557 on June 30, 2016, to implement section 892 of the National Defense Authorization Act (NDAA) for Fiscal Year (FY) 2016 (Pub. L. 114-92). Section 892 removes the requirements under the pilot program for the use of competitive procedures and for awards to be made to nontraditional defense contractors. Section 892 also increases the threshold for use of the pilot program to contracts up to $100 million. Two respondents submitted public comments in response to the interim rule.

    II. Discussion and Analysis

    DoD reviewed the public comments in the development of the final rule. A discussion of the comments received and the changes made to the rule as a result of those comments is provided, as follows:

    A. Summary of Significant Changes From the Interim Rule

    One change is made in the final rule as a result of a public comment. The prescription at DFARS 212.7103 for DFARS provision 252.212-7002, Pilot Program for Acquisition of Military-Purpose Nondevelopmental Items, is revised to clarify its use in solicitations when use of the pilot program is planned and the applicability criteria are met.

    B. Analysis of Public Comments

    1. General

    Comment: One respondent recommended rewording the prescription at DFARS 212.7103 to clarify proper use of the provision.

    Response: The prescription at DFARS 212.7103 is revised to state, “Use the provision at 252.212-7002, Pilot Program for Acquisition of Military-Purpose Nondevelopmental Items, in solicitations when use of the pilot program is planned and the applicability criteria of 212.7102-1 are met.”

    2. Implementation

    Comment: One respondent suggested revising the text at DFARS 212.7102-1(d) to capture the removal of the requirement to use competitive procedures under the pilot program by adding, “Each contract entered into under the pilot program shall be exempt from the requirement for the use of competitive procedures.”

    Response: The respondent's suggestion is outside the scope of the authority provided by section 892. The statute removes the requirement that each contract under the pilot program be awarded using the competitive procedures at 10 U.S.C. chapter 137. Section 892 does not provide any further exemptions to the competition requirements outlined in the FAR and DFARS. The interim rule accomplishes the goal of section 892 by removing from the applicability criteria for the pilot program at DFARS 212.7102-1 the requirement to award using competitive procedures. No additional text is required.

    III. Applicability to Contracts at or Below the Simplified Acquisition Threshold (SAT) and for Commercial Items, Including Commercially Available Off-the-Shelf (COTS) Items

    The requirements of section 892 of the NDAA for FY 2016 do not apply to contracts at or below the SAT. Additionally, while FAR part 12 commercial procedures may be used to acquire military purpose nondevelopmental items under this pilot program, the rule does not apply to the acquisition of commercial items, including COTS items.

    IV. Executive Orders 12866 and 13563

    Executive Orders (E.O.s) 12866 and 13563 direct agencies to assess all costs and benefits of available regulatory alternatives and, if regulation is necessary, to select regulatory approaches that maximize net benefits (including potential economic, environmental, public health and safety effects, distributive impacts, and equity). E.O. 13563 emphasizes the importance of quantifying both costs and benefits, of reducing costs, of harmonizing rules, and of promoting flexibility. This is not a significant regulatory action and, therefore, was not subject to review under section 6(b) of E.O. 12866, Regulatory Planning and Review, dated September 30, 1993. This rule is not a major rule under 5 U.S.C. 804.

    V. Regulatory Flexibility Act

    A final regulatory flexibility analysis (FRFA) has been prepared consistent with the Regulatory Flexibility Act, 5 U.S.C. 601, et seq. The FRFA is summarized as follows:

    This final rule amends the Defense Federal Acquisition Regulation Supplement (DFARS) to implement section 892 of the National Defense Authorization Act for Fiscal Year 2016. The objective of the rule is to modify the criteria for the pilot program at DFARS 212.71, Pilot Program for the Acquisition of Military Purpose Nondevelopmental Items, to increase the opportunities for use of the program. The rule removes the criteria that contracts must be awarded to “nontraditional defense contractors” and awards must be made using competitive procedures. The rule also increases the dollar threshold for the program to allow use on procurements up to $100 million.

    There were no significant issues raised by the public in response to the initial regulatory flexibility analysis provided in the interim rule.

    The changes to the pilot program will have a positive economic impact on small businesses that did not meet the definition of “nontraditional defense contractors” and have developed products that could be applied to a military purpose. According to data available in the Federal Procurement Data System for FY 2015, 6,514 unique small businesses were awarded a DoD contract in excess of the certified cost or pricing data threshold ($750,000) and therefore did not meet the definition of “nontraditional defense contractor.” Prior to the changes made by this rule, these small businesses were not eligible for an award under the pilot program. These small businesses will now be able to participate in the pilot program if they are developing a military purpose nondevelopmental item.

    This rule does not impose any new reporting, recordkeeping, or other compliance requirements. No significant alternatives were identified during the development of this rule.

    VI. Paperwork Reduction Act

    The rule does not contain any information collection requirements that require the approval of the Office of Management and Budget under the Paperwork Reduction Act (44 U.S.C. chapter 35).

    List of Subjects in 48 CFR Parts 212 and 252

    Government procurement.

    Jennifer L. Hawes, Editor, Defense Acquisition Regulations System.

    Accordingly, the interim rule amending 48 CFR parts 212 and 252, which was published in the Federal Register at 81 FR 42557 on June 30, 2016, is adopted as a final rule with the following change:

    PART 212—ACQUISITION OF COMMERCIAL ITEMS

    1. The authority citation for part 212 continues to read as follows:

    Authority: 41 U.S.C. 1303 and 48 CFR chapter 1.

    2. Revise section 212.7103 to read as follows:
    212.7103 Solicitation provision.

    Use the provision at 252.212-7002, Pilot Program for Acquisition of Military-Purpose Nondevelopmental Items, in solicitations when use of the pilot program is planned and the applicability criteria of 212.7102-1 are met.

    [FR Doc. 2016-26368 Filed 11-3-16; 8:45 am] BILLING CODE 5001-06-P
    DEPARTMENT OF DEFENSE

    Defense Acquisition Regulations System

    48 CFR Parts 215 and 252

    [Docket DARS-2016-0004]

    RIN 0750-AI84

    Defense Federal Acquisition Regulation Supplement: Independent Research and Development Expenses (DFARS Case 2016-D017)

    AGENCY:

    Defense Acquisition Regulations System, Department of Defense (DoD).

    ACTION:

    Proposed rule.

    SUMMARY:

    DoD is proposing to amend the Defense Federal Acquisition Regulation Supplement (DFARS) to ensure that substantial future independent research and development expenses used as a means to reduce evaluated bid prices are evaluated in a uniform way during competitive source selections.

    DATES:

    Comments on the proposed rule should be submitted in writing to the address shown below on or before January 3, 2017, to be considered in the formation of a final rule.

    ADDRESSES:

    Submit comments identified by DFARS Case 2016-D017, using any of the following methods:

    Federal eRulemaking Portal: http://www.regulations.gov. Search for “DFARS Case 2016-D017.” Select “Comment Now” and follow the instructions provided to submit a comment. Please include “DFARS Case 2016-D017” on any attached documents.

    Email: [email protected]. Include DFARS Case 2016-D017 in the subject line of the message.

    Fax: 571-372-6094.

    Mail: Defense Acquisition Regulations System, Attn: Mr. Mark Gomersall, OUSD(AT&L)DPAP/DARS, Room 3B941, 3060 Defense Pentagon, Washington, DC 20301-3060.

    Comments received generally will be posted without change to http://www.regulations.gov, including any personal information provided. To confirm receipt of your comment(s), please check www.regulations.gov, approximately two to three days after submission to verify posting (except allow 30 days for posting of comments submitted by mail).

    FOR FURTHER INFORMATION CONTACT:

    Mr. Mark Gomersall, telephone 571-372-6099.

    SUPPLEMENTARY INFORMATION:

    I. Background

    As expressed in the “Implementation Directive for Better Buying Power 3.0—Achieving Dominant Capabilities through Technical Excellence and Innovation,” dated April 9, 2015, the Under Secretary of Defense for Acquisition, Technology, and Logistics noted a concern when “promised future IRAD [independent research and development] expenditures are used to substantially reduce the bid price on competitive procurements. In these cases, development price proposals are reduced by using a separate source of government funding (allowable IRAD overhead expenses spread across the total business) to gain a price advantage in a specific competitive bid. This is not the intended purpose of making IRAD an allowable cost.”

    DoD published an advance notice of proposed rulemaking (ANPR) in the Federal Register at 81 FR 6488 on February 8, 2016, to seek information to assist in the development of a revision to the DFARS to ensure that substantial future independent research and development (IR&D) expenses, used as a means to reduce evaluated bid prices, are evaluated in a uniform way during competitive source selections. A public meeting was held on March 3, 2016, to hear the views of interested parties.

    II. Discussion and Analysis

    DoD is proposing to amend the DFARS to require contracting officers to adjust the total evaluated price of major defense acquisition programs and major automated information systems proposals, for evaluation purposes only, to include the amount by which the offerors propose that future independent research and development investments reduce the price of the proposals.
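
    The arithmetic of the proposed adjustment is straightforward: the total evaluated price equals the proposed price plus the claimed reduction attributable to future Government-reimbursed IR&D. The following is a minimal illustrative sketch only, not part of the proposed regulatory text; the function name and dollar amounts are hypothetical.

```python
def total_evaluated_price(proposed_price: float, proposed_irad_reduction: float) -> float:
    """Return the total evaluated price used for source selection purposes only.

    The offeror's proposed price is increased, for evaluation purposes, by the
    amount the offeror claims future Government-reimbursed IR&D investments
    will reduce its price; the proposed contract price itself is unchanged.
    """
    return proposed_price + proposed_irad_reduction


# Hypothetical example: a $90 million bid that relies on $10 million of
# promised future IR&D would be evaluated as if priced at $100 million.
print(total_evaluated_price(90_000_000, 10_000_000))  # 100000000
```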

    III. Applicability to Contracts at or Below the Simplified Acquisition Threshold and for Commercial Items, Including Commercially Available Off-the-Shelf Items

    This rule proposes to create a new provision: DFARS 252.215-70XX, Notification of Inclusion of Evaluation Criteria for Reliance Upon Future Government-Reimbursed Independent Research and Development Investments. DoD does not intend to apply this provision to contracts at or below the simplified acquisition threshold or to acquisitions of commercial items, including commercially available off-the-shelf items.

    IV. Executive Orders 12866 and 13563

    Executive Orders (E.O.s) 12866 and 13563 direct agencies to assess all costs and benefits of available regulatory alternatives and, if regulation is necessary, to select regulatory approaches that maximize net benefits (including potential economic, environmental, public health and safety effects, distributive impacts, and equity). E.O. 13563 emphasizes the importance of quantifying both costs and benefits, of reducing costs, of harmonizing rules, and of promoting flexibility. This is not a significant regulatory action and, therefore, was not subject to review under section 6(b) of E.O. 12866, Regulatory Planning and Review, dated September 30, 1993. This rule is not a major rule under 5 U.S.C. 804.

    V. Regulatory Flexibility Act

    DoD does not expect this proposed rule to have a significant economic impact on a substantial number of small entities within the meaning of the Regulatory Flexibility Act, 5 U.S.C. 601, et seq. However, an initial regulatory flexibility analysis has been prepared and is summarized as follows:

    DoD is proposing to amend the DFARS to require contracting officers to adjust the total evaluated price of major defense acquisition programs and major automated information systems proposals, for evaluation purposes only, to include the amount by which the offerors propose that future independent research and development investments reduce the price of the proposals.

    The objective of this rule is to ensure that substantial future independent research and development expenses used as a means to reduce evaluated bid prices are evaluated in a uniform way during competitive source selections.

    The rule has limited application and will apply only to major defense acquisition programs (as defined in 10 U.S.C. 2430) and major automated information systems acquisitions (as defined in 10 U.S.C. 2445a). This rule should not impact small entities, since major defense acquisition programs and major automated information systems acquisition policies normally apply to large contractors, because the cost, magnitude, and production requirements of such programs are generally beyond the capability or capacity of small entities as prime contractors.

    There is no change to reporting and recordkeeping as a result of this rule. The rule does not duplicate, overlap, or conflict with any other Federal rules. There are no known significant alternative approaches to the rule that would meet the requirements.

    DoD invites comments from small business concerns and other interested parties on the expected impact of this rule on small entities.

    DoD will also consider comments from small entities concerning the existing regulations in subparts affected by this rule in accordance with 5 U.S.C. 610. Interested parties must submit such comments separately and should cite 5 U.S.C. 610 (DFARS Case 2016-D017) in correspondence.

    VI. Paperwork Reduction Act

    The rule does not contain any information collection requirements that require the approval of the Office of Management and Budget under the Paperwork Reduction Act (44 U.S.C. chapter 35).

    List of Subjects in 48 CFR Parts 215 and 252

    Government procurement.

    Jennifer L. Hawes, Editor, Defense Acquisition Regulations System.

    Therefore, 48 CFR parts 215 and 252 are proposed to be amended as follows:

    1. The authority citation for 48 CFR parts 215 and 252 continues to read as follows:

    Authority: 41 U.S.C. 1303 and 48 CFR chapter 1.

    PART 215—CONTRACTING BY NEGOTIATION

    2. In section 215.305, add paragraph (a)(1) to read as follows:
    215.305 Proposal evaluation.

    (a)(1) Cost or price evaluation. For major defense acquisition programs and major automated information systems in a development phase, when an offeror proposes a cost or price that is reduced due to reliance upon future Government-reimbursed independent research and development projects, the contracting officer shall, for evaluation purposes only, adjust the total evaluated cost or price of the proposal to include the amount by which such investments reduce the price of the proposal.

    3. Amend section 215.408 by—

    a. Redesignating paragraphs (2) through (5) as paragraphs (3) through (6), respectively; and

    b. Adding a new paragraph (2) to read as follows:
    215.408 Solicitation provisions and contract clauses.

    (2) Use the provision at 252.215-70XX, Notification of Inclusion of Evaluation Criteria for Reliance Upon Future Government-Reimbursed Independent Research and Development Investments, in all competitive solicitations for major defense acquisition programs (as defined in 10 U.S.C. 2430) and major automated information systems acquisitions (as defined in 10 U.S.C. 2445a) in a development phase.

    PART 252—SOLICITATION PROVISIONS AND CONTRACT CLAUSES

    4. Add section 252.215-70XX to read as follows:
    252.215-70XX Notification of Inclusion of Evaluation Criteria for Reliance Upon Future Government-Reimbursed Independent Research and Development Investments.

    As prescribed in 215.408(2), use the following provision:

    Notification of Inclusion of Evaluation Criteria for Reliance Upon Future Government-Reimbursed Independent Research and Development Investments (Date)

    (a) This solicitation includes price evaluation criteria that consider the Offeror's intended use of future Government-reimbursed independent research and development (IR&D) projects if the Offeror proposes a cost or price that is reduced due to reliance upon expected future Government-reimbursed IR&D projects.

    (b) If the Offeror, in the performance of any contract resulting from this solicitation, intends to use IR&D to meet the contract requirements, the Offeror shall include documentation in its price proposal to support this proposed approach.

    (c) For evaluation purposes only, the Contracting Officer will adjust the Offeror's total evaluated cost or price to include the amount that such future IR&D investments reduce the price of the proposal.

    (End of provision)

    [FR Doc. 2016-26369 Filed 11-3-16; 8:45 am] BILLING CODE 5001-06-P
    DEPARTMENT OF DEFENSE

    Defense Acquisition Regulations System

    48 CFR Parts 202, 215, 225, and 252

    [Docket DARS-2015-0027]

    RIN 0750-AI59

    Defense Federal Acquisition Regulation Supplement: Offset Costs (DFARS Case 2015-D028)

    AGENCY:

    Defense Acquisition Regulations System, Department of Defense (DoD).

    ACTION:

    Proposed rule.

    SUMMARY:

    DoD is issuing a proposed rule amending the Defense Federal Acquisition Regulation Supplement (DFARS) to implement a section of the National Defense Authorization Act for Fiscal Year 2016 related to costs associated with indirect offsets under foreign military sales agreements.

    DATES:

    Comments on the proposed rule should be submitted in writing to the address shown below on or before January 3, 2017, to be considered in the formation of a final rule.

    ADDRESSES:

    Submit comments identified by DFARS Case 2015-D028, using any of the following methods:

    Federal eRulemaking Portal: http://www.regulations.gov. Search for “DFARS Case 2015-D028.” Select “Comment Now” and follow the instructions provided to submit a comment. Please include “DFARS Case 2015-D028” on any attached documents.

    Email: [email protected]. Include DFARS Case 2015-D028 in the subject line of the message.

    Fax: 571-372-6094.

    Mail: Defense Acquisition Regulations System, Attn: Mr. Mark Gomersall, OUSD(AT&L)DPAP/DARS, Room 3B941, 3060 Defense Pentagon, Washington, DC 20301-3060.

    Comments received generally will be posted without change to http://www.regulations.gov, including any personal information provided. To confirm receipt of your comment(s), please check www.regulations.gov, approximately two to three days after submission to verify posting (except allow 30 days for posting of comments submitted by mail).

    FOR FURTHER INFORMATION CONTACT:

    Mr. Mark Gomersall, telephone 571-372-6099.

    SUPPLEMENTARY INFORMATION:

    I. Background

    This proposed rule expands on interim rule guidance and incorporates the requirements of section 812 of the National Defense Authorization Act (NDAA) for Fiscal Year (FY) 2016.

    DoD published an interim rule in the Federal Register (80 FR 31309) on June 2, 2015. The comment period closed on August 3, 2015. The interim rule revised DFARS 225.7303-2, Cost of Doing Business with a Foreign Government or an International Organization, by providing guidelines to contracting officers when an indirect offset is a condition of a foreign military sales (FMS) acquisition. Specifically, the interim rule set forth that all offset costs that involve benefits provided by the U.S. defense contractor to the FMS customer that are unrelated to the item being purchased under the Letter of Offer and Acceptance (LOA) (indirect offset costs) are deemed reasonable for purposes of FAR part 31 with no further analysis necessary on the part of the contracting officer, provided that the U.S. defense contractor submits to the contracting officer a signed offset agreement or other documentation showing that the FMS customer has made the provision of an indirect offset of a certain dollar value a condition of the FMS acquisition. FMS customers are placed on notice through the LOA that indirect offset costs are deemed reasonable without any further analysis by the contracting officer.

    II. Discussion and Analysis

    DoD reviewed the public comments submitted in response to the interim rule in the development of this proposed rule. A discussion of the comments and the changes made to the rule as a result of those comments is provided, as follows:

    A. Summary of Significant Changes

    Section 812 of the NDAA for FY 2016 amended 10 U.S.C. 2306a(b)(1) to state that submission of certified cost or pricing data shall not be required in the case of a contract, a subcontract, or modification of a contract or subcontract to the extent such data—

    (i) Relates to an offset agreement in connection with a contract for the sale of a weapon system or defense-related item to a foreign country or foreign firm; and

    (ii) Does not relate to a contract or subcontract under the offset agreement for work performed in such foreign country or by such foreign firm that is directly related to the weapon system or defense-related item being purchased under the contract.

    This proposed rule amends DFARS 215.403-1(b), Exceptions to Certified Cost or Pricing Data Requirements, and adds DFARS clause 252.215-70XX, Requirements for Certified Cost or Pricing Data for Foreign Military Sales Indirect Offset Agreements, to incorporate the revisions implemented in section 812.

    Additionally, this proposed rule relocates the language at DFARS Procedures, Guidance, and Information (PGI) 225.7303-2(a)(3) into DFARS 225.7303-2(a)(3) for clarity. In response to public comments, the rule also adds (1) definitions of “offset” and “offset costs” at DFARS 202.101 and (2) the appropriate reference to Federal Acquisition Regulation (FAR) part 15 in DFARS 225.7303-2(a)(3), and deletes the phrase “of a certain dollar value” from that paragraph.

    B. Analysis of Public Comments

    Comment: One respondent is supportive of the U.S. Government's goal to add clarity on the evaluation of offset costs within an FMS contract, and concurs with the U.S. Government's determination in this rule that indirect offsets are to be deemed reasonable for the purposes of FAR parts 15 and 31.

    Response: Noted.

    Comment: One respondent recommended that the determination of reasonableness in this rule be made applicable to all offset agreements, both “direct” and “indirect.”

    Response: DFARS 225.7301(b) requires that the U.S. Government conduct FMS acquisitions under the same acquisition and contract management procedures used for other defense acquisitions. This requires the contracting officer to adhere to FAR regulations concerning the negotiation of contracts and subcontracts (FAR part 15) and contract cost principles (FAR part 31), and thus attest to the reasonableness of FMS contract prices. Contracting officers must follow these regulations even though no DoD-appropriated funds are being used to pay for the effort. While DoD contracting officers have no insight into the pricing of the indirect offset, and shall not encourage, enter directly into, or commit U.S. companies to any offset arrangement in connection with the sale of defense goods or services to foreign governments, it is reasonable to maintain the requirement that contracting officers determine that prices are fair and reasonable for direct offsets, as they directly tie to the FMS end item(s).

    Comment: One respondent recommended that the rule include definitions of direct and indirect offsets. The respondent recommended that the DFARS define indirect offset as “an offset transaction unrelated to the article(s) or service(s) exported or to be exported pursuant to the military export sales agreement.”

    Response: A definition of offsets is provided at DFARS 202.101 for clarity.

    Comment: A number of respondents suggested making the rule applicable to FAR part 15, as well as FAR part 31.

    Response: The rule is clarified to state that indirect offset costs are deemed reasonable for purposes of FAR part 15 as well as FAR part 31.

    Comment: One respondent requested that the rule clarify what forms of documentation will be acceptable to the contracting officer. Frequently the contractor will be able to document the legal, contractual or policy requirement for offsets (e.g., published guidelines) and infer the dollar value. However, a signed, specific offset agreement rarely predates the LOA. Further, a country's offset guidelines may allow for both direct and indirect projects, but the defense contractor and foreign government might not decide on the specific mix of direct versus indirect projects until after the LOA is signed. As such, this requirement could effectively negate much of the benefit of the rule. The respondent suggested that the rule clarify acceptable documentation as a “signed offset agreement or other documentation, which may include, but is not limited to, the FMS customer's offset guidelines, requirements, regulations or law, policy, or historical requirements.”

    Response: While the costs associated with such indirect offset agreements are deemed reasonable for purposes of FAR parts 15 and 31 with no further analysis necessary on the part of the contracting officer, the U.S. defense contractor must still provide evidence of a signed offset agreement or other documentation showing that the FMS customer has made the provision of an indirect offset a condition of the FMS acquisition to support this determination. While this rule does not define the specific documentation required, such documentation must support the specific FMS acquisition.

    Comment: One respondent stated that often the type of offset projects to be implemented will not yet be specified, and the dollar value associated with an offset budget in an FMS contract is only an estimate. The respondent recommended that the rule be revised to clarify how contracting officers will consider offset costs when the exact nature and value of the individual projects that will help fulfill the overall offset obligation remains to be negotiated and finalized between the contractor and the foreign customer at the time of submission of the proposal.

    Response: This is precisely why this rule is necessary. DoD contracting officers are not provided the information necessary to negotiate cost or price of the indirect offsets, particularly with respect to price reasonableness determinations. Therefore, indirect offset costs are deemed reasonable for purposes of FAR parts 15 and 31 with no further analysis necessary on the part of the contracting officer.

    Comment: One respondent suggested that a sentence stating that “if the FMS customer requires additional information on offsets, they should discuss directly with the seller” be inserted to emphasize that all offset obligations/projects are negotiated between the contractor and the foreign customer.

    Response: Because indirect offset costs are deemed fair and reasonable without further analysis, DFARS 225.7303-2(a)(3) includes a statement that FMS customers are placed on notice through the LOA that indirect offset costs are deemed reasonable without any further analysis by the contracting officer.

    Comment: One respondent stated that by deeming indirect offset costs to be reasonable, the rule appears to conflict with FAR 31.201-3(a), which states, “No presumption of reasonableness shall be attached to the incurrence of costs by a contractor.” The apparent conflicting language may create confusion in the field as contracting officers attempt to execute the FAR and DFARS rules and guidance regarding reasonableness. The respondent recommended amending FAR 31.201-3(a) to acknowledge the existence of a DFARS exception to the rule of no presumption of reasonableness with respect to indirect offset costs.

    Response: It is unnecessary and inappropriate to amend the FAR to acknowledge the existence of DFARS supplementary language. The FAR System consists of the FAR, which is the primary document, and agency acquisition regulations that implement or supplement the FAR. The DFARS implements or supplements the FAR to incorporate DoD policies, procedures, contract clauses, solicitation provisions, and forms that govern the contracting process or otherwise control the relationship between DoD and contractors or prospective contractors. To include a FAR reference for each occurrence of an agency supplement to the FAR would be unwieldy. Further, since this is a DFARS rule, making such a reference in the FAR would be out of scope for this rule.

    Comment: One respondent questioned whether the contractor's costs associated with administering offset agreements are also deemed reasonable for the purposes of FAR part 31 with no further analysis by the contracting officer.

    Response: Unlike the specific indirect offset costs, contracting officers do have insight into the administration costs associated with direct and indirect offset agreements. Therefore, costs associated with administering indirect offset agreements are not deemed reasonable without further analysis under this rule.

    Comment: One respondent stated that offset agreements often include values associated with “offset credits” that may or may not be representative of the costs of the supplies or services being acquired or performed. The respondent suggested clarifying the meaning of the term “certain dollar value” and questioned whether that term refers to the “offset credit” value that is included in the offset agreement, or whether the offset agreement needs to set out the anticipated cost of the actual supplies or services being contracted for under the FMS contract.

    Response: The phrase “of a certain dollar value” has been removed to clarify that the documentation need only indicate the existence of an indirect offset agreement as a condition of an FMS acquisition.

    III. Applicability to Contracts at or Below the Simplified Acquisition Threshold and for Commercial Items, Including Commercially Available Off-the-Shelf Items

    This rule proposes to create a new clause: DFARS 252.215-70XX, Requirements for Certified Cost or Pricing Data for Foreign Military Sales Indirect Offset Agreements. DoD does not intend to apply this clause to contracts at or below the simplified acquisition threshold or to acquisitions of commercial items, including commercially available off-the-shelf items.

    IV. Executive Orders 12866 and 13563

    Executive Orders (E.O.s) 12866 and 13563 direct agencies to assess all costs and benefits of available regulatory alternatives and, if regulation is necessary, to select regulatory approaches that maximize net benefits (including potential economic, environmental, public health and safety effects, distributive impacts, and equity). E.O. 13563 emphasizes the importance of quantifying both costs and benefits, of reducing costs, of harmonizing rules, and of promoting flexibility. This is not a significant regulatory action and, therefore, was not subject to review under section 6(b) of E.O. 12866, Regulatory Planning and Review, dated September 30, 1993. This rule is not a major rule under 5 U.S.C. 804.

    V. Regulatory Flexibility Act

    DoD does not expect this rule to have a significant economic impact on a substantial number of small entities within the meaning of the Regulatory Flexibility Act, 5 U.S.C. 601, et seq. However, an initial regulatory flexibility analysis has been performed, and is summarized as follows:

    This rule amends the DFARS to clarify requirements related to indirect offset costs associated with Foreign Military Sales offset agreements.

    The objective of this rule is to expand on the DFARS interim rule published in the Federal Register (80 FR 31309) on June 2, 2015, and implement the requirements of section 812 of the National Defense Authorization Act for Fiscal Year 2016.

    DoD does not expect this rule to have a significant economic impact on a substantial number of small entities within the meaning of the Regulatory Flexibility Act, 5 U.S.C. 601, et seq., because indirect offset agreements are not incorporated into FMS contracts with small entities, and the DFARS amendments merely clarify that contracting officers are not responsible for making a determination of price reasonableness for indirect offset agreements over which they have no purview.

    This rule does not add any reporting or recordkeeping requirements. The rule does not duplicate, overlap, or conflict with any other Federal rules. There are no known significant alternatives to this rule.

    DoD invites comments from small business concerns and other interested parties on the expected impact of this rule on small entities.

    DoD will also consider comments from small entities concerning the existing regulations in subparts affected by this rule in accordance with 5 U.S.C. 610. Interested parties must submit such comments separately and should cite 5 U.S.C. 610 (DFARS Case 2015-D028) in correspondence.

    VI. Paperwork Reduction Act

    The rule does not contain any information collection requirements that require the approval of the Office of Management and Budget under the Paperwork Reduction Act (44 U.S.C. chapter 35).

    List of Subjects in 48 CFR Parts 202, 215, 225, and 252

    Government procurement.

    Jennifer L. Hawes, Editor, Defense Acquisition Regulations System.

    Therefore, 48 CFR parts 202, 215, 225, and 252 are proposed to be amended as follows:

    1. The authority citation for parts 202, 215, 225, and 252 continues to read as follows:

    Authority: 41 U.S.C. 1303 and 48 CFR chapter 1.

    PART 202—DEFINITIONS OF WORDS AND TERMS

    2. In section 202.101, add in alphabetical order definitions of “offset” and “offset costs” to read as follows:
    202.101 Definitions.

    Offset means a benefit or obligation agreed to by a contractor and a foreign government or international organization as an inducement or condition to purchase supplies or services pursuant to a foreign military sale (FMS). There are two types of offsets: Direct offsets and indirect offsets.

    (1) A direct offset involves benefits or obligations, including supplies or services, that are related to the item being purchased. For example, as a condition of a foreign military sale, the contractor may require or agree to permit the customer to produce in its country certain components or subsystems of the item being sold. Generally, direct offsets must be performed within a specified period, because they are integral to the deliverable of the FMS contract.

    (2) An indirect offset involves benefits, including supplies or services, that are unrelated to the item being purchased. For example, as a condition of a foreign military sale, the contractor may agree to purchase certain manufactured products, agricultural commodities, raw materials, or services required by the FMS customer, or may agree to build a school or road. Indirect offsets may be accomplished without a clearly defined period of performance.

    Offset costs means the costs to the contractor of providing any direct or indirect offsets required (explicitly or implicitly) as a condition of a foreign military sale.

PART 215—CONTRACTING BY NEGOTIATION

3. In section 215.403-1, revise paragraph (b) to read as follows:
    215.403-1 Prohibition on obtaining certified cost or pricing data (10 U.S.C. 2306a and 41 U.S.C. chapter 35).

    (b) Exceptions to certified cost or pricing data requirements. (i) Follow the procedures at PGI 215.403-1(b).

    (ii) Submission of certified cost or pricing data shall not be required in the case of a contract, subcontract, or modification of a contract or subcontract to the extent such data relates to an indirect offset.

    4. In section 215.408, add paragraph (6) to read as follows:
    215.408 Solicitation provisions and contract clauses.

    (6) Requirements for certified cost or pricing data for foreign military sales offset agreements. Use the clause at 252.215-70XX, Requirements for Certified Cost or Pricing Data for Foreign Military Sales Indirect Offset Agreements, in solicitations and contracts that contain the provision at FAR 52.215-20, Requirements for Certified Cost or Pricing Data and Data Other Than Certified Cost or Pricing Data, when it is reasonably certain that—

    (i) The contract is expected to include costs associated with an indirect offset; and

    (ii) The submission of certified cost or pricing data or data other than certified cost or pricing data will be required.

    PART 225—FOREIGN ACQUISITION
    225.7301 [Amended]
5. Amend section 225.7301 in paragraph (a) by removing “defense articles” and adding “supplies” in its place.

6. In section 225.7303-2, revise paragraph (a)(3) to read as follows:
    225.7303-2 Cost of doing business with a foreign government or an international organization.

    (a) * * *

    (3) Offsets. For additional information see 225.7306.

(i) An offset agreement is the contractual arrangement between the FMS customer and the U.S. defense contractor that identifies the offset obligation imposed by the FMS customer that has been accepted by the U.S. defense contractor as a condition of the FMS customer's purchase. These agreements are distinct from, and independent of, the LOA and the FMS contract. Further information about offsets and LOAs may be found in the Defense Security Cooperation Agency (DSCA) Security Assistance Management Manual (DSCA 5105.38-M), chapter 6, paragraph 6.3.9 (http://samm.dsca.mil/chapter/chapter-6).

    (ii) A U.S. defense contractor may recover all costs incurred for offset agreements with a foreign government or international organization if the LOA is financed wholly with foreign government or international organization customer cash or repayable foreign military finance credits.

    (iii) The U.S. Government assumes no obligation to satisfy or administer the offset agreement or to bear any of the associated costs.

    (iv) Indirect offset costs are deemed reasonable for purposes of FAR parts 15 and 31 with no further analysis necessary on the part of the contracting officer, provided that the U.S. defense contractor submits to the contracting officer a signed offset agreement or other documentation showing that the FMS customer has made the provision of an indirect offset a condition of the FMS acquisition. FMS customers are placed on notice through the LOA that indirect offset costs are deemed reasonable without any further analysis by the contracting officer.

PART 252—SOLICITATION PROVISIONS AND CONTRACT CLAUSES

7. Add section 252.215-70XX to read as follows:
    252.215-70XX Requirements for Certified Cost or Pricing Data for Foreign Military Sales Indirect Offset Agreements.

As prescribed in 215.408(6), use the following clause:

    Requirements for Certified Cost or Pricing Data for Foreign Military Sales Indirect Offset Agreements (Date)

    (a) Definition. As used in this clause—

    Offset means a benefit or obligation agreed to by a contractor and a foreign government or international organization as an inducement or condition to purchase supplies or services pursuant to a foreign military sale (FMS). There are two types of offsets: Direct offsets and indirect offsets.

    (1) A direct offset involves benefits or obligations, including supplies or services, that are related to the item being purchased. For example, as a condition of a foreign military sale, the contractor may require or agree to permit the customer to produce in its country certain components or subsystems of the item being sold. Generally, direct offsets must be performed within a specified period because they are integral to the deliverable of the FMS contract.

    (2) An indirect offset involves benefits, including supplies or services, that are unrelated to the item being purchased. For example, as a condition of a foreign military sale the contractor may agree to purchase certain manufactured products, agricultural commodities, raw materials, or services required by the FMS customer, or may agree to build a school or road. Indirect offsets may be accomplished without a clearly defined period of performance.

    (b) Exceptions from certified cost or pricing data requirements. Notwithstanding the requirements of Federal Acquisition Regulation (FAR) 52.215-20, Requirements for Certified Cost or Pricing Data and Data Other Than Certified Cost or Pricing Data, in the case of this contract or a subcontract, and FAR 52.215-21, Requirements for Certified Cost or Pricing Data and Data Other Than Certified Cost or Pricing Data—Modifications, in the case of modification of this contract or a subcontract, submission of certified cost or pricing data will not be required to the extent such data relates to an indirect offset (10 U.S.C. 2306a(b)(1)).

    (End of clause)

[FR Doc. 2016-26377 Filed 11-3-16; 8:45 am]
BILLING CODE 5001-06-P